
Failed to run Mongoose #79

Open
AlexeyFonfrygin opened this issue Oct 29, 2020 · 19 comments

@AlexeyFonfrygin

Hello! I have a Pravega cluster installed on a Kubernetes cluster.
When running this command: docker run --network host emcmongoose/mongoose-storage-driver-pravega --storage-namespace=scope --storage-net-node-addrs=10.233.36.160:9090
I took the IP address and port from the Pravega controller:
[screenshot]

The following errors appeared:
[screenshot]

But as long as the fail count does not exceed the default threshold, Mongoose continues to execute:
[screenshot]

What did I do wrong?

@dadlex
Member

dadlex commented Oct 29, 2020

Hi there! Please specify the IP and port separately. You can see the list of common Mongoose parameters here: https://github.com/emc-mongoose/mongoose-base/tree/master/doc/usage/input/configuration, and the Pravega-specific ones here: https://github.com/emc-mongoose/mongoose-storage-driver-pravega. In this case the fix is:
--storage-net-node-addrs=10.233.36.160 --storage-net-node-port=9090
The port option can be omitted, as 9090 is the default.
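Putting that together with the command from your original post, the full invocation would be:

docker run --network host emcmongoose/mongoose-storage-driver-pravega \
    --storage-namespace=scope \
    --storage-net-node-addrs=10.233.36.160 \
    --storage-net-node-port=9090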

@dadlex
Member

dadlex commented Oct 29, 2020

The only surprising thing is that you actually got some successful operations 0_o. This isn't expected. If you could share the logs, that would be great. The path to the logs is /root/.mongoose/4.2.18/logs/linear_20201029.1516622.956. The log files of interest are 3rdparty.log, errors.log and messages.log. This exercise won't be directly useful for you, but it can help make Mongoose better.
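Since mongoose ran in a container, one way to pull the logs out (a sketch, assuming the container still exists; look up <container-id> with docker ps -a):

docker cp <container-id>:/root/.mongoose/4.2.18/logs/linear_20201029.1516622.956 ./mongoose-logs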

@AlexeyFonfrygin
Author

I have specified the IP and port separately, but the problem remains:
[screenshot]
The logs:
3rdparty.log
errors.log
messages.log

The Docker container that produced some successful operations has already been removed, and I can't reproduce that result now, although it happened more than once before.

@dadlex
Member

dadlex commented Oct 30, 2020

What's your Pravega version?

@AlexeyFonfrygin
Author

0.7.0

@dadlex
Member

dadlex commented Oct 30, 2020

Well, that's interesting then. Let's go deeper. Run mongoose for 2-3 minutes and then please do:

kubectl logs pravega-pravega-controller > controller.log
kubectl logs pravega-pravega-segmentstore-headless > ss.log

But before you do that, a silly question: I assume you deployed mongoose in some other namespace, right? Because I don't see it listed in the default one.

@dadlex dadlex self-assigned this Oct 30, 2020
@AlexeyFonfrygin
Author

No, I didn't deploy mongoose. Apparently I missed it 😢

@dadlex
Member

dadlex commented Oct 30, 2020

Oh, that changes the case then. There are two ways you can launch mongoose: inside or outside Kubernetes.

If you create mongoose as a pod (e.g. via helm: https://github.com/emc-mongoose/mongoose-helm-charts), you can use the internal controller IP. But if you want to apply the workload from outside the cluster, you cannot use the internal IP, because it's internal :) You need some external IP; you can learn more here: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0.

One more thing to note: if it's a test minikube setup, you might run into issues, because as far as I know from other people, minikube isn't exactly good at providing external IPs. You can still try, though. But I'd start with simply running mongoose inside k8s.
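If you do go the external route, you can check which services got an external IP with plain kubectl (the namespace is a placeholder):

kubectl get svc -n <your-namespace>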

@AlexeyFonfrygin
Author

Thanks a lot for the explanation! But I don't quite understand how I should use this IP address. Is it passed as an additional argument when running the Docker container, or is it something else?

@dadlex
Member

dadlex commented Nov 2, 2020

If we are talking about running mongoose inside the k8s cluster, we can pass the IP of the Pravega controller when starting mongoose via helm:

helm install mongoose emc-mongoose/mongoose \
             --set image.name=emcmongoose/mongoose-storage-driver-pravega \
             --set "args=\"--storage-net-node-addrs=<x.y.z.j>\"\, ..." \
             ...

@AlexeyFonfrygin
Author

I meant something a little different. I deployed the mongoose service using this command:

helm install mongoose emc-mongoose/mongoose-service --set "image.name=emcmongoose/mongoose-storage-driver-pravega" --set "args=\"--storage-net-node-addrs=10.233.36.160\"\,\"--storage-namespace=scope4\""

and got the external IP from it:
[screenshot]

Then I added this IP to darzee and tried to run the default scenario, but got the same error as before:
[screenshot]

How do I run mongoose scenarios in this case?

@dadlex
Member

dadlex commented Nov 5, 2020

I'm not sure why you're going for the more difficult deployments when you could start with a simpler one. You definitely don't need darzee for a simple run.

@dadlex
Member

dadlex commented Nov 5, 2020

         --set "args=\"--storage-net-node-addrs=<COntrollerIP>\"\, \
                \"--storage-net-node-port=9090\"\,  \
                \"--load-batch-size=1000\"\, \
                \"--storage-driver-scaling-segments=10\"\, \
                \"--storage-namespace=scope1\"\, \
                \"--item-output-path=stream6\"\,  \
                \"--storage-driver-threads=10\"\,  \
                \"--storage-driver-limit-queue-input=10000\"\, \
                \"--storage-driver-limit-concurrency=0\"\, \
                \"--item-data-size=100KB\""  \
         --set image.name=emcmongoose/mongoose-storage-driver-pravega  \
         --set image.tag=latest 

Please try running this inside your k8s cluster and share the results. Don't forget to substitute <ControllerIP> with the actual internal IP of the Pravega controller.

@AlexeyFonfrygin
Author

I ran this, and used the following command to set the external IP because it was stuck in Pending:

kubectl patch svc mongoose-entry-node-svc -n default -p '{"spec": {"type": "LoadBalancer", "externalIPs":["10.228.3.185"]}}'

[screenshot]

How do I start testing?

@veronikaKochugova veronikaKochugova self-assigned this Nov 6, 2020
@veronikaKochugova
Member

veronikaKochugova commented Nov 6, 2020

@fonfrygin It's not best practice to patch the IP yourself. "Pending" means that the k8s cluster can't allocate a free IP to the service. Are you sure that the IP you gave it will work?
You can check this, for example, by following the link http://10.228.3.185:9999/config. If the IP is correct and mongoose is running (the node itself, not a scenario), you will see the default config.
If the IP address stays "Pending", you can use another type of service: https://github.com/emc-mongoose/mongoose-helm-charts#custom-service-type
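The same check from the command line (assuming curl is available on your machine):

curl http://10.228.3.185:9999/config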

It's cool that you use darzee, but @dadlex is right: you can start with an easier way of deploying.
If we are talking about running mongoose on k8s (helm), there are 2 common options:

  1. the mongoose chart
    This runs the test scenario at the same moment you deploy mongoose, with the command that Alex suggested earlier in this thread.

  2. the mongoose-service chart
    This deploys the mongoose nodes in standby mode, and a scenario has to be started separately (via a curl request or darzee). More about the REST API: https://github.com/emc-mongoose/mongoose-base/tree/master/doc/usage/api/remote#4221-standalone-mode
    This is the option you used. To start a test, take a look at the REST API documentation linked above and send a request to the external IP of mongoose-entry-node-svc (see the sketch below).
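A rough sketch of such a request, assuming the POST /run endpoint described in the linked docs; defaults.yaml and scenario.js here are placeholder names for your config and scenario files:

curl -v -X POST \
     -F defaults=@defaults.yaml \
     -F scenario=@scenario.js \
     http://10.228.3.185:9999/run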

Please note that @dadlex sent the command for the mongoose chart, not for the mongoose-service chart that you used.
You can use either method, but you need to understand the difference.

@veronikaKochugova
Member

Did you use minikube or a real k8s cluster?

@veronikaKochugova
Member

veronikaKochugova commented Nov 6, 2020

You can also deploy mongoose outside the cluster, just in Docker (as at the beginning of the thread), but then you will need to expose the pravega-controller's IP outside the cluster. This can be done using a port-forward or a proxy, for example:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/
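For instance, a minimal port-forward sketch (the service name pravega-pravega-controller is an assumption based on the pod names above; adjust it to your deployment):

kubectl port-forward svc/pravega-pravega-controller 9090:9090

After that, mongoose in Docker (run with --network host) could be pointed at --storage-net-node-addrs=127.0.0.1.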

@AlexeyFonfrygin
Author

Thank you for the help! My problem was that Pravega was not working. I reinstalled it, and now mongoose starts up, but I have a few questions. Why isn't the stream created in the first step?
[screenshot]
Logs: 3rdparty.log

And do I understand correctly that this current concurrency value means the number of operations that are currently running?

My command to run Mongoose:
docker run --network host emcmongoose/mongoose-storage-driver-pravega --load-op-limit-count=100000 --storage-driver-threads=1 --storage-driver-limit-concurrency=0 --item-data-size=10KB --storage-namespace=scope1 --storage-net-node-addrs=10.233.58.27

I also ran into a problem where, after several launches of Mongoose, my Pravega controller crashes (0/1 Running) with the following logs: pravega_logs.txt
Why might this happen?

@dadlex
Member

dadlex commented Nov 23, 2020

  1. On the stream creation issue: the stream was created, otherwise you wouldn't get any throughput. The thing is that each request has a timeout. So the create request failed with a timeout, but by the time the load starts, Pravega has already handled the request and the stream is there. You can tweak the timeout options; they are described in the config part of the README.
  2. Yes, it's the number of operations currently in flight.
  3. What is your cluster config? Did you use the default settings for the Pravega components? If so, the default values for bookkeeper are usually too low to handle an unthrottled run. The first thing I'd do is increase the resources for Pravega, or limit mongoose's rate via --load-op-limit-rate (see the example below).
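For example, your last command with a rate cap added (1000 op/s is just an illustrative value):

docker run --network host emcmongoose/mongoose-storage-driver-pravega \
    --load-op-limit-count=100000 \
    --load-op-limit-rate=1000 \
    --storage-driver-threads=1 \
    --storage-driver-limit-concurrency=0 \
    --item-data-size=10KB \
    --storage-namespace=scope1 \
    --storage-net-node-addrs=10.233.58.27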
