Question

I am writing a socket.io-based server and I'm trying to avoid the pyramid of doom while keeping memory usage low. I wrote this client - http://jsfiddle.net/QUDXU/1/ - which I run with node client-cluster 1000, i.e. 1000 connections that are making continuous requests.

For the server side I tried three different solutions. The results, in terms of RAM used by the server after letting everything run for an hour, are:

  1. Simple callbacks - http://jsfiddle.net/DcWmJ/ - 112MB
  2. Q module - http://jsfiddle.net/hhsja/1/ - 850MB and increasing
  3. Async module - http://jsfiddle.net/SgemT/ - 1.2GB and increasing

The server and clients are on different machines (SoftLayer cloud instances), running Node 0.10.12 and Socket.io 0.9.16.

Why is this happening? How can I keep memory usage low while still using some kind of library that keeps the code readable?


Solution 2

It seems the problem was in the client script, not the server one. I was running 1000 processes, each of them emitting a message to the server every second. I think the server got very busy resolving all of those requests and thus used all of that memory. I rewrote the client side to spawn a number of processes proportional to the number of processors, each of them opening multiple connections like this:

// socket.io-client ~0.9: open a distinct connection for each call
var client = io.connect(selectedEnvironment, { 'force new connection': true, 'reconnect': false });

Notice the 'force new connection' flag, which allows multiple clients to be connected from the same instance of socket.io-client. The part that actually solved my problem was how the requests were paced: each client now makes its next request one second after receiving the acknowledgement of the previous one, rather than every second on a fixed timer. With 1000 clients connected, the server uses ~100MB RSS. I also used async in the server script, which seems very elegant and easier to understand than Q. The bad part is that after running the server for about 2-3 days, memory has risen to 250MB RSS, and I don't know why.
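
A minimal sketch of the reworked client, assuming the socket.io-client 0.9 API (where emit takes an acknowledgement callback as its last argument); the URL, event name, payload, and the connections-per-worker count are placeholders, not the original code:

var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // One worker process per CPU; each worker holds many connections.
  for (var i = 0; i < numCPUs; i++) cluster.fork();
} else {
  var io = require('socket.io-client');
  var CONNECTIONS_PER_WORKER = 250; // hypothetical share of the 1000 clients

  for (var c = 0; c < CONNECTIONS_PER_WORKER; c++) {
    (function () {
      var client = io.connect('http://server:3000', {
        'force new connection': true,
        'reconnect': false
      });

      client.on('connect', sendNext);

      function sendNext() {
        // Schedule the next request one second AFTER the server
        // acknowledges this one, instead of on a fixed 1-second timer.
        client.emit('request', { ts: Date.now() }, function (ack) {
          setTimeout(sendNext, 1000);
        });
      }
    })();
  }
}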

OTHER TIPS

Option 1. You can use the cluster module and gracefully kill your workers from time to time (make sure you disconnect() them first). You can check process.memoryUsage().rss > 130000000 in each worker (the master only sees its own memory) and recycle workers when they exceed 130MB, for example :)
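
A minimal sketch of that recycling pattern; the 130MB threshold, the 10-second check interval, and the 5-second grace period are arbitrary values:

var cluster = require('cluster');

if (cluster.isMaster) {
  cluster.fork();
  cluster.on('exit', function () {
    cluster.fork(); // replace a recycled worker
  });
} else {
  // ... start the socket.io server in the worker here ...

  setInterval(function () {
    if (process.memoryUsage().rss > 130 * 1024 * 1024) {
      cluster.worker.disconnect();    // stop accepting new connections
      setTimeout(process.exit, 5000); // hard exit if sockets linger
    }
  }, 10000);
}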

Option 2. NodeJS has the habit of using memory and rarely doing rigorous cleanups; GC calls become more aggressive as V8 approaches its memory limit. So you can lower the maximum heap a node process may take up by running node --max-old-space-size=<MB> (note that --max-stack-size controls the stack, not the heap). I do this when running node on embedded devices (often with less than 64MB of RAM available).
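
For example, to cap the heap at roughly 64MB (the value and the server.js entry script are illustrative):

node --max-old-space-size=64 server.js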

Option 3. If you really want to keep memory low, use weak references where possible (anywhere except in long-running calls): https://github.com/TooTallNate/node-weak . This way, objects get garbage collected sooner. Extensive testing to make sure everything still works is needed, though. Good luck if you use this one :)
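
A minimal sketch using node-weak's documented API (weak(), weak.get(), weak.isDead()); the cache wrapper itself is a hypothetical example, not part of the library:

var weak = require('weak');

var cache = {};

function remember(key, value) {
  // Hold the value weakly: the GC may reclaim it at any time.
  cache[key] = weak(value, function () {
    delete cache[key]; // clean up once the value is collected
  });
}

function recall(key) {
  var ref = cache[key];
  // weak.get() returns undefined once the target has been collected.
  return ref && !weak.isDead(ref) ? weak.get(ref) : undefined;
}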

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow