Load balancing WebSockets

Websocket, Load Balancing, Server Push

Websocket Problem Overview


I have a question about how to load balance web sockets.

I have a server which supports web sockets. Browsers connect to my site and each one opens a web socket to www.mydomain.com. That way, my social network app can push messages to the clients.

Traditionally, using just HTTP requests, I would scale up by adding a second server and a load balancer in front of the two web servers.

With web sockets, the connection has to be made directly to the web server, not the load balancer, because if a machine has a physical limit of, say, 64k open ports, and the clients were connecting to the load balancer instead, then I couldn't support more than 64k concurrent users.

So how do I:

  1. get the client to connect directly to the web server (rather than the load balancer) when the page loads? Do I simply load the JavaScript from a node, and have the load balancer (or whatever) randomly modify the script URL every time the page is initially requested?

  2. handle a ripple start? The browser will notice that the connection is closed as the web server shuts down. I can write JavaScript code to attempt to reopen the connection (a minimal reconnect sketch follows this list), but the node will be gone for a while. So I guess I would have to go back to the load balancer to query the address of the next node to use?

  3. I did wonder about the load balancers sending a redirect on the initial request, so that the browser initially requests www.mydomain.com and gets redirected to www34.mydomain.com. That works quite well, until the node goes down - and sites like Facebook don't do that. How do they do it?
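
Regarding point 2, the reconnect logic itself is straightforward. Here is a minimal sketch of the reconnect-with-backoff idea; the endpoint URL and the delay values are assumptions for illustration only:

function connect(url, attempt) {
    attempt = attempt || 0;
    var ws = new WebSocket(url);
    ws.onopen = function () {
        attempt = 0; // reset the backoff once we are connected again
    };
    ws.onclose = function () {
        // Exponential backoff, capped at 30 seconds, before retrying.
        var delay = Math.min(1000 * Math.pow(2, attempt), 30000);
        setTimeout(function () { connect(url, attempt + 1); }, delay);
    };
    return ws;
}

connect('wss://www.mydomain.com/socket');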

Websocket Solutions


Solution 1 - Websocket

Put an L3 load balancer in front of your WebSocket server farm that distributes IP packets based on a source-IP-and-port hash. Because it maintains no per-connection state, an L3 balancer will scale to wire speed on low-end hardware (say, 10GbE). And because the distribution is deterministic, the same source IP and port always map to the same backend node, so it works with TCP (and hence with WebSocket).
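
The key property is that the backend choice is a pure function of the source address, so every packet of a given TCP connection lands on the same node. A minimal Node.js sketch of that selection logic; the backend list and the choice of MD5 are illustrative assumptions, not anything the answer prescribes:

var crypto = require('crypto');

// Hypothetical backend farm.
var backends = ['10.0.0.11:8080', '10.0.0.12:8080', '10.0.0.13:8080'];

// Deterministic: the same source IP and port always yield the same backend.
function pickBackend(srcIp, srcPort) {
    var digest = crypto.createHash('md5').update(srcIp + ':' + srcPort).digest();
    return backends[digest.readUInt32BE(0) % backends.length];
}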

Also note that the 64k hard limit applies only to outgoing TCP/IP connections from a given (source) IP address. It does not apply to incoming TCP/IP connections, which are distinguished by the full source and destination address pair. We have tested Autobahn (a high-performance WebSocket server) with 200k active connections on a 2-core, 4GB RAM VM.

Also note that you can do L7 load balancing on the HTTP path announced during the initial WebSocket handshake. In that case the load balancer has to maintain state (which source IP-port pair is going to which backend node), but it will probably still scale to millions of connections on a decent setup.
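
As a concrete illustration of the L7 variant, here is a minimal Node.js sketch that routes WebSocket upgrade requests to different backends based on the handshake path; the paths and backend addresses are assumptions for illustration, not part of the original answer:

var http = require('http');
var net = require('net');

// Hypothetical path-to-backend routing table.
var routes = {
    '/chat': { host: '10.0.0.11', port: 8080 },
    '/feed': { host: '10.0.0.12', port: 8080 }
};

var proxy = http.createServer(function (req, res) {
    // Plain HTTP requests are not WebSocket handshakes; reject them.
    res.writeHead(426, { 'Content-Type': 'text/plain' });
    res.end('Upgrade required');
});

proxy.on('upgrade', function (req, clientSocket, head) {
    var target = routes[req.url] || routes['/chat'];
    var backend = net.connect(target.port, target.host, function () {
        // Replay the original handshake to the backend, then pipe both ways.
        backend.write('GET ' + req.url + ' HTTP/1.1\r\n');
        for (var i = 0; i < req.rawHeaders.length; i += 2) {
            backend.write(req.rawHeaders[i] + ': ' + req.rawHeaders[i + 1] + '\r\n');
        }
        backend.write('\r\n');
        backend.write(head);
        clientSocket.pipe(backend).pipe(clientSocket);
    });
    backend.on('error', function () { clientSocket.destroy(); });
});

proxy.listen(80);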

Disclaimer: I am the original author of Autobahn and I work for Tavendo.

Solution 2 - Websocket

Note that if your WebSocket server logic runs on Node.js with Socket.IO, you can tell Socket.IO to use a shared Redis key/value store for synchronization. That way you don't even have to care about the load balancer: events will propagate among the server instances.

// Attach Socket.IO to port 3000 and plug in the Redis adapter so that
// broadcasts are relayed through Redis to every server instance.
var io = require('socket.io')(3000);
var redis = require('socket.io-redis');
io.adapter(redis({ host: 'localhost', port: 6379 }));
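
With the adapter in place, a broadcast from any one instance reaches the clients connected to every other instance; the event name and payload here are just illustrative:

io.emit('news', { msg: 'this reaches sockets on every node' });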

See: Socket IO - Using multiple nodes

But at some point, I guess, Redis can become the bottleneck...

Solution 3 - Websocket

You can also achieve layer 7 load balancing with inspection and "routing functionality".

See "How to inspect and load-balance WebSockets traffic using Stingray Traffic Manager, and when necessary, how to manage WebSockets and HTTP traffic that is received on the same IP address and port." https://splash.riverbed.com/docs/DOC-1451

Attributions

All content on this page is sourced from the original question and answers on Stack Overflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type           | Original Author | Original Content on Stackoverflow
Question               | John Smith      | View Question on Stackoverflow
Solution 1 - Websocket | oberstet        | View Answer on Stackoverflow
Solution 2 - Websocket | Convolver       | View Answer on Stackoverflow
Solution 3 - Websocket | David           | View Answer on Stackoverflow