Using multiple nodes

When deploying multiple Socket.IO servers, there are two things to take care of:

    • enabling sticky session, if HTTP long-polling is enabled (which is the default): see below
    • using the Redis adapter (or another compatible adapter): see below

    If you plan to distribute the load of connections among different processes or machines, you have to make sure that all requests associated with a particular session ID reach the process that originated them.

    This is because the HTTP long-polling transport sends multiple HTTP requests during the lifetime of the Socket.IO session.

    In fact, Socket.IO could technically work without sticky sessions, provided that the servers synchronized the session state between them.

    While obviously possible to implement, we think that this synchronization process between the Socket.IO servers would result in a big performance hit for your application.

    Remarks:

    • without enabling sticky-session, you will experience HTTP 400 errors due to “Session ID unknown”
    • the WebSocket transport does not have this limitation, since it relies on a single TCP connection for the whole session. This means that if you disable the HTTP long-polling transport (which is a perfectly valid choice in 2021), you won’t need sticky sessions.
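As a sketch, disabling HTTP long-polling on the client side boils down to restricting the transports option (the server URL is a placeholder):

```js
const io = require("socket.io-client");

// only use WebSocket, so no sticky sessions are needed
const socket = io("https://server-domain.com", {
  transports: ["websocket"]
});
```

Note that clients behind a proxy that blocks WebSocket connections will then not be able to connect at all, since the long-polling fallback is gone.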


    Enabling sticky-session

    To achieve sticky-session, there are two main solutions:

    • routing clients based on a cookie (recommended solution)
    • routing clients based on their originating address

    You will find below some examples with common load-balancing solutions:

    For other platforms, please refer to the relevant documentation.

    Important note: if you are in a CORS situation (the front domain is different from the server domain) and session affinity is achieved with a cookie, you need to allow credentials:

```js
const io = require("socket.io")(httpServer, {
  cors: {
    origin: "https://front-domain.com",
    methods: ["GET", "POST"],
    credentials: true
  }
});
```

    Client

```js
const io = require("socket.io-client");
const socket = io("https://server-domain.com", {
  withCredentials: true
});
```

    Without it, the cookie will not be sent by the browser and you will experience HTTP 400 “Session ID unknown” responses.

    NginX configuration

    Within the http { } section of your nginx.conf file, you can declare an upstream section with the list of Socket.IO processes you want to balance the load between:
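A minimal sketch of such a configuration (hostnames, ports and the server_name are placeholders to adapt to your deployment):

```nginx
http {
  server {
    listen 3000;
    server_name io.yourhost.com;

    location / {
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_http_version 1.1;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;

      proxy_pass http://nodes;
    }
  }

  upstream nodes {
    # enable sticky sessions based on the remote address
    hash $remote_addr consistent;

    server app01:3000;
    server app02:3000;
    server app03:3000;
  }
}
```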

    Notice the hash instruction that indicates the connections will be sticky.

    Make sure you also configure the worker_processes setting in the topmost level to indicate how many workers NginX should use. You might also want to look into tweaking the worker_connections setting within the events { } block.


    Apache HTTPD configuration

```apache
Header add Set-Cookie "SERVERID=sticky.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy "balancer://nodes_polling">
  BalancerMember "http://app01:3000" route=app01
  BalancerMember "http://app02:3000" route=app02
  BalancerMember "http://app03:3000" route=app03
  ProxySet stickysession=SERVERID
</Proxy>

<Proxy "balancer://nodes_ws">
  BalancerMember "ws://app01:3000" route=app01
  BalancerMember "ws://app02:3000" route=app02
  BalancerMember "ws://app03:3000" route=app03
  ProxySet stickysession=SERVERID
</Proxy>

RewriteEngine On
RewriteCond %{HTTP:Upgrade} =websocket [NC]
RewriteRule /(.*) balancer://nodes_ws/$1 [P,L]
RewriteCond %{HTTP:Upgrade} !=websocket [NC]
RewriteRule /(.*) balancer://nodes_polling/$1 [P,L]

ProxyTimeout 3
```


    HAProxy configuration

```
# Reference: http://blog.haproxy.com/2012/11/07/websockets-load-balancing-with-haproxy/

listen chat
  bind *:80
  default_backend nodes

backend nodes
  option httpchk HEAD /health
  http-check expect status 200
  cookie io prefix indirect nocache # using the `io` cookie set upon handshake
  server app01 app01:3000 check cookie app01
  server app02 app02:3000 check cookie app02
  server app03 app03:3000 check cookie app03
```


    Traefik

    Using container labels:
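As a sketch, sticky sessions can be enabled with labels on the service container (the image names and the server_id cookie name are assumptions to adapt):

```yaml
# docker-compose.yml (sketch)
services:
  traefik:
    image: traefik:2.4
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  server:
    image: my-image:latest
    labels:
      - "traefik.http.routers.my-service.rule=PathPrefix(`/`)"
      - "traefik.http.services.my-service.loadBalancer.sticky.cookie.name=server_id"
      - "traefik.http.services.my-service.loadBalancer.sticky.cookie.httpOnly=true"
```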

    With the File provider:

```yaml
## Dynamic configuration
http:
  services:
    my-service:
      rule: "PathPrefix(`/`)"
      loadBalancer:
        sticky:
          cookie:
            name: server_id
            httpOnly: true
```


    Using Node.js Cluster

    Just like NginX, Node.js comes with built-in clustering support through the cluster module. There are several solutions, depending on your use case:

    Example with @socket.io/sticky:

```js
const cluster = require("cluster");
const http = require("http");
const { Server } = require("socket.io");
const redisAdapter = require("socket.io-redis");
const numCPUs = require("os").cpus().length;
const { setupMaster, setupWorker } = require("@socket.io/sticky");

if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);

  const httpServer = http.createServer();
  setupMaster(httpServer, {
    loadBalancingMethod: "least-connection", // either "random", "round-robin" or "least-connection"
  });
  httpServer.listen(3000);

  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on("exit", (worker) => {
    console.log(`Worker ${worker.process.pid} died`);
    cluster.fork();
  });
} else {
  console.log(`Worker ${process.pid} started`);

  const httpServer = http.createServer();
  const io = new Server(httpServer);
  io.adapter(redisAdapter({ host: "localhost", port: 6379 }));
  setupWorker(io);

  io.on("connection", (socket) => {
    /* ... */
  });
}
```

    Passing events between nodes

    The Redis adapter

    Now that you have multiple Socket.IO nodes accepting connections, if you want to broadcast events to all clients (or to the clients in a certain room) you’ll need some way of passing messages between processes or computers.

    The interface in charge of routing messages is what we call the Adapter. You can implement your own on top of the socket.io-adapter package (by inheriting from it) or you can use the one we provide on top of Redis Pub/Sub: socket.io-redis.
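As a sketch, attaching the Redis adapter to a server (assuming a Redis instance listening on localhost:6379):

```js
// npm install socket.io-redis
const io = require("socket.io")(3000);
const redisAdapter = require("socket.io-redis");
io.adapter(redisAdapter({ host: "localhost", port: 6379 }));
```

Every Socket.IO server in the deployment must be configured with the same adapter, pointing at the same Redis instance.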

    Then the following call:

```js
io.emit("hi", "all sockets");
```

    will be broadcast to every client through the Pub/Sub mechanism of Redis:

    Broadcasting with Redis

    Sending messages from the outside world

    Using the Redis adapter has another benefit: you can now emit events from outside the context of your Socket.IO processes.
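For instance, a standalone process can broadcast to all connected clients with the socket.io-emitter package (the Redis connection details below are assumptions):

```js
// npm install socket.io-emitter
const io = require("socket.io-emitter")({ host: "127.0.0.1", port: 6379 });

// emit an event to all connected clients, every 5 seconds
setInterval(() => {
  io.emit("time", new Date());
}, 5000);
```

This works because the emitter publishes to the same Redis channels that the Socket.IO servers subscribe to through the adapter.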

    This emitter is available in several languages.
