    By default, haproxy tries to spread the start of health checks across the
    smallest health check interval of all the servers in a farm. The principle
    is to avoid hammering services running on the same server. But when using
    large check intervals (10 seconds or more), the last servers in the farm
    take some time before starting to be tested, which can be a problem. This
    parameter is used to enforce an upper bound on the delay between the first
    and the last check, even if the servers' check intervals are larger. When
    servers run with shorter intervals, their intervals will be respected
    though.

    maxconn

    Sets the maximum per-process number of concurrent connections to <number>.
    It is equivalent to the command-line argument "-n". Proxies will stop
    accepting connections when this limit is reached. The "ulimit-n" parameter
    is automatically adjusted according to this value. See also "ulimit-n".
    Note: the "select" poller cannot reliably use more than 1024 file
    descriptors on some platforms. If your platform only supports select and
    reports "select FAILED" on startup, you need to reduce maxconn until it
    works (slightly below 500 in general). If this value is not set, it will
    automatically be calculated based on the current file descriptor limit
    reported by the "ulimit -n" command, possibly reduced to a lower value if
    a memory limit is enforced, based on the buffer size, memory allocated to
    compression, SSL cache size, and use or not of SSL and the associated
    maxsslconn (which can also be automatic).
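    A minimal global-section sketch of the directive described above (the
    figure is illustrative, not a recommendation):

    ```haproxy
    global
        # Cap the process at 50000 concurrent connections; the "ulimit-n"
        # file descriptor limit is raised automatically to match.
        maxconn 50000
    ```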

    maxconnrate

    Sets the maximum per-process number of connections per second to <number>.
    Proxies will stop accepting connections when this limit is reached. It can
    be used to limit the global capacity regardless of each frontend's
    capacity. It is important to note that this can only be used as a service
    protection measure, as there will not necessarily be a fair share between
    frontends when the limit is reached, so it's a good idea to also limit
    each frontend to some value close to its expected share. Also, lowering
    tune.maxaccept can improve fairness.
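    Since the global rate limit is not fairly shared, the paragraph above
    suggests also capping each frontend near its expected share. A hedged
    sketch (numbers illustrative):

    ```haproxy
    global
        maxconnrate 2000          # process-wide ceiling

    frontend www
        bind :80
        rate-limit sessions 1500  # this frontend's expected share
    ```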

    maxcomprate

    Sets the maximum per-process input compression rate to <number> kilobytes
    per second. For each session, if the maximum is reached, the compression
    level will be decreased during the session. If the maximum is reached at
    the beginning of a session, the session will not compress at all. If the
    maximum is not reached, the compression level will be increased up to
    tune.comp.maxlevel. A value of zero means there is no limit, this is the
    default value.

    maxcompcpuusage

    Sets the maximum CPU usage HAProxy can reach before stopping the
    compression for new requests or decreasing the compression level of
    current requests. It works like 'maxcomprate' but measures CPU usage
    instead of incoming data bandwidth. The value is expressed in percent of
    the CPU used by haproxy. In case of multiple processes (nbproc > 1), each
    process manages its individual usage. A value of 100 disables the limit.
    The default value is 100. Setting a lower value will prevent the
    compression work from slowing the whole process down and from introducing
    high latencies.
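    The compression limits above can be combined; a sketch with illustrative
    values:

    ```haproxy
    global
        maxcomprate 10240     # at most ~10 MB/s of input being compressed
        maxcompcpuusage 75    # back off once haproxy exceeds 75% CPU
        tune.comp.maxlevel 5  # upper bound when the rate limit is not hit
    ```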

    maxpipes

    Sets the maximum per-process number of pipes to <number>. Currently, pipes
    are only used by kernel-based tcp splicing. Since a pipe contains two file
    descriptors, the "ulimit-n" value will be increased accordingly. The
    default value is maxconn/4, which seems to be more than enough for most
    heavy usages. The splice code dynamically allocates and releases pipes,
    and can fall back to standard copy, so setting this value too low may only
    impact performance.

    maxsessrate

    Sets the maximum per-process number of sessions per second to <number>.
    Proxies will stop accepting connections when this limit is reached. It can
    be used to limit the global capacity regardless of each frontend's
    capacity. It is important to note that this can only be used as a service
    protection measure, as there will not necessarily be a fair share between
    frontends when the limit is reached, so it's a good idea to also limit
    each frontend to some value close to its expected share. Also, lowering
    tune.maxaccept can improve fairness.

    maxsslconn

    Sets the maximum per-process number of concurrent SSL connections to
    <number>. By default there is no SSL-specific limit, which means that the
    global maxconn setting will apply to all connections. Setting this limit
    avoids having openssl use too much memory and crash when malloc returns
    NULL (since it unfortunately does not reliably check for such conditions).
    Note that the limit applies both to incoming and outgoing connections, so
    one connection which is deciphered then ciphered accounts for 2 SSL
    connections. If this value is not set, but a memory limit is enforced,
    this value will be automatically computed based on the memory limit,
    maxconn, the buffer size, memory allocated to compression, SSL cache size,
    and use of SSL in either frontends, backends or both. If neither maxconn
    nor maxsslconn are specified when there is a memory limit, haproxy will
    automatically adjust these values so that 100% of the connections can be
    made over SSL with no risk, and will consider the sides where it is
    enabled (frontend, backend, both).
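    A sketch following the accounting rule above, where one
    deciphered-then-re-ciphered connection counts as two SSL connections
    (values illustrative):

    ```haproxy
    global
        maxconn 20000
        # Allow every frontend connection to be SSL on both sides:
        # 20000 incoming + 20000 outgoing SSL connections.
        maxsslconn 40000
    ```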

    maxsslrate

    Sets the maximum per-process number of SSL sessions per second to
    <number>. SSL listeners will stop accepting connections when this limit is
    reached. It can be used to limit the global SSL CPU usage regardless of
    each frontend's capacity. It is important to note that this can only be
    used as a service protection measure, as there will not necessarily be a
    fair share between frontends when the limit is reached, so it's a good
    idea to also limit each frontend to some value close to its expected
    share. It is also important to note that the sessions are accounted before
    they enter the SSL stack and not after, which also protects the stack
    against bad handshakes. Also, lowering tune.maxaccept can improve
    fairness.

    maxzlibmem

    Sets the maximum amount of RAM in megabytes per process usable by zlib.
    When the maximum amount is reached, future sessions will not compress as
    long as RAM is unavailable. When set to 0, there is no limit.
    The default value is 0. The value is available in bytes on the UNIX socket
    with "show info" on the line "MaxZlibMemUsage"; the memory used by zlib is
    reported as "ZlibMemUsage" in bytes.

    noepoll

    Disables the use of the "epoll" event polling system on Linux. It is
    equivalent to the command-line argument "-de". The next polling system
    used will generally be "poll". See also "nopoll".

    nokqueue

    Disables the use of the "kqueue" event polling system on BSD. It is
    equivalent to the command-line argument "-dk". The next polling system
    used will generally be "poll". See also "nopoll".

    noevports

    Disables the use of the event ports event polling system on SunOS systems
    derived from Solaris 10 and later. It is equivalent to the command-line
    argument "-dv". The next polling system used will generally be "poll". See
    also "nopoll".

    nopoll

    Disables the use of the "poll" event polling system. It is equivalent to
    the command-line argument "-dp". The next polling system used will be
    "select". It should never be needed to disable "poll" since it's available
    on all platforms supported by HAProxy. See also "nokqueue", "noepoll" and
    "noevports".

    nosplice

    Disables the use of kernel tcp splicing between sockets on Linux. It is
    equivalent to the command-line argument "-dS". Data will then be copied
    using conventional and more portable recv/send calls. Kernel tcp splicing
    is limited to some very recent instances of kernel 2.6. Most versions
    between 2.6.25 and 2.6.28 are buggy and will forward corrupted data, so
    they must not be used. This option makes it easier to globally disable
    kernel splicing in case of doubt. See also "option splice-auto", "option
    splice-request" and "option splice-response".

    nogetaddrinfo

    Disables the use of getaddrinfo(3) for name resolving. It is equivalent to
    the command-line argument "-dG". The deprecated gethostbyname(3) will be
    used instead.

    noreuseport

    Disables the use of SO_REUSEPORT - see socket(7). It is equivalent to the
    command-line argument "-dR".

    profiling.tasks { auto | on | off }

    Enables ('on') or disables ('off') per-task CPU profiling. When set to
    'auto', profiling automatically turns on for a thread when it starts to
    suffer from an average latency of 1000 microseconds or higher as reported
    in the "avg_loop_us" activity field, and automatically turns off when the
    latency returns below 990 microseconds (this value is an average over the
    last 1024 loops so it does not vary quickly and tends to significantly
    smooth short spikes). It may also spontaneously trigger from time to time
    on overloaded systems, containers, or virtual machines, or when the system
    swaps (which must absolutely never happen on a load balancer).

    CPU profiling per task can be very convenient to report where the time is
    spent and which requests have what effect on which other request. Enabling
    it will typically affect the overall performance by less than 1%, thus it
    is recommended to leave it at the default 'auto' value so that it only
    operates when a problem is identified. This feature requires a system
    supporting the clock_gettime(2) syscall with clock identifiers
    CLOCK_MONOTONIC and CLOCK_THREAD_CPUTIME_ID, otherwise the reported time
    will be zero. This option may be changed at run time using "set profiling"
    on the CLI.
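    The runtime toggle mentioned above can be exercised over the stats socket;
    the socket path below is an assumption for illustration:

    ```haproxy
    global
        profiling.tasks auto    # default: engage only under measured latency
        stats socket /var/run/haproxy.sock mode 600 level admin
    ```

    At run time, something like
    `echo "set profiling tasks on" | socat stdio /var/run/haproxy.sock`
    forces profiling on without a reload.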

    spread-checks <0..50, in percent>

    Sometimes it is desirable to avoid sending agent and health checks to
    servers at exact intervals, for instance when many logical servers are
    located on the same physical server. With the help of this parameter, it
    becomes possible to add some randomness in the check interval between 0
    and +/- 50%. A value between 2 and 5 seems to show good results. The
    default value remains at 0.
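    For example, with 10-second check intervals, a small jitter avoids
    synchronized probes hitting co-hosted services (addresses illustrative):

    ```haproxy
    global
        spread-checks 3    # randomize each interval by up to +/- 3%

    backend app
        default-server inter 10s
        server app1 10.0.0.1:8080 check
        server app2 10.0.0.2:8080 check
    ```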

    ssl-engine <name> [algo <comma-separated list of algorithms>]

    Sets the OpenSSL engine to <name>. The list of valid values for <name> may
    be obtained using the command "openssl engine". This statement may be used
    multiple times, it will simply enable multiple crypto engines. Referencing
    an unsupported engine will prevent haproxy from starting. Note that many
    engines will lead to lower HTTPS performance than pure software with
    recent processors. The optional command "algo" sets the default algorithms
    an ENGINE will supply using the OpenSSL function
    ENGINE_set_default_string(). A value of "ALL" uses the engine for all
    cryptographic operations. If no list of algorithms is specified then the
    value "ALL" is used. A comma-separated list of different algorithms may be
    specified, including: RSA, DSA, DH, EC, RAND, CIPHERS, DIGESTS, PKEY,
    PKEY_CRYPTO, PKEY_ASN1. This is the same format the openssl configuration
    file uses:
    https://www.openssl.org/docs/man1.0.2/apps/config.html
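    A hedged sketch; "rdrand" is a real OpenSSL engine name, but whether it is
    available depends on the local OpenSSL build (check with "openssl
    engine"):

    ```haproxy
    global
        # Use the engine only for random number generation; everything
        # else stays in software.
        ssl-engine rdrand algo RAND
    ```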

    tune.buffers.limit

    Sets a hard limit on the number of buffers which may be allocated per
    process. The default value is zero which means unlimited. The minimum
    non-zero value will always be greater than "tune.buffers.reserve" and
    should ideally always be about twice as large. Forcing this value can be
    particularly useful to limit the amount of memory a process may take,
    while retaining a sane behavior. When this limit is reached, sessions
    which need a buffer wait for another one to be released by another
    session. Since buffers are dynamically allocated and released, the waiting
    time is very short and not perceptible provided that limits remain
    reasonable. In fact sometimes reducing the limit may even increase
    performance by increasing the CPU cache's efficiency. Tests have shown
    good results on average HTTP traffic with a limit to 1/10 of the expected
    global maxconn setting, which also significantly reduces memory usage. The
    memory savings come from the fact that a number of connections will not
    allocate 2*tune.bufsize. It is best not to touch this value unless advised
    to do so by an haproxy core developer.

    tune.buffers.reserve

    Sets the number of buffers which are pre-allocated and reserved for use
    only during memory shortage conditions resulting in failed memory
    allocations. The minimum value is 2 and is also the default. There is no
    reason a user would want to change this value, it's mostly aimed at
    haproxy core developers.

    tune.bufsize

    Sets the buffer size to this size (in bytes). Lower values allow more
    sessions to coexist in the same amount of RAM, and higher values allow
    some applications with very large cookies to work. The default value is
    16384 and can be changed at build time. It is strongly recommended not to
    change this from the default value, as very low values will break some
    services such as statistics, and values larger than the default size will
    increase memory usage, possibly causing the system to run out of memory.
    At least the global maxconn parameter should be decreased by the same
    factor as this one is increased. In addition, use of HTTP/2 mandates that
    this value must be 16384 or more. If an HTTP request is larger than
    (tune.bufsize - tune.maxrewrite), haproxy will return an HTTP 400 (Bad
    Request) error. Similarly if an HTTP response is larger than this size,
    haproxy will return an HTTP 502 (Bad Gateway). Note that the value set
    using this parameter will automatically be rounded up to the next multiple
    of 8 on 32-bit machines and 16 on 64-bit machines.
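    To illustrate the trade-off above: doubling the buffer size roughly
    doubles per-connection buffer memory, so maxconn should shrink by the same
    factor (figures illustrative):

    ```haproxy
    global
        tune.bufsize 32768   # twice the 16384 default
        maxconn 25000        # halved from a previous 50000 to keep the
                             # overall memory budget unchanged
    ```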

    tune.chksize

    Sets the check buffer size to this size (in bytes). Higher values may help
    find string or regex patterns in very large pages, though doing so may
    imply more memory and CPU usage. The default value is 16384 and can be
    changed at build time. It is not recommended to change this value, but to
    use better checks whenever possible.

    tune.comp.maxlevel

    Sets the maximum compression level. The compression level affects CPU
    usage during compression. Each session using compression initializes the
    compression algorithm with this value. The default value is 1.

    tune.fail-alloc

    If compiled with DEBUG_FAIL_ALLOC, gives the percentage of chance that an
    allocation attempt fails. Must be between 0 (no failure) and 100 (no
    success). This is useful to debug and make sure memory failures are
    handled gracefully.

    tune.h2.header-table-size

    Sets the HTTP/2 dynamic header table size. It defaults to 4096 bytes and
    cannot be larger than 65536 bytes. A larger value may help certain clients
    send more compact requests, depending on their capabilities. This amount
    of memory is consumed for each HTTP/2 connection. It is recommended not to
    change it.

    tune.h2.initial-window-size

    Sets the HTTP/2 initial window size, which is the number of bytes the
    client can upload before waiting for an acknowledgment from haproxy. This
    setting only affects payload contents (i.e. the body of POST requests),
    not headers. The default value is 65535, which roughly allows up to 5 Mbps
    of upload bandwidth per client over a network showing a 100 ms ping time,
    or 500 Mbps over a 1-ms local network. It can make sense to increase this
    value to allow faster uploads, or to reduce it to increase fairness when
    dealing with many clients. It doesn't affect resource usage.
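    The window behaves like an upload bandwidth-delay product; a sketch for
    clients on roughly 100 ms links (figures illustrative):

    ```haproxy
    global
        # target_rate x RTT: 25 Mbps x 0.1 s = 25e6/8 x 0.1 ~= 312 kB,
        # rounded up to a comfortable power-of-two multiple.
        tune.h2.initial-window-size 327680
    ```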

    tune.h2.max-concurrent-streams

    Sets the HTTP/2 maximum number of concurrent streams per connection (i.e.
    the number of outstanding requests on a single connection). The default
    value is 100. A larger one may slightly improve page load time for complex
    sites when visited over high latency networks, but increases the amount of
    resources a single client may allocate. A value of zero disables the limit
    so a single client may create as many streams as allocatable by haproxy.
    It is highly recommended not to change this value.

    tune.h2.max-frame-size

    Sets the HTTP/2 maximum frame size that haproxy announces it is willing to
    receive to its peers. The default value is the largest between 16384 and
    the buffer size (tune.bufsize). In any case, haproxy will not announce
    support for frame sizes larger than buffers. The main purpose of this
    setting is to allow limiting the maximum frame size when using large
    buffers. Too large frame sizes might have a performance impact or cause
    some peers to misbehave. It is highly recommended not to change this
    value.

    tune.http.cookielen

    Sets the maximum length of captured cookies. This is the maximum value
    that the "capture cookie xxx len yyy" will be allowed to take, and any
    upper value will automatically be truncated to this one. It is important
    not to set too high a value because all cookie captures still allocate
    this size whatever their configured value (they share a same pool). This
    value is per request per response, so the memory allocated is twice this
    value per connection. When not specified, the limit is set to 63
    characters. It is recommended not to change this value.

    tune.http.logurilen

    Sets the maximum length of request URIs in logs. This prevents truncating
    long request URIs with valuable query strings in log lines. This is not
    related to syslog limits. If you increase this limit, you may also need to
    increase the 'log ... len yyy' parameter. Your syslog daemon may also need
    specific configuration directives.
    The default value is 1024.
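    A sketch combining a larger URI capture with a matching syslog line length
    (address and sizes illustrative):

    ```haproxy
    global
        tune.http.logurilen 4096
        # The log line itself must be allowed to grow as well.
        log 127.0.0.1:514 len 8192 local0
    ```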

    tune.http.maxhdr

    Sets the maximum number of headers in a request. When a request comes with
    a number of headers greater than this value (including the first line), it
    is rejected with a "400 Bad Request" status code. Similarly, too large
    responses are blocked with "502 Bad Gateway". The default value is 101,
    which is enough for all usages, considering that the widely deployed
    Apache server uses the same limit. It can be useful to push this limit
    further to temporarily allow a buggy application to work by the time it
    gets fixed. The accepted range is 1..32767. Keep in mind that each new
    header consumes 32 bits of memory for each session, so don't push this
    limit too high.

    tune.idletimer

    Sets the duration after which haproxy will consider that an empty buffer
    is probably associated with an idle stream. This is used to optimally
    adjust some packet sizes while forwarding large and small data
    alternatively. The decision to use splice() or to send large buffers in
    SSL is modulated by this parameter. The value is in milliseconds between 0
    and 65535. A value of zero means that haproxy will not try to detect idle
    streams. The default is 1000, which seems to correctly detect end user
    pauses (e.g. read a page before clicking). There should be no reason for
    changing this value. Please check tune.ssl.maxrecord below.

    tune.listener.multi-queue { on | off }

    Enables ('on') or disables ('off') the listener's multi-queue accept which
    spreads the incoming traffic to all threads a "bind" line is allowed to
    run on instead of taking them for itself. This provides a smoother traffic
    distribution and scales much better, especially in environments where
    threads may be unevenly loaded due to external activity (network
    interrupts colliding with one thread for example). This option is enabled
    by default, but it may be forcefully disabled for troubleshooting or for
    situations where it is estimated that the operating system already
    provides a good enough distribution and connections are extremely
    short-lived.

    tune.lua.forced-yield

    This directive forces the Lua engine to execute a yield each <number> of
    instructions executed. This permits interrupting a long script and allows
    the HAProxy scheduler to process other tasks like accepting connections or
    forwarding traffic. The default value is 10000 instructions. If HAProxy
    often executes some Lua code but more responsiveness is required, this
    value can be lowered. If the Lua code is quite long and its result is
    absolutely required to process the data, the <number> can be increased.

    tune.lua.maxmem

    Sets the maximum amount of RAM in megabytes per process usable by Lua. By
    default it is zero which means unlimited. It is important to set a limit
    to ensure that a bug in a script will not result in the system running out
    of memory.

    tune.lua.session-timeout

    This is the execution timeout for the Lua sessions. This is useful for
    preventing infinite loops or spending too much time in Lua. This timeout
    counts only the pure Lua runtime. If the Lua does a sleep, the sleep is
    not taken into account. The default timeout is 4s.

    tune.lua.task-timeout

    The purpose is the same as "tune.lua.session-timeout", but this timeout is
    dedicated to the tasks. By default, this timeout isn't set because a task
    may remain alive for the whole lifetime of HAProxy, for example a task
    used to check servers.

    tune.lua.service-timeout

    This is the execution timeout for the Lua services. This is useful for
    preventing infinite loops or spending too much time in Lua. This timeout
    counts only the pure Lua runtime. If the Lua does a sleep, the sleep is
    not taken into account. The default timeout is 4s.

    tune.maxpollevents

    Sets the maximum amount of events that can be processed at once in a call
    to the polling system. The default value is adapted to the operating
    system. It has been noticed that reducing it below 200 tends to slightly
    decrease latency at the expense of network bandwidth, and increasing it
    above 200 tends to trade latency for slightly increased bandwidth.

    tune.maxrewrite

    Sets the reserved buffer space to this size in bytes. The reserved space
    is used for header rewriting or appending. The first reads on sockets will
    never fill more than bufsize-maxrewrite. Historically it has defaulted to
    half of bufsize, though that does not make much sense since there are
    rarely large numbers of headers to add. Setting it too high prevents
    processing of large requests or responses. Setting it too low prevents
    addition of new headers to already large requests or to POST requests. It
    is generally wise to set it to about 1024. It is automatically readjusted
    to half of bufsize if it is larger than that. This means you don't have to
    worry about it when changing bufsize.
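    Following the guidance above, a fixed reserve around 1 kB (illustrative):

    ```haproxy
    global
        # First reads then fill at most bufsize - maxrewrite bytes
        # (15360 with the default 16384-byte buffers).
        tune.maxrewrite 1024
    ```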

    tune.pattern.cache-size

    Sets the size of the pattern lookup cache to <number> entries. This is an
    LRU cache which reminds previous lookups and their results. It is used by
    ACLs and maps on slow pattern lookups, namely the ones using the "sub",
    "reg", "dir", "dom", "end", "bin" match methods as well as the
    case-insensitive strings. It applies to pattern expressions which means
    that it will be able to memorize the result of a lookup among all the
    patterns specified on a configuration line (including all those loaded
    from files). It automatically invalidates entries which are updated using
    HTTP actions or on the CLI. The default cache size is set to 10000
    entries, which limits its footprint to about 5 MB per process/thread on
    32-bit systems and 8 MB per process/thread on 64-bit systems, as caches
    are thread/process local. There is a very low risk of collision in this
    cache, which is in the order of the size of the cache divided by 2^64.
    Typically, at 10000 requests per second with the default cache size of
    10000 entries, there's 1% chance that a brute force attack could cause a
    single collision after 60 years, or 0.1% after 6 years. This is considered
    much lower than the risk of a memory corruption caused by aging
    components. If this is not acceptable, the cache can be disabled by
    setting this parameter to 0.

    tune.pipesize

    Sets the kernel pipe buffer size to this size (in bytes). By default,
    pipes are the default size for the system. But sometimes when using TCP
    splicing, it can improve performance to increase pipe sizes, especially if
    it is suspected that pipes are not filled and that many calls to splice()
    are performed. This has an impact on the kernel's memory footprint, so
    this must not be changed if impacts are not understood.

    tune.pool-low-fd-ratio

    This setting sets the max number of file descriptors (in percentage) used
    by haproxy globally against the maximum number of file descriptors haproxy
    can use before we stop putting connections into the idle pool for reuse.
    The default is 20.

    tune.pool-high-fd-ratio

    This setting sets the max number of file descriptors (in percentage) used
    by haproxy globally against the maximum number of file descriptors haproxy
    can use before we start killing idle connections when we can't reuse a
    connection and we have to create a new one. The default is 25 (one quarter
    of the file descriptors will mean that roughly half of the maximum front
    connections can keep an idle connection behind; anything beyond this
    probably doesn't make much sense in the general case when targeting
    connection reuse).

    tune.rcvbuf.client

    Forces the kernel socket receive buffer size on the client or the server
    side to the specified value in bytes. This value applies to all TCP/HTTP
    frontends and backends. It should normally never be set, and the default
    size (0) lets the kernel auto-tune this value depending on the amount of
    available memory. However it can sometimes help to set it to very low
    values (e.g. 4096) in order to save kernel memory by preventing it from
    buffering too large amounts of received data. Lower values will
    significantly increase CPU usage though.

    tune.recv_enough

    HAProxy uses some hints to detect that a short read indicates the end of
    the socket buffers. One of them is that a read returns more than
    <recv_enough> bytes, which defaults to 10136 (7 segments of 1448 each).
    This default value may be changed by this setting to better deal with
    workloads involving lots of short messages such as telnet or SSH sessions.

    tune.runqueue-depth

    Sets the maximum amount of tasks that can be processed at once when
    running tasks. The default value is 200. Increasing it may incur latency
    when dealing with I/Os; making it too small can incur extra overhead.

    tune.sndbuf.client

    Forces the kernel socket send buffer size on the client or the server side
    to the specified value in bytes. This value applies to all TCP/HTTP
    frontends and backends. It should normally never be set, and the default
    size (0) lets the kernel auto-tune this value depending on the amount of
    available memory. However it can sometimes help to set it to very low
    values (e.g. 4096) in order to save kernel memory by preventing it from
    buffering too large amounts of received data. Lower values will
    significantly increase CPU usage though. Another use case is to prevent
    write timeouts with extremely slow clients due to the kernel waiting for a
    large part of the buffer to be read before notifying haproxy again.

    tune.ssl.cachesize

    Sets the size of the global SSL session cache, in a number of blocks. A
    block is large enough to contain an encoded session without peer
    certificate. An encoded session with peer certificate is stored in
    multiple blocks depending on the size of the peer certificate. A block
    uses approximately 200 bytes of memory. The default value may be forced at
    build time, otherwise it defaults to 20000. When the cache is full, the
    most idle entries are purged and reassigned. Higher values reduce the
    occurrence of such a purge, hence the number of CPU-intensive SSL
    handshakes, by ensuring that all users keep their session as long as
    possible. All entries are pre-allocated upon startup and are shared
    between all processes if "nbproc" is greater than 1. Setting this value to
    0 disables the SSL session cache.

    tune.ssl.force-private-cache

    This option disables SSL session cache sharing between all processes. It
    should normally not be used since it will force many renegotiations due to
    clients hitting a random process. But it may be required on some operating
    systems where none of the SSL cache synchronization methods may be used.
    In this case, adding a first layer of hash-based load balancing before the
    SSL layer might limit the impact of the lack of session sharing.

    tune.ssl.lifetime

    Sets how long a cached SSL session may remain valid. This time is
    expressed in seconds and defaults to 300 (5 min). It is important to
    understand that it does not guarantee that sessions will last that long,
    because if the cache is full, the longest idle sessions will be purged
    despite their configured lifetime. The real usefulness of this setting is
    to prevent sessions from being used for too long.

    tune.ssl.maxrecord

    Sets the maximum amount of bytes passed to SSL_write() at a time. The
    default value 0 means there is no limit. Over SSL/TLS, the client can
    decipher the data only once it has received a full record. With large
    records, it means that clients might have to download up to 16kB of data
    before starting to process them. Limiting the value can improve page load
    times on browsers located over high latency or low bandwidth networks. It
    is suggested to find optimal values which fit into 1 or 2 TCP segments
    (generally 1448 bytes over Ethernet with TCP timestamps enabled, or 1460
    when timestamps are disabled), keeping in mind that SSL/TLS adds some
    overhead. Typical values of 1419 and 2859 gave good results during tests.
    Use "strace -e trace=write" to find the best value. HAProxy will
    automatically switch to this setting after an idle stream has been
    detected (see tune.idletimer above).
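    A sketch applying the one-segment suggestion above (1448-byte MSS minus
    TLS overhead):

    ```haproxy
    global
        # One TLS record per TCP segment during interactive phases.
        tune.ssl.maxrecord 1419
    ```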

    tune.ssl.default-dh-param

    Sets the maximum size of the Diffie-Hellman parameters used for generating
    the ephemeral/temporary Diffie-Hellman key in case of DHE key exchange.
    The final size will try to match the size of the server's RSA (or DSA) key
    (e.g., a 2048-bit temporary DH key for a 2048-bit RSA key), but will not
    exceed this maximum value. The default value is 1024. Only 1024 or higher
    values are allowed. Higher values will increase the CPU load, and values
    greater than 1024 bits are not supported by Java 7 and earlier clients.
    This value is not used if static Diffie-Hellman parameters are supplied
    either directly in the certificate file or by using the ssl-dh-param-file
    parameter.

    tune.ssl.ssl-ctx-cache-size

    Sets the size of the cache used to store generated certificates to
    <number> entries. This is an LRU cache. Because generating an SSL
    certificate dynamically is expensive, they are cached. The default cache
    size is set to 1000 entries.

    tune.ssl.capture-cipherlist-size

    Sets the maximum size of the buffer used for capturing the client-hello
    cipher list. If the value is 0 (default value) the capture is disabled,
    otherwise a buffer is allocated for each SSL/TLS connection.

    tune.vars.global-max-size

    tune.vars.proc-max-size

    tune.vars.reqres-max-size

    tune.vars.sess-max-size

    tune.vars.txn-max-size

    These five tunes help to manage the maximum amount of memory used by the
    variables system. "global" limits the overall amount of memory available
    for all scopes. "proc" limits the memory for the process scope, "sess"
    limits the memory for the session scope, "txn" for the transaction scope,
    and "reqres" limits the memory for each request or response processing.
    Memory accounting is hierarchical, meaning more coarse grained limits
    include the finer grained ones: "proc" includes "sess", "sess" includes
    "txn", and "txn" includes "reqres".

    For example, when "tune.vars.sess-max-size" is limited to 100,
    "tune.vars.txn-max-size" and "tune.vars.reqres-max-size" cannot exceed
    100 either. If we create a variable "txn.var" that contains 100 bytes,
    all available space is consumed.
    Notice that exceeding the limits at runtime will not result in an error
    message, but values might be cut off or corrupted. So make sure to
    accurately plan for the amount of space needed to store all your
    variables.
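    A sketch of nested budgets consistent with the hierarchy described above
    (sizes illustrative, in bytes):

    ```haproxy
    global
        tune.vars.global-max-size 10240  # all scopes together
        tune.vars.proc-max-size    4096  # includes sess
        tune.vars.sess-max-size    2048  # includes txn
        tune.vars.txn-max-size     1024  # includes reqres
        tune.vars.reqres-max-size   512
    ```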

    tune.zlib.memlevel

    Sets the memLevel parameter in zlib initialization for each session. It
    defines how much memory should be allocated for the internal compression
    state. A value of 1 uses minimum memory but is slow and reduces
    compression ratio; a value of 9 uses maximum memory for optimal speed. Can
    be a value between 1 and 9. The default value is 8.

    tune.zlib.windowsize

    Sets the window size (the size of the history buffer) as a parameter of
    the zlib initialization for each session. Larger values of this parameter
    result in better compression at the expense of memory usage. Can be a
    value between 8 and 15. The default value is 15.