Watch Memory Usage Benchmark

    NOTE: The watch features are under active development, and their memory usage may change as that development progresses. We do not expect it to significantly increase beyond the figures stated below.

    A primary goal of etcd is supporting a very large number of watchers doing a massive amount of watching. etcd aims to support O(10k) clients, O(100k) watch streams (O(10) streams per client) and O(10M) total watchings (O(100) watchings per stream). The memory consumed by each individual watching accounts for the largest portion of etcd's overall usage, and is therefore the focus of current and future optimizations.

    Three related components of etcd watch consume physical memory: each grpc.Conn, each watch stream, and each instance of the watching activity. A grpc.Conn maintains the actual TCP connection and other gRPC connection state. Each grpc.Conn consumes O(10kb) of memory, and might have multiple watch streams attached.

    Each watch stream is a separate gRPC stream multiplexed over a grpc.Conn; it consumes another O(10kb) of memory, and multiple watchings may share one watch stream.

    A watching is the struct that tracks the changes on the key-value store. Each watching should consume less than O(1kb) of memory.
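    To make these components concrete, here is a minimal clientv3 sketch; the go.etcd.io/etcd/client/v3 import path and the localhost:2379 endpoint are assumptions. It opens one client connection and registers 100 watchings, which the client can multiplex over a shared watch stream, so the server-side cost is roughly one grpc.Conn, one watch stream, and 100 watching structs.

        package main

        import (
            "context"
            "fmt"
            "time"

            clientv3 "go.etcd.io/etcd/client/v3"
        )

        func main() {
            // One client: one underlying grpc.Conn to the etcd server.
            cli, err := clientv3.New(clientv3.Config{
                Endpoints:   []string{"localhost:2379"}, // assumed local endpoint
                DialTimeout: 5 * time.Second,
            })
            if err != nil {
                panic(err)
            }
            defer cli.Close()

            // Each Watch call registers one watching on the server; the client
            // can multiplex these watchings over a shared watch stream.
            for i := 0; i < 100; i++ {
                ch := cli.Watch(context.Background(), fmt.Sprintf("key-%d", i))
                go func(wch clientv3.WatchChan) {
                    for resp := range wch {
                        for _, ev := range resp.Events {
                            fmt.Printf("%s %q -> %q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
                        }
                    }
                }(ch)
            }

            time.Sleep(time.Minute) // keep the watchers alive so memory can be observed
        }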

    The theoretical memory consumption of watch can be approximated with the formula: memory = c1 * number_of_conn + c2 * number_of_watch_stream + c3 * number_of_watching, where c1, c2 and c3 are the per-connection, per-stream and per-watching costs respectively.
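    As a rough illustration, here is a minimal Go sketch of that estimate. The workload numbers are hypothetical; the cost constants are the ones the benchmark below arrives at (c1 ≈ 17kb, c2 ≈ 18kb, c3 ≈ 350 bytes).

        package main

        import "fmt"

        // watchMemoryEstimate applies the approximation above: per-connection,
        // per-stream and per-watching costs (in bytes) times their counts.
        func watchMemoryEstimate(conns, streams, watchings int, c1, c2, c3 float64) float64 {
            return c1*float64(conns) + c2*float64(streams) + c3*float64(watchings)
        }

        func main() {
            // Constants taken from the benchmark results below.
            const c1, c2, c3 = 17e3, 18e3, 350
            // Hypothetical workload: 1,000 connections, 10 streams per connection,
            // 100 watchings per stream.
            bytes := watchMemoryEstimate(1000, 10*1000, 100*10*1000, c1, c2, c3)
            fmt.Printf("estimated watch memory: %.2f GB\n", bytes/1e9)
        }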

    Testing environment

    • etcd version: git head
    • 2x CPUs

    Overall memory usage

    The overall memory usage captures how much RSS etcd consumes with the client watchers. While the result may vary by as much as 10%, it is still meaningful, since the goal is to learn about the rough memory usage and the pattern of allocations.
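    For reference, one way to capture that RSS figure on Linux is to read VmRSS from /proc. A minimal sketch, assuming the pid of the running etcd process is passed as the first command-line argument:

        package main

        import (
            "bufio"
            "fmt"
            "os"
            "strings"
        )

        func main() {
            pid := os.Args[1] // pid of the etcd process (hypothetical argument)
            f, err := os.Open("/proc/" + pid + "/status")
            if err != nil {
                panic(err)
            }
            defer f.Close()

            sc := bufio.NewScanner(f)
            for sc.Scan() {
                // VmRSS is the resident set size the kernel reports for the process.
                if strings.HasPrefix(sc.Text(), "VmRSS:") {
                    fmt.Println(sc.Text()) // e.g. "VmRSS:  123456 kB"
                }
            }
        }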

    With the benchmark result, we can calculate roughly that c1 = 17kb, c2 = 18kb and c3 = 350bytes. So each additional client connection consumes 17kb of memory, each additional stream consumes 18kb of memory, and each additional watching adds only about 350 bytes. A single etcd server can maintain millions of watchings with a few GB of memory in the normal case.
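    As a rough check against the scale targets above (O(10k) client connections, O(100k) watch streams, O(10M) watchings), plugging these constants into the formula gives:

        memory ≈ 17kb * 10,000 + 18kb * 100,000 + 350B * 10,000,000
               ≈ 0.17 GB + 1.8 GB + 3.5 GB
               ≈ 5.5 GB

    which is consistent with the "few GB" figure above.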