While Kibana isn’t terribly resource intensive, we still recommend running Kibana separate from your Elasticsearch data or master nodes. To distribute Kibana traffic across the nodes in your Elasticsearch cluster, you can run Kibana and an Elasticsearch client node on the same machine. For more information, see Load balancing across multiple Elasticsearch nodes below.

    Use Elastic Stack security features

    You can use Elastic Stack security features to control what Elasticsearch data users can access through Kibana.

    When security features are enabled, Kibana users have to log in. They need to have a role granting Kibana privileges as well as access to the indices they will be working with in Kibana.

    If a user loads a Kibana dashboard that accesses data in an index that they are not authorized to view, they get an error that indicates the index does not exist.

    For more information on granting access to Kibana, see the Kibana security documentation.

    Require Content Security Policy

    Kibana uses a Content Security Policy (CSP) to help prevent the browser from allowing unsafe scripting, but older browsers will silently ignore this policy. If your organization does not need to support Internet Explorer 11 or much older versions of our other supported browsers, we recommend that you enable Kibana’s strict mode for the Content Security Policy, which will block access to Kibana for any browser that does not enforce even a rudimentary set of CSP protections.

    To do this, set csp.strict to true in your kibana.yml:
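      csp.strict: true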

    See the Kibana configuration settings documentation for more information about csp.strict.

    Load balancing across multiple Elasticsearch nodes

    To use a local client node to load balance Kibana requests:

    1. Install Elasticsearch on the same machine as Kibana.
    2. Configure the node as a coordinating only node. In elasticsearch.yml, set node.master, node.data, and node.ingest to false:

      # 3. You want this node to be neither master nor data node nor ingest node, but
      #    to act as a "search load balancer" (fetching data from nodes,
      #    aggregating results, etc.)
      #
      node.master: false
      node.data: false
      node.ingest: false
    3. Configure the client node to join your Elasticsearch cluster. In elasticsearch.yml, set the cluster.name to the name of your cluster.

      cluster.name: "my_cluster"
    4. Check your transport and HTTP host configs in elasticsearch.yml under network.host and transport.host. The transport.host needs to be on a network reachable by the cluster members, while network.host is the network for the HTTP connection used by Kibana (localhost:9200 by default). A sketch of these settings follows this list.
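    For example, a minimal elasticsearch.yml sketch for this coordinating node (the transport address is a placeholder; use one your cluster members can reach):

      # HTTP interface, used by the local Kibana instance
      network.host: localhost
      # Transport interface, reachable by the other cluster members
      transport.host: 192.168.1.10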

    To serve multiple Kibana installations behind a load balancer, you must change the configuration. See the Kibana configuration settings documentation for details on each setting.

    Settings unique across each Kibana instance:

    1. server.uuid

    Settings that must be the same across all Kibana instances, so that, for example, a session cookie encrypted by one instance can be decrypted by another (see the example after this list):

    1. xpack.security.encryptionKey // decrypting session cookies
    2. xpack.reporting.encryptionKey // decrypting reports
    3. xpack.encryptedSavedObjects.encryptionKey // decrypting saved objects
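    For example, every instance’s kibana.yml might carry the same three keys (the values below are placeholders; each key must be at least 32 characters long):

      xpack.security.encryptionKey: "a-placeholder-key-of-at-least-32-chars"
      xpack.reporting.encryptionKey: "a-placeholder-key-of-at-least-32-chars"
      xpack.encryptedSavedObjects.encryptionKey: "a-placeholder-key-of-at-least-32-chars"

    Running bin/kibana-encryption-keys generate once and copying the output to every instance is a convenient way to keep the keys in sync.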

    Separate configuration files can be used from the command line by using the -c flag:

      bin/kibana -c config/instance1.yml
      bin/kibana -c config/instance2.yml

    High availability across multiple Elasticsearch nodes

    Kibana can be configured to connect to multiple Elasticsearch nodes in the same cluster. If a node becomes unavailable, Kibana transparently connects to an available node and continues operating. Requests to available hosts are routed in a round-robin fashion.

    Currently the Console application is limited to connecting to the first node listed.

    In kibana.yml, list the cluster nodes under elasticsearch.hosts (the hostnames below are placeholders):
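      elasticsearch.hosts:
        - http://elasticsearch1.example.com:9200
        - http://elasticsearch2.example.com:9200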

    Related configurations include elasticsearch.sniffInterval, elasticsearch.sniffOnStart, and elasticsearch.sniffOnConnectionFault. These can be used to automatically update the list of hosts as a cluster is resized. These parameters are documented in the Kibana configuration settings reference.
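    A minimal sketch of these sniffing options in kibana.yml (the values are illustrative; elasticsearch.sniffInterval takes a value in milliseconds):

      # Fetch the list of nodes at startup and after a connection failure,
      # and refresh it every 60 seconds
      elasticsearch.sniffOnStart: true
      elasticsearch.sniffOnConnectionFault: true
      elasticsearch.sniffInterval: 60000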

    Memory

    Kibana has a default maximum memory limit of 1.4 GB, and in most cases, we recommend leaving this unconfigured. In some scenarios, such as large reporting jobs, it may make sense to tweak limits to meet more specific requirements.

    You can modify this limit by setting --max-old-space-size in the node.options config file, which is located in the kibana/config folder or in whichever directory is configured with the KBN_PATH_CONF environment variable (for example, /etc/kibana on Debian-based systems).

    The option accepts a limit in MB; for example, to raise the limit to 2 GB:
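      --max-old-space-size=2048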
