Configuring your driver for ArangoDB access

    The exact methods to configure a driver are specific to that driver.

    The endpoint (or URL) to communicate with is the most important parameter you need to configure in your driver.

    Finding the right endpoints depends on whether your client application is running in the same Kubernetes cluster as the ArangoDB deployment or not.

    Client application inside Kubernetes cluster

    If your client application is running in the same Kubernetes cluster as the ArangoDB deployment, you should configure your driver to use the following endpoint:
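    As a sketch, the in-cluster endpoint is typically built from the Service the operator creates for the deployment (named after the deployment, reachable via the standard Kubernetes DNS scheme) and ArangoDB's default port 8529. The deployment name and namespace below are placeholders:

```shell
# Placeholders: substitute your own deployment name and namespace.
DEPLOYMENT_NAME=my-arangodb
NAMESPACE=default
# The operator creates a Service named after the deployment; inside the
# cluster it resolves via the standard Kubernetes service DNS scheme.
ENDPOINT="https://${DEPLOYMENT_NAME}.${NAMESPACE}.svc:8529"
echo "${ENDPOINT}"
```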

    Only if your deployment has spec.tls.caSecretName set to None should you use http instead of https.

    Client application outside Kubernetes cluster

    If your client application is running outside the Kubernetes cluster in which the ArangoDB deployment is running, your driver endpoint depends on the external-access configuration of your ArangoDB deployment.

      If the external-access of the ArangoDB deployment is of type LoadBalancer, then use the IP address of that LoadBalancer like this:

      For example:
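      A minimal sketch, using a made-up placeholder IP (substitute the address of your LoadBalancer) and ArangoDB's default port 8529:

```shell
# Placeholder IP: substitute the external IP reported for the Service.
LOADBALANCER_IP=203.0.113.10
ENDPOINT="https://${LOADBALANCER_IP}:8529"
echo "${ENDPOINT}"
```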

      You can find the type of external-access by inspecting the external-access Service. To do so, run the following command:

      kubectl get service -n <namespace-of-deployment> <deployment-name>-ea

      The output looks like this:
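      A sketch of what such output can look like (the name, IPs, and age below are made up for illustration):

```
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)          AGE
my-arangodb-ea    LoadBalancer   10.0.12.34   203.0.113.10   8529:31234/TCP   5m
```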

      In this case the external-access is of type LoadBalancer, and the load-balancer IP address shown in the output determines the endpoint to use.

      As mentioned before, the ArangoDB deployment managed by the ArangoDB operator will use a secure (TLS) connection unless you set spec.tls.caSecretName to None in your ArangoDeployment.
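      For reference, disabling TLS looks roughly like this in the ArangoDeployment resource (the metadata name is a placeholder, and the exact apiVersion may differ between operator versions):

```yaml
apiVersion: "database.arangodb.com/v1"
kind: "ArangoDeployment"
metadata:
  name: my-arangodb        # placeholder name
spec:
  tls:
    caSecretName: None     # disables TLS; drivers must then use http
```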

      When using a secure connection, you can choose whether to verify the server certificates provided by the ArangoDB servers.

      If you want to verify these certificates, configure your driver with the CA certificate found in a Kubernetes Secret in the same namespace as the ArangoDeployment.

      Fetch the CA secret using the following command (or use a Kubernetes client library to fetch it):

      kubectl get secret -n <namespace> <secret-name> --template='{{index .data "ca.crt"}}' | base64 -D > ca.crt

      (On Linux, use base64 -d instead of -D.)

      This results in a file called ca.crt containing a PEM-encoded X.509 CA certificate.

      For most client requests made by a driver, it does not matter if there is any kind of load-balancer between your client application and the ArangoDB deployment.

      Note that even a simple Service of type ClusterIP already behaves as a load-balancer.

      The exception to this is cursor-related requests made to an ArangoDB deployment. The Coordinator that handles an initial query request (that results in a Cursor) will save some in-memory state, if the result of the query is too big to be transferred back in the response to the initial request.

      Follow-up requests have to be made to fetch the remaining data. These follow-up requests must be handled by the same Coordinator to which the initial request was made. As soon as there is a load-balancer between your client application and the ArangoDB cluster, it is uncertain which Coordinator will receive the follow-up request.

      ArangoDB will transparently forward any mismatched requests to the correct Coordinator, so the requests can be answered correctly without any additional configuration. However, this incurs a small latency penalty due to the extra request across the internal network.

      If your client application is running outside the Kubernetes cluster, the easiest way to work around this is to make sure that the query results are small enough to be returned in a single request. When that is not feasible, you can also resolve this when the internal DNS names of your Kubernetes cluster are exposed to your client application and the resulting IP addresses are routable from it. To expose the internal DNS names of your Kubernetes cluster, you can use CoreDNS.