Single Instance vs. Cluster

    See the deployment documentation for practical information.

    In a single server configuration, all data is local and deadlocks can easily be detected. In a cluster configuration, data is distributed to many servers and some conflicts cannot be detected easily. Therefore, some operations (like locking shards) have to be carried out sequentially and in a strictly predefined order, to avoid deadlocks by design.

    Document Keys

    In a cluster, the autoincrement key generator is not supported. You have to use traditional or user-defined keys.
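The restriction above can be illustrated with a short arangosh sketch; the collection name and document are placeholders, not part of the original text:

```js
// arangosh: create a sharded collection with the cluster-safe
// "traditional" key generator ("users" is a placeholder name)
db._create("users", {
  numberOfShards: 3,
  keyOptions: { type: "traditional" }
});

// user-defined keys are also allowed in a cluster:
db.users.save({ _key: "alice", name: "Alice" });
```

Requesting `keyOptions: { type: "autoincrement" }` on a collection with more than one shard would be rejected instead.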

    Indexes

    There are restrictions on the allowed unique constraints in a cluster. Any unique constraint which cannot be checked locally on a per-shard basis is not allowed in a cluster setup. More concretely, unique constraints in a cluster are only allowed in the following situations:

    • there is always a unique constraint on the primary key; if the collection is not sharded by _key, then _key must be automatically generated by the database and cannot be prescribed by the client
    • the collection has only one shard, in which case the same unique constraints are allowed as in the single instance case
    • if the collection is sharded by exactly one attribute other than _key, then there can be a unique constraint on that attribute

    These restrictions are imposed because otherwise checking for a unique constraint violation would involve contacting all shards, which would have a considerable performance impact.
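The third allowed case can be sketched in arangosh as follows; the collection and attribute names are placeholders:

```js
// arangosh: a collection sharded by exactly one attribute ("email")
// may carry a unique index on that same attribute, because each
// email value maps to exactly one shard and the check stays local
db._create("accounts", {
  numberOfShards: 3,
  shardKeys: ["email"]
});

db.accounts.ensureIndex({
  type: "persistent",
  fields: ["email"],
  unique: true
});
```

A unique index on any other attribute of this collection would be rejected in a cluster, since enforcing it would require consulting every shard.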

    It is not possible to rename collections or views in a cluster.

    AQL

    The WITH keyword in AQL must be used to declare which collections are used in the query. For most AQL queries, the required collections can be deduced from the query itself. However, with traversals this is not possible if edge collections are used directly. See the AQL WITH operation for details. The WITH statement is not necessary when using named graphs for the traversals.

    As deadlocks cannot easily be detected in a cluster environment, the WITH keyword is mandatory in this particular situation in a cluster, but not in a single server.
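A minimal sketch of such a traversal in arangosh, with WITH declaring the vertex collection up front ("persons" and "knows" are placeholder collection names):

```js
// arangosh: WITH declares the vertex collections the traversal reads,
// which is mandatory in a cluster when using edge collections directly
db._query(`
  WITH persons
  FOR v, e IN 1..2 OUTBOUND 'persons/alice' knows
    RETURN v.name
`);
```

When traversing a named graph instead (e.g. `FOR v IN 1..2 OUTBOUND 'persons/alice' GRAPH 'social'`), the involved collections are known from the graph definition and WITH can be omitted.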

    Performance of AQL queries can vary between single server and cluster. If a query can be distributed to many DB-Servers and executed in parallel, then cluster performance can be better, for example when doing a distributed COLLECT aggregation or another distributed operation.

    On the other hand, if you do a join or a traversal and the data is not local to one server, then the performance can be worse compared to a single server. This is especially true for traversals if the data is not sharded with care. The SmartGraphs feature helps with this for traversals.

    Single document operations can have a higher throughput in a cluster but will also have a higher latency, due to an additional network hop from the Coordinator to the DB-Server.

    Any operation that needs to find documents by anything other than the shard key has to fan out to all shards, so it is a lot slower than referring to the documents via the shard key. Optimized lookups by shard key can only be used for equality lookups, e.g. not for range lookups.
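The difference can be sketched in arangosh, reusing a hypothetical collection sharded by "email" (names and values are placeholders):

```js
// arangosh: equality filter on the shard key — the Coordinator can
// compute the responsible shard and contact only that one DB-Server
db._query(`
  FOR a IN accounts
    FILTER a.email == @email
    RETURN a
`, { email: "alice@example.com" });

// range filter on the same attribute — the query must fan out to
// every shard, because any shard could hold matching documents
db._query(`
  FOR a IN accounts
    FILTER a.email >= "a" && a.email < "b"
    RETURN a
`);
```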

    Transactions

    Using a single instance of ArangoDB, multi-document / multi-collection queries are guaranteed to be fully ACID. This is more than many other NoSQL database systems support. In cluster mode, single-document operations are also fully ACID. Multi-document / multi-collection queries in a cluster are not ACID, which is equally the case for competing database systems. See the transactions documentation for details.

    Batch operations for multiple documents in the same collection are onlyfully transactional in a single instance.

    In SmartGraphs there are restrictions on the values of the _key attributes. Essentially, the _key attribute values for vertices must be prefixed with the string value of the smart graph attribute and a colon. A similar restriction applies to the edges.
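The key convention described above can be illustrated with an arangosh sketch; the collection, attribute, and values are placeholders assumed for the example:

```js
// arangosh: in a SmartGraph whose smart graph attribute is "region",
// a vertex _key must be prefixed with that attribute's value and a colon
db.customers.save({
  _key: "emea:alice",   // prefix "emea:" matches the "region" value below
  region: "emea",
  name: "Alice"
});
```

A document whose _key prefix does not match its smart graph attribute value would be rejected, since the prefix is what lets the cluster co-locate related vertices and edges on the same shard.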

    Foxx

    Foxx apps run on the Coordinators of a cluster. Since Coordinators are stateless, one must not use regular file system access in Foxx apps in a cluster.

    Agency

    A cluster deployment needs a central, RAFT-based key/value store called “the Agency” to keep the current cluster configuration and manage failover. Being RAFT-based, this is a real-time system. If the servers running the Agency instances (typically three or five) receive too much load, the RAFT protocol stops working and the stability of the whole cluster is endangered. If you foresee this problem, run the Agency instances on separate nodes. None of this is necessary in a single server deployment.

    In a cluster, the arangodump utility cannot guarantee a consistent snapshot across multiple shards or even multiple collections. In a single server, arangodump produces a consistent snapshot.