This page discusses the core concepts and architecture behind TiKV, including:
- The APIs that applications can use to interact with TiKV
- The basic system architecture underlying TiKV
- The role of core system components, including the Placement Driver, Store, Region, and Node
- TiKV’s transaction model
- The role of the Raft consensus algorithm in TiKV
- The origins of TiKV
TiKV provides two APIs that you can use to interact with it: a raw, lower-level key-value API for operating directly on individual key-value pairs, and a transactional, higher-level API that provides ACID semantics across multiple keys.
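The practical difference between the two API styles can be sketched with a toy in-memory model. This is an illustration only, not the real TiKV client API; the `RawClient` and `TxnClient` names are hypothetical stand-ins.

```python
# Toy sketch of the two API styles (hypothetical names, not the real client).

class RawClient:
    """Raw API: each operation applies to a single key, atomically per key."""
    def __init__(self, store):
        self.store = store

    def put(self, key, value):
        self.store[key] = value          # visible immediately

    def get(self, key):
        return self.store.get(key)

class TxnClient:
    """Transactional API: writes are buffered and applied atomically at commit."""
    def __init__(self, store):
        self.store = store
        self.buffer = {}

    def put(self, key, value):
        self.buffer[key] = value         # invisible to others until commit

    def get(self, key):
        return self.buffer.get(key, self.store.get(key))

    def commit(self):
        self.store.update(self.buffer)   # all-or-nothing in this toy model
        self.buffer = {}

store = {}
raw = RawClient(store)
raw.put(b"k1", b"v1")
assert store[b"k1"] == b"v1"             # raw write is visible at once

txn = TxnClient(store)
txn.put(b"k2", b"v2")
assert b"k2" not in store                # buffered, not yet committed
txn.commit()
assert store[b"k2"] == b"v2"             # visible only after commit
```

Use the raw API when you do not need multi-key transactions or MVCC; use the transactional API when you need ACID semantics across keys.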
System architecture
The overall architecture of TiKV is illustrated in Figure 1 below:
Figure 1. The architecture of TiKV
TiKV instance
Figure 2. TiKV instance architecture
Placement driver
The Placement Driver (PD) is the cluster manager of TiKV. It periodically checks replication constraints and balances load and data automatically across Nodes and Regions, in a process called auto-sharding.
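The balancing idea can be shown with a deliberately simplified sketch: move Region replicas from the most loaded Store to the least loaded until the counts are nearly even. The real PD weighs many more constraints (replica placement, labels, hot spots); this toy version only balances counts.

```python
# Toy sketch of PD-style balancing (simplified; real PD considers many constraints).

def balance(stores):
    """stores: dict of store_id -> set of region ids; evens out region counts."""
    while True:
        most = max(stores, key=lambda s: len(stores[s]))
        least = min(stores, key=lambda s: len(stores[s]))
        if len(stores[most]) - len(stores[least]) <= 1:
            return stores                      # balanced within one region
        region = next(iter(stores[most]))      # pick any region to move
        stores[most].remove(region)
        stores[least].add(region)

stores = {1: {10, 11, 12, 13}, 2: {20}, 3: set()}
balance(stores)
# 5 regions over 3 stores settle at counts of 2, 2, and 1.
assert all(1 <= len(r) <= 2 for r in stores.values())
```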
Store
Each Store contains a RocksDB database, which persists data to the local disk.
Region
A Region is the basic unit of key-value data movement and corresponds to a contiguous range of keys. Each Region is replicated to multiple Nodes, and these replicas together form a Raft group.
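Because each Region covers a contiguous key range, routing a key means finding the Region whose `[start, end)` range contains it. A minimal sketch of that lookup, with made-up split keys:

```python
# Toy sketch: route a key to the Region covering its range (illustrative split keys).
import bisect

# Regions sorted by start key; each region ends where the next begins.
# Region 0: ["", "g"), Region 1: ["g", "p"), Region 2: ["p", +inf)
region_starts = [b"", b"g", b"p"]

def region_for(key):
    # Rightmost region whose start key is <= key.
    return bisect.bisect_right(region_starts, key) - 1

assert region_for(b"apple") == 0
assert region_for(b"grape") == 1
assert region_for(b"zebra") == 2
```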
When a Node starts, the metadata for the Node, its Store, and its Regions is registered with the Placement Driver. The status of each Region and Store is then reported to the PD regularly.
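This reporting loop can be sketched as periodic heartbeats: Stores and Regions push their latest status to the PD, which keeps only the most recent view. The class and field names below are hypothetical simplifications.

```python
# Toy sketch of heartbeat reporting to the PD (hypothetical, simplified).

class PlacementDriver:
    def __init__(self):
        self.stores = {}    # store_id -> last reported store status
        self.regions = {}   # region_id -> store id of the region's leader

    def store_heartbeat(self, store_id, status):
        self.stores[store_id] = status      # overwrite with the latest report

    def region_heartbeat(self, region_id, leader_store):
        self.regions[region_id] = leader_store

pd = PlacementDriver()
pd.store_heartbeat(1, {"capacity_gb": 512, "used_gb": 100})
pd.region_heartbeat(42, leader_store=1)
assert pd.regions[42] == 1
assert pd.stores[1]["used_gb"] == 100
```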
Transaction model
TiKV’s transaction model is similar to that of Google’s Percolator, a system built for processing updates to large data sets. Percolator uses an incremental update model in place of a batch-based model.
TiKV’s transaction model provides:
- Snapshot isolation with lock, with semantics analogous to `SELECT ... FOR UPDATE` in SQL
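The core of snapshot isolation can be sketched with a toy MVCC store: every committed write carries a commit timestamp, and a transaction reads the newest version at or below its own start timestamp. This is a hypothetical simplification of the Percolator model (it omits locks and two-phase commit).

```python
# Toy MVCC sketch of snapshot isolation (simplified; omits Percolator's locks).

class MVCCStore:
    def __init__(self):
        self.versions = {}  # key -> list of (commit_ts, value), ascending by ts

    def write(self, key, value, commit_ts):
        self.versions.setdefault(key, []).append((commit_ts, value))

    def read(self, key, start_ts):
        # Return the newest version committed at or before the reader's snapshot.
        for ts, value in reversed(self.versions.get(key, [])):
            if ts <= start_ts:
                return value
        return None         # no version visible to this snapshot

db = MVCCStore()
db.write(b"k", b"old", commit_ts=5)
db.write(b"k", b"new", commit_ts=10)
assert db.read(b"k", start_ts=7) == b"old"    # snapshot predates second commit
assert db.read(b"k", start_ts=12) == b"new"
assert db.read(b"k", start_ts=3) is None      # nothing committed yet at ts=3
```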
Raft
Data is distributed across TiKV instances via the Raft consensus algorithm, which is based on the Raft paper (“In Search of an Understandable Consensus Algorithm”) by Diego Ongaro and John Ousterhout.
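One piece of Raft that matters for TiKV's replication is the commit rule: a leader considers a log entry committed once a majority of the group has replicated it. A minimal sketch of that rule (illustrative only; real Raft also tracks terms, elections, and log matching):

```python
# Toy sketch of Raft's majority-commit rule (illustrative, not full Raft).

def committed_index(match_index, cluster_size):
    """match_index: highest replicated log index on each node (leader included).
    Returns the highest index present on a majority of nodes."""
    majority = cluster_size // 2 + 1
    # After sorting descending, the (majority-1)-th entry is the highest
    # index that at least `majority` nodes have reached.
    return sorted(match_index, reverse=True)[majority - 1]

# 5-node group: leader at index 9; followers at 9, 7, 4, 3.
# Three nodes (a majority) have reached index 7, so 7 is committed.
assert committed_index([9, 9, 7, 4, 3], cluster_size=5) == 7
```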
TiKV was originally created by PingCAP to complement TiDB, a distributed Hybrid Transactional and Analytical Processing (HTAP) database compatible with the MySQL protocol.