Anti-Entropy

A state where data is in order and organized. Scylla has anti-entropy processes in place to ensure that all replicas contain the most recent data and that data is consistent between replicas. See Scylla Anti-Entropy.


Bootstrap

When a new node is added to a cluster, the bootstrap process ensures that the data in the cluster is automatically redistributed to the new node. A new node in this case is an empty node without system tables or data. See bootstrap.

CAP Theorem

The CAP Theorem states that a distributed system can simultaneously guarantee at most two of C (Consistency), A (Availability), and P (Partition Tolerance); strengthening any two of these factors comes at the expense of the third. Scylla chooses availability and partition tolerance over consistency. See Fault Tolerance.


Cluster

One or multiple Scylla nodes, acting in concert, which own a single contiguous token range. State is communicated between nodes in the cluster via the Gossip protocol. See Ring Architecture.

Clustering Key

A single or multi-column clustering key determines a row’s uniqueness and sort order on disk within a partition. See Ring Architecture.
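
As a toy illustration (plain Python, not Scylla internals; the column names are invented), sorting rows by the tuple of clustering columns mimics how a multi-column clustering key orders rows within a partition:

```python
# Illustrative sketch: rows within one partition, with a two-column
# clustering key (category, ts). Not Scylla's implementation.
rows = [
    {"category": "b", "ts": 2, "value": "x"},
    {"category": "a", "ts": 5, "value": "y"},
    {"category": "a", "ts": 1, "value": "z"},
]

# Sorting by the tuple of clustering columns reproduces the on-disk
# sort order within the partition.
ordered = sorted(rows, key=lambda r: (r["category"], r["ts"]))

print([r["value"] for r in ordered])  # ['z', 'y', 'x']
```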

Column Family

See table.


Compaction

The process of reading several SSTables, comparing the data and timestamps, and then writing one SSTable containing the merged, most recent information. See Compaction Strategies.
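
A minimal sketch of the merge step in Python (illustrative only; real SSTables are immutable on-disk files, not dicts): for each key, the cell with the newest timestamp wins.

```python
# Hypothetical, simplified compaction: key -> (value, write timestamp).
sstable_old = {"k1": ("v1", 10), "k2": ("v2", 12)}
sstable_new = {"k1": ("v1b", 20), "k3": ("v3", 15)}

def compact(*sstables):
    merged = {}
    for sst in sstables:
        for key, (value, ts) in sst.items():
            # Keep only the most recent write for each key.
            if key not in merged or ts > merged[key][1]:
                merged[key] = (value, ts)
    return merged

compacted = compact(sstable_old, sstable_new)
print(compacted["k1"])  # ('v1b', 20): the newer write wins
```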

Compaction Strategy

Determines which of the SSTables will be compacted, and when. See Compaction Strategies.

Consistency Level (CL)

A dynamic value that dictates the number of replicas (in a cluster) that must acknowledge a read or write operation. This value is set by the client on a per-operation basis. See Fault Tolerance.

Consistency Level: All

A write must be written to all replicas in the cluster; a read waits for a response from all replicas. Provides the lowest availability with the highest consistency. See Fault Tolerance.

Consistency Level: Any

A write must be written to at least one replica in the cluster. A read waits for a response from at least one replica. Provides the highest availability with the lowest consistency. See Fault Tolerance.

Note that when hinted handoff is enabled, success is returned if at least one hint was successfully stored on the coordinator, even if all the replicas are down. See Hinted Handoff.

Consistency Level: Each_quorum

Supported only for writes where a quorum of replicas in ALL datacenters must be written to. See Fault Tolerance.

Consistency Level: Local_one

At least one replica in the local data center responds. See Fault Tolerance.

Consistency Level: Local_quorum

A quorum of replicas in the local datacenter responds. See Fault Tolerance.

Consistency Level: One

Only one replica needs to respond in the cluster. See Fault Tolerance.

Consistency Level: Quorum

Quorum is a global consistency level setting across the entire cluster, including all datacenters. When using QUORUM as the consistency level, the coordinator must wait for a majority of nodes to acknowledge before the request is honored. If RF=3, then at least 2 replicas must respond. The quorum can be calculated as (n/2 + 1), rounded down, where n is the replication factor. If you have two datacenters, all nodes in both datacenters count towards the quorum majority. For example, consider a cluster with two DCs: three nodes in one DC and two nodes in the other. If the smaller DC fails, QUORUM requests still succeed, because the three surviving nodes satisfy the required majority of 3 out of 5. See Fault Tolerance.
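
The quorum arithmetic above can be checked with a few lines of Python (the `quorum` helper is illustrative, not a Scylla API):

```python
# Quorum formula from the text: n // 2 + 1, where n is the replication
# factor (or, for the multi-DC example, the total node count).
def quorum(n: int) -> int:
    return n // 2 + 1

assert quorum(3) == 2   # RF=3: at least 2 replicas must respond
assert quorum(5) == 3   # two DCs with 3 + 2 nodes: majority is 3

# If the smaller two-node DC fails, the three surviving nodes still
# satisfy the quorum of 3, so QUORUM requests continue to succeed.
surviving = 3
print(surviving >= quorum(5))  # True
```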

Date-tiered compaction strategy (DTCS)

DTCS is designed for time series data, but should not be used. Use Time-Window Compaction Strategy. See Compaction Strategies.


Entropy

A state where data is not consistent. This results when replicas are not synced and hold divergent data. Scylla has anti-entropy measures in place to counter this. See Scylla Anti-Entropy.

Eventual Consistency

In Scylla, when considering the CAP Theorem, availability and partition tolerance are considered a higher priority than consistency. Writes are acknowledged before they reach every replica, and all replicas eventually converge to hold the same data.


Hint

A short record of a write request that is held by the coordinator until the unresponsive node becomes responsive again, at which point the write request data in the hint is written to the replica node. See Hinted Handoff.

Hinted Handoff

Reduces data inconsistency which can occur when a node is down or there is network congestion. In Scylla, when data is written and there is an unresponsive replica, the coordinator writes itself a hint. When the node recovers, the coordinator sends the node the pending hints to ensure that it has the data it should have received. See Hinted Handoff.
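
A highly simplified Python sketch of the idea (the `Replica` class and helpers are hypothetical, not Scylla code): the coordinator buffers a hint for a downed replica and replays it after recovery.

```python
# Hypothetical sketch of hinted handoff, not Scylla's implementation.
class Replica:
    def __init__(self, up=True):
        self.up = up
        self.data = {}

    def write(self, key, value):
        if not self.up:
            raise ConnectionError("replica down")
        self.data[key] = value

def coordinator_write(replicas, hints, key, value):
    for replica in replicas:
        try:
            replica.write(key, value)
        except ConnectionError:
            # Store a hint instead of failing the write outright.
            hints.setdefault(replica, []).append((key, value))

def replay_hints(hints):
    for replica, pending in list(hints.items()):
        if replica.up:
            for key, value in pending:
                replica.write(key, value)
            del hints[replica]

r1, r2 = Replica(), Replica(up=False)
hints = {}
coordinator_write([r1, r2], hints, "k", "v")
r2.up = True          # the node recovers...
replay_hints(hints)   # ...and receives its pending hints
print(r2.data)        # {'k': 'v'}
```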


Idempotent

Denoting an operation that produces the same result whether it is applied once or several times. Scylla Counters are not idempotent because, in the case of a write failure, the client cannot safely retry the request.
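
A small Python illustration of the difference (the functions are invented for this example, not Scylla APIs): blindly retrying an assignment is safe, while retrying a counter increment double-counts.

```python
# Illustrative only: why retries are safe for idempotent operations.
state = {"balance": 0, "name": None}

def set_name(value):           # idempotent: repeating it changes nothing
    state["name"] = value

def increment_balance(delta):  # not idempotent: each retry adds again
    state["balance"] += delta

set_name("alice")
set_name("alice")              # blind retry after a timeout: still "alice"

increment_balance(10)
increment_balance(10)          # blind retry: balance is wrong (20, not 10)

print(state)  # {'balance': 20, 'name': 'alice'}
```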


JBOD

JBOD, or Just a Bunch Of Disks, is a non-RAID storage configuration using a server with multiple disks in order to instantiate a separate file system per disk. The benefit is that if a single disk fails, only that disk needs to be replaced, not the whole disk array. The disadvantage is that free space and load may not be evenly distributed. See the FAQ.

Key Management Interoperability Protocol (KMIP)

KMIP is a communication protocol that defines message formats for storing keys on a key management server (KMIP server). You can use a KMIP server to protect your keys when using Encryption at Rest. See Encryption at Rest.


Keyspace

A collection of tables with attributes which define how data is replicated on nodes. See Ring Architecture.

Leveled compaction strategy (LCS)

LCS uses small, fixed-size (by default 160 MB) SSTables divided into different levels. See Compaction Strategies.

Log-structured-merge (LSM)

A technique of keeping sorted files and merging them. LSM is a data structure that maintains key-value pairs. See Compaction.

Logical Core (lcore)

A hyperthreaded core on a hyperthreaded system, or a physical core on a system without hyperthreading.


Memtable

An in-memory data structure servicing both reads and writes. Once full, the Memtable flushes to an SSTable. See Compaction Strategies.
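
A greatly simplified Python sketch of the flush cycle (the size limit and structures are illustrative; a real memtable tracks bytes, not entry counts):

```python
# Hypothetical sketch: a memtable absorbs writes in memory and is
# flushed to an immutable SSTable once it reaches a size limit.
MEMTABLE_LIMIT = 3

memtable = {}
sstables = []   # each flushed memtable becomes one immutable SSTable

def write(key, value):
    memtable[key] = value
    if len(memtable) >= MEMTABLE_LIMIT:
        sstables.append(dict(memtable))  # flush: persist a frozen copy
        memtable.clear()

for i in range(4):
    write(f"k{i}", i)

print(len(sstables), memtable)  # one flushed SSTable; 'k3' still in memory
```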


Mutation

A change to data, such as a column or columns to insert, or a deletion. See Hinted Handoff.


Node

A single installed instance of Scylla. See Ring Architecture.


Nodetool

A simple command-line interface for administering a Scylla node. A nodetool command can display a given node’s exposed operations and attributes. Scylla’s nodetool contains a subset of these operations. See Ring Architecture.


Partition

A subset of data that is stored on a node and replicated across nodes. There are two ways to consider a partition. In CQL, a partition appears as a group of sorted rows, and is the unit of access for queried data, given that most queries access a single partition. On the physical layer, a partition is a unit of data stored on a node and is identified by a partition key. See Ring Architecture.

Partition Key

The unique identifier for a partition. The partition key may be hashed from the first column in the primary key, or from a set of columns, often referred to as a compound primary key. The partition key determines which virtual node gets the first partition replica. See Ring Architecture.


Partitioner

A hash function for computing which data is stored on which node in the cluster. The partitioner takes a partition key as an input and returns a ring token as an output. By default, Scylla uses the 64-bit MurmurHash3 function, and this hash range is numerically represented as an unsigned 64-bit integer. See Ring Architecture.
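
A toy Python partitioner (illustrative only: Scylla uses MurmurHash3, while MD5 stands in here for a deterministic hash, and real placement walks token ranges on the ring rather than taking a modulo):

```python
import hashlib

# Hypothetical sketch: a partition key hashes to a token, and the
# token picks an owner. Not Scylla's partitioner.
def token(partition_key: str) -> int:
    digest = hashlib.md5(partition_key.encode()).digest()
    return int.from_bytes(digest[:8], "big")  # 64-bit token stand-in

def owner(partition_key: str, nodes: list) -> str:
    # Toy placement: modulo instead of real ring token ranges.
    return nodes[token(partition_key) % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]
# The same key always hashes to the same token, hence the same node.
assert owner("user:42", nodes) == owner("user:42", nodes)
```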

Primary Key

In a CQL table definition, the primary key clause specifies the partition key and optional clustering key. These keys uniquely identify each partition and row within a partition. See Ring Architecture.

Quorum (Consistency Level)

See Consistency Level: Quorum.

Read Amplification

Excessive read requests that require reading many SSTables. RA is calculated as the number of disk reads per query. High RA occurs when many pages must be read in order to answer a query. See Compaction Strategies.

Read Operation

A read operation occurs when an application gets information from an SSTable and does not change that information in any way. See Fault Tolerance.

Read Repair

An anti-entropy mechanism for read operations ensuring that replicas are updated with most recently updated data. These repairs run automatically, asynchronously, and in the background. See Scylla Read Repair.
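
The comparison-and-repair idea can be sketched in a few lines of Python (illustrative structures, not Scylla internals): the response with the newest timestamp wins, and stale replicas are overwritten.

```python
# Hypothetical sketch of read repair: replica state as key -> (value, ts).
replicas = [
    {"k": ("old", 1)},
    {"k": ("new", 2)},
]

def read_with_repair(replicas, key):
    responses = [r[key] for r in replicas]
    latest = max(responses, key=lambda vt: vt[1])
    for r in replicas:
        if r[key][1] < latest[1]:
            r[key] = latest   # repair the stale replica
    return latest[0]

print(read_with_repair(replicas, "k"))  # 'new'
print(replicas[0]["k"])                 # ('new', 2): repaired
```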


Reconciliation

A verification phase during a data migration where the target data is compared against original source data to ensure that the migration architecture has transferred the data correctly. See Scylla Read Repair.


Repair

A process which runs in the background and synchronizes the data between nodes, so that eventually, all the replicas hold the same data. See Scylla Repair.


Replication

The process of replicating data across nodes in a cluster. See Fault Tolerance.

Replication Factor

The total number of replica nodes across a given cluster. An RF of 1 means that the data will only exist on a single node in the cluster and will not have any fault tolerance. This number is a setting defined for each keyspace. All replicas share equal priority; there are no primary or master replicas. An RF can be defined for each DC. See Fault Tolerance.

Size-tiered compaction strategy

Triggers when the system has enough (four by default) similarly sized SSTables. See Compaction Strategies.


Snapshot

Snapshots in Scylla are an essential part of the backup and restore mechanism. Whereas in other databases a backup starts with creating a copy of a data file (cold backup, hot backup, shadow copy backup), in Scylla the process starts with creating a table or keyspace snapshot. See Scylla Snapshots.

Space amplification

Excessive disk space usage which requires that the disk be larger than a perfectly-compacted representation of the data (i.e., all the data in one single SSTable). SA is calculated as the ratio of the size of database files on a disk to the actual data size. High SA occurs when there is more disk space being used than the size of the data. See Compaction Strategies.
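
A worked example of the ratio in Python (the sizes are made up for illustration):

```python
# Space amplification = total on-disk size of all SSTables divided by
# the size of a perfectly compacted representation of the same data.
sstable_sizes_mb = [100, 60, 40]     # overlapping SSTables on disk
perfectly_compacted_mb = 120         # the same data in one SSTable

space_amplification = sum(sstable_sizes_mb) / perfectly_compacted_mb
print(round(space_amplification, 2))  # 1.67: ~67% above the ideal
```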


SSTable

A concept borrowed from Google Bigtable, SSTables, or Sorted String Tables, store a series of immutable rows where each row is identified by its row key. See Compaction Strategies. The SSTable format is a persistent file format. See Scylla SSTable Format.


Table

A collection of columns fetched by row. Columns are ordered by Clustering Key. See Ring Architecture.

Time-window compaction strategy

TWCS is designed for time series data and replaced Date-tiered compaction. See Compaction Strategies.


Token

A value in a range, used to identify both nodes and partitions. Each node in a Scylla cluster is given an (initial) token, which defines the end of the range a node handles. See Ring Architecture.

Token Range

The total range of potential unique identifiers supported by the partitioner. By default, each Scylla node in the cluster handles 256 token ranges. Each token range corresponds to a Vnode. Each range of hashes in turn is a segment of the total range of a given hash function. See Ring Architecture.
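
A sketch in Python of carving the 64-bit hash space into equal token ranges (illustrative; in a real cluster the range boundaries depend on the tokens the nodes actually own):

```python
# Hypothetical sketch: divide the partitioner's total hash range into
# equal token ranges, each of which would correspond to one vnode.
TOTAL_RANGE = 2 ** 64          # unsigned 64-bit hash space
VNODES_PER_NODE = 256          # Scylla's default, per the text

def token_ranges(num_ranges):
    step = TOTAL_RANGE // num_ranges
    # Each range is [start, end); the last range absorbs any remainder.
    return [(i * step, TOTAL_RANGE if i == num_ranges - 1 else (i + 1) * step)
            for i in range(num_ranges)]

ranges = token_ranges(VNODES_PER_NODE)
print(len(ranges))             # 256
assert ranges[0][0] == 0 and ranges[-1][1] == TOTAL_RANGE
```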

Tunable Consistency

The possibility for unique, per-query, Consistency Level settings. These are incremental and override fixed database settings intended to enforce data consistency. Such settings may be set directly from a CQL statement when response speed for a given query or operation is more important. See Fault Tolerance.

Virtual node

A range of tokens owned by a single Scylla node. Scylla nodes are configurable and support a set of Vnodes. In legacy token selection, each node owns a single token range. With Vnodes, a node can own many tokens or token ranges; within a cluster, these may be selected randomly from a non-contiguous set. In a Vnode configuration, each token falls within a specific token range, which in turn is represented as a Vnode. Each Vnode is then allocated to a physical node in the cluster. See Ring Architecture.

Write Amplification

Excessive compaction of the same data. WA is calculated by the ratio of bytes written to storage versus bytes written to the database. High WA occurs when there are more bytes/second written to storage than are actually written to the database. See Compaction Strategies.

Write Operation

A write operation occurs when information is added or removed from an SSTable. See Fault Tolerance.