System Requirements

Platform support

Scylla runs on 64-bit Linux. The following operating system releases are supported:

  • CentOS/RHEL 7.2 and above
  • Ubuntu 14.04
  • Ubuntu 16.04
  • Debian 8.6 and above

Scylla requires a fix to XFS append behavior that was introduced in kernel 3.15 (and back-ported to the 3.10 kernel in RHEL/CentOS). Scylla will not run with earlier kernel versions. Details are in Scylla issue 885.
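As a rough illustration, here is a minimal Python sketch (not a Scylla tool) that checks the running kernel against the 3.15 requirement. It is only a heuristic: as noted above, RHEL/CentOS back-port the fix to their 3.10 kernels, so the distribution must be taken into account.

    # Minimal sketch: warn if the running kernel predates the XFS append fix.
    # Heuristic only -- RHEL/CentOS back-port the fix to their 3.10 kernels.
    import platform
    import re

    MIN_KERNEL = (3, 15)

    def kernel_version():
        # platform.release() returns e.g. "4.15.0-112-generic"
        m = re.match(r"(\d+)\.(\d+)", platform.release())
        return (int(m.group(1)), int(m.group(2))) if m else (0, 0)

    if kernel_version() < MIN_KERNEL:
        print("Kernel %s may lack the XFS append fix (Scylla issue 885)"
              % platform.release())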

Hardware Requirements and Sizing

Scylla tries to maximize the resource usage of all system components. The shard-per-core approach allows linear scale-up with the number of cores. As the core count grows, it makes sense to scale the other resources, from memory to network, to match.

CPU

Scylla requires modern Intel CPUs that support the SSE4.2 instruction set and will not boot without it.

Any number of cores will work, since Scylla scales up with the core count. A practical approach is to use as many cores as the hardware price allows; 20-60 logical cores (including hyperthreads) is a recommended range. When using virtual machines, containers, or the public cloud, remember that each virtual CPU maps to a single logical core (hyperthread).
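A quick way to verify both points on Linux is to read /proc/cpuinfo; the minimal sketch below (not a Scylla tool) checks for the sse4_2 flag and reports the logical core count.

    # Minimal sketch: check for SSE4.2 and report the logical core count.
    # Assumes the Linux /proc/cpuinfo layout, where flag names are lowercase.
    import os

    with open("/proc/cpuinfo") as f:
        cpuinfo = f.read()

    print("SSE4.2 supported:", "sse4_2" in cpuinfo)
    print("Logical cores (lcores):", os.cpu_count())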

Memory Requirements

One logical core (lcore) is one hyperthreaded core on a hyperthreaded system, or one physical core on a system without hyperthreading.

  • Minimum: 256 MiB on 1 lcore (when run with --smp 1)
  • Recommended minimum for test environments: 1 GB or 256 MiB/lcore (whichever is higher)
  • Recommended minimum for production environments: 4 GB or 0.5 GB/lcore (whichever is higher)
  • Typical recommended for production: 16 GB or 2 GB/lcore (whichever is higher)
  • Absolute maximum: 1 TiB/lcore, up to 256 lcores
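To make the "whichever is higher" rules concrete, here is a minimal Python sketch that computes the recommended memory for a given lcore count (the 24-lcore input is just an example):

    # Minimal sketch of the sizing rules above (all figures in GB).
    def recommended_memory_gb(lcores):
        return {
            "test minimum":       max(1, 0.25 * lcores),   # 1 GB or 256 MiB/lcore
            "production minimum": max(4, 0.5 * lcores),    # 4 GB or 0.5 GB/lcore
            "production typical": max(16, 2 * lcores),     # 16 GB or 2 GB/lcore
        }

    for tier, gb in recommended_memory_gb(24).items():
        print(f"{tier}: {gb:g} GB")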

The more memory available, the better Scylla will perform, since it can use all of it for caching. The wider the rows in your schema, the more memory you will need. 64 GB-256 GB is the recommended range for a medium to high workload.

Disks

SSDs and local disks are highly recommended. Scylla is built to handle up to 10 TB of data per node. It is not rare to observe a rate of 1.5 GB/s per node with Scylla. When there are multiple drives, a RAID-0 setup and a replication factor of 3 within the local datacenter (RF=3) are recommended.
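For planning purposes, usable capacity under this layout is easy to estimate: RAID-0 sums the drive capacities, and RF=3 stores three copies of the data. A minimal sketch, with hypothetical node and drive counts:

    # Minimal sketch: usable capacity for RAID-0 nodes with RF=3.
    # The node and drive counts below are hypothetical example inputs.
    nodes = 6
    drives_per_node = 4
    drive_tb = 1.9                             # e.g. NVMe SSDs

    raw_per_node = drives_per_node * drive_tb  # RAID-0: capacities add
    usable = nodes * raw_per_node / 3          # RF=3: three copies of each row

    print(f"Raw per node: {raw_per_node:.1f} TB, usable: {usable:.1f} TB")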

HDDs are supported but may become a bottleneck. Some workloads may work with HDDs, especially those that minimize random seeks. An example of an HDD-friendly workload is a write-mostly workload (around 98% writes) with minimal random reads. If HDDs are used, try to allocate a separate disk for the commit log (not needed with SSDs).

Network

A 10 Gbps network is preferred, especially for large nodes. Make sure Scylla's setup scripts are run; they tune the interrupts and their queues.
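One way to eyeball whether interrupt tuning took effect is to total the per-CPU counters in /proc/interrupts; the minimal sketch below (not a Scylla tool) prints per-CPU totals so a skewed distribution stands out.

    # Minimal sketch: sum per-CPU interrupt counts from /proc/interrupts.
    with open("/proc/interrupts") as f:
        cpus = f.readline().split()            # header row: "CPU0 CPU1 ..."
        totals = [0] * len(cpus)
        for line in f:
            fields = line.split()[1:1 + len(cpus)]
            for i, field in enumerate(fields):
                if field.isdigit():            # skip non-counter columns
                    totals[i] += int(field)

    for cpu, total in zip(cpus, totals):
        print(cpu, total)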

Physical Hardware

Installation            Cores                         Memory        Disk                      Network
Test, minimal           4                             2 GB          Single plain SSD          1 Gbps
Production              20 (2 sockets, 10 cores each) 128 GB        RAID-0, 4 SSDs, 1-5 TB    10 Gbps
Analytics, heavy duty   28 (2 sockets, 14 cores each) 256 GB-1 TB   NVMe, 10 TB               10-56 Gbps

Cloud, AWS

i2 (High I/O) instances are highly recommended. This family provides very fast SSD-backed instance storage optimized for very high random I/O performance, delivering high IOPS at a low cost. Enhanced networking, which exposes the physical network card to the VM, is also recommended.

Model        vCPU   Mem (GB)   Storage (GB)
i2.xlarge    4      30.5       1 x 800 SSD
i2.2xlarge   8      61         2 x 800 SSD
i2.4xlarge   16     122        4 x 800 SSD
i2.8xlarge   32     244        8 x 800 SSD

i3 instances are designed for I/O-intensive workloads and are equipped with highly efficient NVMe SSD storage that can deliver up to 3.3 million IOPS. They are a great fit for low latency and high throughput: compared to i2 instances, i3 instances provide denser, less expensive storage along with substantially more IOPS and more network bandwidth per CPU core.

i3 instances will be supported starting from Scylla version 2.0.

Model         vCPU   Mem (GB)   Storage (NVMe SSD)
i3.large      2      15.25      0.475 TB
i3.xlarge     4      30.5       0.950 TB
i3.2xlarge    8      61         1.9 TB
i3.4xlarge    16     122        3.8 TB
i3.8xlarge    32     244        7.6 TB
i3.16xlarge   64     488        15.2 TB

Cloud, GCE

Pick a zone where Haswell CPUs are available. According to Google, local SSDs offer sub-millisecond latency and up to 680,000 read IOPS and 360,000 write IOPS. The CentOS 7.x image with the NVMe disk interface is recommended.

Model            vCPU   Mem (GB)   Storage
n1-standard-8    8      30         eight 375 GB partitions (3 TB total)
n1-standard-16   16     60         eight 375 GB partitions (3 TB total)
n1-standard-32   32     120        eight 375 GB partitions (3 TB total)
n1-highmem-16    16     104        eight 375 GB partitions (3 TB total)
n1-highmem-32    32     208        eight 375 GB partitions (3 TB total)

Getting Started