System Requirements

Platform support

Scylla runs on 64-bit Linux. The following operating system releases are supported:

Linux distribution    Version
CentOS/RHEL           7.2 and above
Ubuntu                14.04, 16.04
Ubuntu                18.04 *
Debian                8.6 and above (minor releases)
Debian                9.0 *

* Supported in Scylla 2.3

For a more detailed list with recommendations, refer to the Operating System (OS) Support Guide.

Scylla requires a fix to XFS append behavior that was introduced in kernel 3.15 (and back-ported to 3.10 in RHEL/CentOS). Scylla will not run on earlier kernel versions. See Scylla issue 885 for details.
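As a quick sanity check, you can confirm the running kernel version from a short script before installing Scylla. The following is a minimal Python sketch (an illustrative helper, not a Scylla tool); on RHEL/CentOS a 3.10 kernel may still carry the back-ported fix, so the version number alone is not conclusive there.

    # Minimal sketch: check whether the running kernel is at least 3.15,
    # the version where the XFS append fix landed upstream. RHEL/CentOS
    # back-port the fix to 3.10, so treat a 3.10 result there as "verify".
    import os

    def kernel_version():
        # os.uname().release looks like "4.15.0-112-generic"; keep major.minor.
        release = os.uname().release.split("-")[0]
        return tuple(int(part) for part in release.split(".")[:2])

    if __name__ == "__main__":
        major, minor = kernel_version()
        if (major, minor) >= (3, 15):
            print(f"Kernel {major}.{minor}: XFS append fix present upstream")
        else:
            print(f"Kernel {major}.{minor}: verify the back-ported XFS fix "
                  "(RHEL/CentOS 3.10) before running Scylla")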

Hardware Recommendations

  • Storage: For maximum performance, SSD devices are highly recommended. When using more than one drive, combine them in a RAID-0 array for the Scylla data directory.
  • Networking: 10Gbit/s cards.
  • Scylla is CPU intensive. Do not run additional CPU intensive tasks on the same server/cores as Scylla.

Hardware Requirements and Sizing

Scylla tries to maximize the resource usage of all system components. The shard-per-core approach allows linear scale-up with the number of cores. As you add more cores, it makes sense to scale the other resources accordingly, from memory to network.

CPU

Scylla requires a modern Intel CPU that supports the SSE4.2 instruction set and will not start without it.
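The flag can be verified before installation by inspecting /proc/cpuinfo. Below is a minimal Python sketch (the helper name is illustrative) that checks for the sse4_2 CPU flag on Linux.

    # Minimal sketch: confirm the CPU advertises SSE4.2, which Scylla requires.
    def has_sse42(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as cpuinfo:
            for line in cpuinfo:
                if line.startswith("flags"):
                    return "sse4_2" in line.split()
        return False

    if __name__ == "__main__":
        print("SSE4.2 supported" if has_sse42()
              else "SSE4.2 missing: Scylla will not start on this CPU")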

Any number of cores will work, since Scylla scales up with the core count. A practical approach is to use as many cores as the hardware budget allows; 20-60 logical cores (including hyperthreads) is a recommended range. When using virtual machines, containers, or the public cloud, remember that each virtual CPU maps to a single logical core (hyperthread).

Memory Requirements

The more memory available, the better Scylla performs, since it uses all available memory for caching. The wider the rows in your schema, the more memory is required. 64 GB-256 GB is the recommended range for a medium to high workload. Memory requirements are calculated from the number of logical cores (lcores) in your system; an lcore is a hyperthread on a hyperthreaded system, or a physical core on a system without hyperthreading. A worked example of these rules follows the list below.

  • Recommended size: 16 GB or 2GB per lcore (whichever is higher)
  • Maximum: 1 TiB per lcore, up to 256 lcores
  • Minimum:
    • For test environments: 1 GB or 256 MiB per lcore (whichever is higher)
    • For production environments: 4 GB or 0.5 GB per lcore (whichever is higher)
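To make the arithmetic concrete, the sketch below applies these rules to a given lcore count. The helper name and the GiB-based rounding are illustrative assumptions, not a Scylla sizing tool.

    # Minimal sketch applying the memory sizing rules above. Values are in GiB;
    # the rules above mix GB and GiB loosely, so treat the output as guidance.
    def memory_guidance_gib(lcores, production=True):
        recommended = max(16, 2 * lcores)          # 16 GB or 2 GB per lcore
        if production:
            minimum = max(4, 0.5 * lcores)         # 4 GB or 0.5 GB per lcore
        else:
            minimum = max(1, 0.25 * lcores)        # 1 GB or 256 MiB per lcore
        maximum = 1024 * min(lcores, 256)          # 1 TiB per lcore, up to 256 lcores
        return {"minimum": minimum, "recommended": recommended, "maximum": maximum}

    if __name__ == "__main__":
        # Example: a 24-lcore production node.
        print(memory_guidance_gib(24))
        # -> {'minimum': 12.0, 'recommended': 48, 'maximum': 24576}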

Disks

SSD

SSD and local disks are highly recommended. Scylla is built to handle up to 10 TB per node. It is not rare to observe a rate of 1.5 TB/s per node with Scylla. When there are multiple drives, a RAID-0 setup and a replication factor of 3 within the local datacenter (RF=3) are recommended.

HDD

HDDs are supported but not recommended, as some workloads will bottleneck on disk seeks. Workloads that work well on HDDs minimize random seeks, for example mostly-write workloads (around 98% writes) with minimal random reads. If you use HDDs instead of the recommended SSDs, allocate a separate disk for the commit log (this is not required with SSDs).

Network

A network speed of 10 Gbps or more is recommended, especially for large nodes. To tune the interrupts and their queues, run the Scylla setup scripts.

Physical Hardware

Installation            Cores                           Memory          Disk                      Network
Test, minimal           4                               2 GB            Single plain SSD          1 Gbps
Production              20 (2 sockets, 10 cores each)   128 GB          RAID-0, 4 SSDs, 1-5 TB    10 Gbps
Analytics, heavy duty   28 (2 sockets, 14 cores each)   256 GB - 1 TB   NVMe, 10 TB               10-56 Gbps

Cloud, AWS

i2 Instances

High I/O (i2) instances are highly recommended. This family includes the High Storage Instances, which provide very fast SSD-backed instance storage optimized for very high random I/O performance, delivering high IOPS at a low cost. Enhanced networking, which exposes the physical network card to the VM, is also recommended.

Model        vCPU   Mem (GB)   Storage (GB)
i2.xlarge    4      30.5       1 x 800 SSD
i2.2xlarge   8      61         2 x 800 SSD
i2.4xlarge   16     122        4 x 800 SSD
i2.8xlarge   32     244        8 x 800 SSD

i3 Instances

Designed for I/O-intensive workloads and equipped with highly efficient NVMe SSD storage, i3 instances can deliver up to 3.3 million IOPS. They are a great fit for low latency and high throughput. Compared to i2 instances, i3 instances provide denser, less expensive storage and can deliver substantially more IOPS and more network bandwidth per CPU core.

Model         vCPU   Mem (GB)   Storage (NVMe SSD)
i3.large      2      15.25      0.475 TB
i3.xlarge     4      30.5       0.950 TB
i3.2xlarge    8      61         1.9 TB
i3.4xlarge    16     122        3.8 TB
i3.8xlarge    32     244        7.6 TB
i3.16xlarge   64     488        15.2 TB
i3.metal      72 *   512        8 x 1.9 TB    (New in version 2.3)
  • i3.metal provides 72 logical processors on 36 physical cores

Cloud, GCE

Pick a zone where Haswell CPUs are available. According to Google, local SSD storage offers less than 1 ms of latency and up to 680,000 read IOPS and 360,000 write IOPS. The CentOS 7.x image with the NVMe disk interface is recommended.

Model            vCPU   Mem (GB)   Storage (local SSD)
n1-standard-8    8      30         8 x 375 GB partitions (3 TB)
n1-standard-16   16     60         8 x 375 GB partitions (3 TB)
n1-standard-32   32     120        8 x 375 GB partitions (3 TB)
n1-highmem-16    16     104        8 x 375 GB partitions (3 TB)
n1-highmem-32    32     208        8 x 375 GB partitions (3 TB)
