Administration Guide

System requirements

Platform support

Scylla runs on 64-bit Linux. The following operating system releases are supported:

Distribution   Version
CentOS/RHEL    7.2 and above
Ubuntu         14.04
Ubuntu         16.04
Debian         8.6 and above

Physical hardware

Installation            Cores                        Memory        Disk                      Network
Test, minimal           4                            2 GB          Single plain SSD          1 Gbps
Production              20 (2 sockets x 10 cores)    128 GB        RAID-0, 4 SSDs, 1-5 TB    10 Gbps
Analytics, heavy duty   28 (2 sockets x 14 cores)    256 GB-1 TB   NVMe, 10 TB               10-56 Gbps

Scylla requires a fix to XFS append introduced in kernel 3.15 (back-ported to 3.10 in RHEL/CentOS). Scylla will not run with earlier kernel versions. Details in Scylla issue 885.
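As a pre-install sanity check (a sketch, not an official Scylla script), you can compare the running kernel release against the 3.15 minimum. Note that on RHEL/CentOS the fix is back-ported to 3.10, so this simple version test is too strict there:

```shell
# Compare the running kernel against the 3.15 minimum for the XFS append fix.
# Too strict on RHEL/CentOS, where the fix is back-ported to 3.10 kernels.
required=3.15
current=$(uname -r | cut -d- -f1)
oldest=$(printf '%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
    echo "kernel $current: new enough for Scylla on XFS"
else
    echo "kernel $current: too old for Scylla on XFS"
fi
```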

Hardware requirements and sizing

Scylla tries to maximize the resource usage of all system components. The shard-per-core approach allows linear scale-up with the number of cores. As you have more cores, it makes sense to balance the other resources, from memory to network.

  • Scylla is CPU intensive. Do not run additional CPU intensive tasks on the same server/cores as Scylla.

CPU

Scylla requires modern Intel CPUs that support the SSE4.2 instruction set and will not boot without it.
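A quick way to verify this before installing is to look for the `sse4_2` flag as the Linux kernel exposes it in /proc/cpuinfo on x86 (a sketch, assuming that layout):

```shell
# Check for the SSE4.2 CPU flag; Scylla will not boot without it.
if grep -q -m1 'sse4_2' /proc/cpuinfo; then
    echo "SSE4.2 present: Scylla can run on this CPU"
else
    echo "SSE4.2 missing: Scylla will not boot on this CPU"
fi
```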

In terms of the number of cores, any number works, since Scylla scales up with the core count. A practical approach is to use a large number of cores as long as the hardware price remains reasonable. 20-60 logical cores (including hyperthreads) is a good range, but any number fits. When using virtual machines, containers, or the public cloud, remember that each virtual CPU maps to a single logical core (hyperthread).

Memory requirements

The more memory you have, the better Scylla will perform, since Scylla can use all of it for caching. The wider the rows in your schema, the more memory you will need. 64 GiB-256 GiB is a good range for a medium to high workload.

Disks

We highly recommend local SSDs. Scylla is built for large data volumes and large storage per node. The rule of thumb is a 30:1 disk-to-RAM ratio; for example, 30 TB of storage requires 1 TB of RAM. When there are multiple drives, we recommend a RAID-0 setup and a replication factor of 3 within the local datacenter (RF=3).

HDDs are supported but may become a bottleneck. Some workloads can run on HDDs, especially those that minimize random seeks. An example of an HDD-friendly workload is write-mostly (98% writes) with minimal random reads. If you use HDDs, try to allocate a separate disk for the commit log (not needed with SSDs).
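On HDDs, the separate commit-log disk translates into pointing commitlog_directory at its own spindle in scylla.yaml. A sketch, with illustrative mount points only:

```yaml
# Illustrative layout - adjust mount points to your hardware.
data_file_directories:
    - /mnt/datadisk/scylla/data                       # RAID-0 over the data HDDs
commitlog_directory: /mnt/logdisk/scylla/commitlog    # dedicated spindle
```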

Network

10 Gbps networking is preferred, especially for large nodes. Make sure you run the Scylla setup scripts, which tune the interrupts and their queues.

System configuration

See System Configuration Guide for details on optimum OS settings for Scylla. (These settings are performed automatically in the Scylla packages, Docker containers, and Amazon AMIs.)

Scylla Configuration

Scylla configuration files are:

Installed location                            Description
/etc/default/scylla-server (Ubuntu/Debian)    Server startup options
/etc/sysconfig/scylla-server (others)         Server startup options
/etc/scylla/scylla.yaml                       Main Scylla configuration file
/etc/scylla/cassandra-rackdc.properties       Rack & datacenter configuration file

scylla-server

The scylla-server file contains configuration related to starting up the Scylla server.

scylla.yaml

scylla.yaml is equivalent to the Apache Cassandra cassandra.yaml configuration file and is compatible for the relevant parameters. Below is a subset of scylla.yaml with the parameters you are most likely to update. For the full list of parameters, see the file itself.

# The name of the cluster. This is mainly used to prevent machines in
# one logical cluster from joining another.
cluster_name: 'Test Cluster'

# This defines the number of tokens randomly assigned to this node on the ring
# The more tokens, relative to other nodes, the larger the proportion of data
# that this node will store. You probably want all nodes to have the same number
# of tokens assuming they have equal hardware capability.
#
# If you already have a cluster with 1 token per node, and wish to migrate to
# multiple tokens per node, see http://wiki.apache.org/cassandra/Operations
num_tokens: 256

# Directory where Scylla should store data on disk.
data_file_directories:
    - /var/lib/scylla/data

# Commit log.  When running on magnetic HDDs, this should be on a
# separate spindle from the data directories.
commitlog_directory: /var/lib/scylla/commitlog

# The seed_provider class_name is saved for future use.
# Seed addresses are mandatory!
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Scylla nodes use this list of hosts to find each other and learn
    # the topology of the ring.  You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "127.0.0.1"

# Address or interface to bind to and tell other Scylla nodes to connect to.
# You _must_ change this if you want multiple nodes to be able to communicate!
#
# Setting listen_address to 0.0.0.0 is always wrong.
listen_address: localhost

# Address to broadcast to other Scylla nodes
# Leaving this blank will set it to the same value as listen_address
# broadcast_address: 1.2.3.4

# port for the CQL native transport to listen for clients on
# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
native_transport_port: 9042

# Uncomment to enable experimental features
# experimental: true

By default, scylla.yaml is located at /etc/scylla/scylla.yaml.

scylla.yaml required settings

The following configuration items must be set:

Item            Content
cluster_name    Name of the cluster; all nodes in the cluster must have the same name
seeds           Seed nodes used during startup to bootstrap the gossip process and join the cluster
listen_address  IP address that Scylla uses to connect to other Scylla nodes in the cluster
rpc_address     IP address of the interface for client connections (Thrift, CQL)
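Putting the required items together, a minimal sketch for one node of a small cluster might look like this in scylla.yaml (all names and addresses are example values, not defaults):

```yaml
cluster_name: 'prod-cluster'              # must match on every node
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.0.2.11,192.0.2.12"
listen_address: 192.0.2.11                # this node's inter-node address
rpc_address: 192.0.2.11                   # this node's client-facing address
```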

internode_compression

internode_compression controls whether traffic between nodes is compressed.

  • all - all traffic is compressed.
  • dc - traffic between different datacenters is compressed.
  • none - nothing is compressed (default).
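For example, to compress only cross-datacenter traffic, set in scylla.yaml:

```yaml
internode_compression: dc
```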

Configuring TLS/SSL in scylla.yaml

Scylla versions 1.1 and greater support encryption between nodes and between client and node. See the Scylla TLS/SSL Guide for configuration settings.

Networking

Scylla uses the following ports:

Port Description Protocol
9042 CQL (native_transport_port) TCP
7000 Inter-node communication (RPC) TCP
7001 SSL inter-node communication (RPC) TCP
7199 JMX management TCP
10000 Scylla REST API TCP
9180 Prometheus API TCP
9100 node_exporter (optional) TCP
9160 Scylla client port (Thrift) TCP

All of the ports above need to be open to external clients (CQL), external admin systems (JMX), and other nodes (RPC). The REST API port (10000) can be kept closed to external incoming connections.
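As a sketch (assuming firewalld; adapt for iptables or cloud security groups), the client- and node-facing ports from the table can be opened with commands like the following. The script only prints the commands; run them as root to apply:

```shell
# Print firewall-cmd openings for Scylla's externally reachable ports.
# Port 10000 is intentionally omitted so the REST API stays local-only.
for port in 9042 7000 7001 7199 9160 9180 9100; do
    echo "firewall-cmd --permanent --add-port=${port}/tcp"
done
echo "firewall-cmd --reload"
```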

The JMX service, scylla-jmx, runs on port 7199. It is required in order to manage Scylla using nodetool and other Apache Cassandra-compatible utilities. The scylla-jmx process must be able to connect to port 10000 on localhost. The JMX service listens for incoming JMX connections on all network interfaces on the system.

Advanced networking

It is possible that a client, or another node, may need to use a different IP address to connect to a Scylla node from the address that the node is listening on. This is the case when a node is behind port forwarding. Scylla allows for setting alternate IP addresses.

Do not set any IP address to 0.0.0.0.

Address Content Default
listen_address IP address of interface for inter-node connections, as seen from localhost. No default (required)
broadcast_address IP address of interface for inter-node connections, as seen from other nodes in the cluster. listen_address
rpc_address IP address of interface for client connections, as seen from localhost No default (required)
broadcast_rpc_address IP address of interface for client connections, as seen from clients rpc_address

If other nodes can connect directly to listen_address, then broadcast_address does not need to be set.

If clients can connect directly to rpc_address, then broadcast_rpc_address does not need to be set.

Core dumps

On RHEL and CentOS, the Automatic Bug Reporting Tool (ABRT) conflicts with the Scylla coredump configuration. Remove it before installing Scylla: sudo yum remove -y abrt

Scylla places any core dumps in /var/lib/scylla/coredump. They are not visible with the coredumpctl command. See the System Configuration Guide for details on core dump configuration scripts. Check with Scylla support before sharing any core dump, as it may contain sensitive data.

Schedule fstrim

Scylla sets up a daily fstrim run on the filesystem(s) containing the Scylla commitlog and data directories. This utility discards, or trims, any blocks no longer in use by the filesystem.

Monitoring

Scylla exposes interfaces for online monitoring, as described below.

Monitoring Interfaces

Scylla Monitoring Interfaces

Monitoring Stack

Scylla Monitoring Stack

JMX

Scylla JMX is compatible with Apache Cassandra, exposing the relevant subset of MBeans.

REST

For each JMX operation, attribute get, and attribute set, Scylla exposes a matching REST API call. You can interact with the REST API using curl or the Swagger UI available at your-ip:10000/ui.
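A sketch of a curl invocation against the local node (the endpoint path is illustrative; browse your-ip:10000/ui for the actual list in your Scylla version). The snippet only builds and prints the command:

```shell
# Build a REST call against the local node's API port; run the printed
# command against a live node. The endpoint path is an example, not a
# guaranteed route - check the Swagger UI for your version.
api="http://localhost:10000"
cmd="curl -s ${api}/storage_service/hostid/local"
echo "$cmd"
```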

Un-contents

Scylla is designed for high performance before any tuning, with fewer layers that interact in unpredictable ways, and with better algorithms that do not require manual tuning. The following items are found in the manuals for other data stores but do not need to appear here.

Configuration un-contents

  • Generating tokens
  • Configuring virtual nodes

Operations un-contents

  • Tuning Bloom filters
  • Data caching
  • Configuring memtable throughput
  • Configuring compaction
  • Compression
  • Testing compaction and compression
  • Tuning Java resources
  • Purging gossip state on a node

Help with Scylla

Contact Support, or visit the Scylla Community page for peer support.

© 2016, The Apache Software Foundation.

Apache®, Apache Cassandra®, Cassandra®, the Apache feather logo and the Apache Cassandra® Eye logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by The Apache Software Foundation is implied by the use of these marks.