Configure Scylla¶
System configuration steps are performed automatically by the Scylla RPM and deb packages. For information on getting started with Scylla, see Getting Started.
All Scylla AMIs and Docker images are pre-configured by a script with the following steps. This document is provided as a reference.
System Configuration Files and Scripts¶
Several system configuration settings should be applied. For ease of use, the necessary scripts and configuration files are provided. Files are under dist/common and seastar/scripts in the Scylla source code and are installed in the appropriate system locations. (For information on Scylla's own configuration file, see Scylla Configuration.)
System Configuration Files¶
| Source file | Installed location | Description |
|---|---|---|
| limits.d/scylla.conf | /etc/security/limits.d/scylla.conf | Remove system resource limits |
| sysconfig/scylla-server | /etc/sysconfig/scylla-server | Server startup options (written by the sysconfig_setup script) |
| | | Configure core dumps to use the … |
Setup Scripts¶
The following scripts are documented for reference purposes. All of them are invoked by the scylla_setup script, which should be run at installation time or whenever the system hardware changes (a minimal invocation is sketched after the table).
| Source file | Installed location | Description |
|---|---|---|
| | /usr/lib/scylla/scylla_bootparam_setup | Set kernel options in bootloader |
| | /usr/lib/scylla/scylla_coredump_setup | Remove crash reporting software and set pattern for core dump names |
| | /usr/lib/scylla/scylla_ntp_setup | Configure Network Time Protocol |
| | /usr/lib/scylla/scylla_prepare | Set up RAID and invoke network configuration |
| | /usr/lib/scylla/scylla_raid_setup | Configure RAID and make XFS filesystem |
| | | Wrapper to run Scylla with arguments from environment |
| | | Compress a core dump file (Ubuntu only) |
| | | Reset network mode if running in virtio or DPDK mode |
| | | Rewrite the … |
| posix_net_conf.sh | | Set up networking options |
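These scripts are not normally run individually; a typical installation simply runs the interactive setup once per node. A minimal sketch, assuming the package installs the script on the PATH as scylla_setup:

# Run the interactive setup on a new node; it invokes the scripts
# listed above (RAID, NTP, core dumps, networking, ...).
sudo scylla_setup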
Note
It is important to keep the I/O scheduler configuration in sync on nodes with the same hardware. For this reason, we recommend skipping scylla_io_setup when provisioning a new node with exactly the same hardware setup as the existing nodes in the cluster.
Instead, copy the following files from an existing node to the new node after running scylla_setup, and restart the scylla-server service (if it is already running):
- /etc/scylla.d/io.conf
- /etc/scylla.d/io_properties.yaml
Using a different I/O scheduler configuration may result in unnecessary bottlenecks.
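For example, a minimal sketch of cloning the I/O configuration from an existing, identically-configured node (node1 is a placeholder hostname):

# Copy the tuned I/O configuration from an existing node instead of
# re-running scylla_io_setup (placeholder host "node1").
scp root@node1:/etc/scylla.d/io.conf /etc/scylla.d/
scp root@node1:/etc/scylla.d/io_properties.yaml /etc/scylla.d/
# Restart only if scylla-server is already running.
systemctl restart scylla-server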
Bootloader Settings¶
If Scylla is installed on an Amazon AMI, the bootloader should provide the clocksource=tsc and tsc=reliable options. This enables an accurate, high-resolution Time Stamp Counter (TSC) for setting the system time.
This configuration is provided in the file /usr/lib/scylla/scylla_bootparam_setup.
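To check that the running kernel actually picked up these parameters, a generic check (not part of the script) is:

# Verify the kernel command line contains the recommended TSC options.
grep -q clocksource=tsc /proc/cmdline && grep -q tsc=reliable /proc/cmdline \
  && echo "TSC options present" || echo "TSC options missing"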
Remove Crash Reporting Software¶
Remove the apport-noui or abrt packages if present, and set up a location and file name pattern for core dumps.
This configuration is provided in the file /usr/lib/scylla/scylla_coredump_setup.
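The effect is roughly: remove the distribution's crash reporter and point kernel.core_pattern at a dedicated dump location. A sketch, with an illustrative pattern and drop-in file name rather than the exact ones the script writes:

# Remove distribution crash reporters if present (Ubuntu / RHEL-family).
apt-get remove -y apport-noui 2>/dev/null || yum remove -y abrt 2>/dev/null
# Illustrative core dump pattern; the script's actual pattern may differ.
echo 'kernel.core_pattern=/var/lib/scylla/coredump/core.%e.%p.%t' \
  > /etc/sysctl.d/99-scylla-coredump.conf
sysctl -p /etc/sysctl.d/99-scylla-coredump.conf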
Set Up Network Time Synchronization¶
It is highly recommended to enforce time synchronization between Scylla servers.
Run ntpstat on all nodes to check that system time is synchronized. If you are running in a virtualized environment and your system time is set on the host, you may not need to run NTP on the guest. Check the documentation for your platform.
If you have your own time servers shared with an application using Scylla, use the same NTP configuration as for your application servers. The script /usr/lib/scylla/scylla_ntp_setup provides sensible defaults, using Amazon NTP servers if installed on the Amazon cloud, and other pool NTP servers otherwise.
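To confirm synchronization status, run whichever client matches the daemon your distribution ships (chronyc shown as an alternative where chrony replaces ntpd; this is a generic check, not output of the script):

ntpstat            # ntpd-based systems: reports "synchronised to ..." when in sync
chronyc tracking   # chrony-based systems: "Leap status : Normal" indicates sync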
Set Up RAID and Filesystem¶
Using the XFS filesystem is mandatory for production; Scylla will be significantly slower on any other filesystem.
The script /usr/lib/scylla/scylla_raid_setup performs the necessary RAID configuration and XFS filesystem creation for Scylla.
Arguments to the script are:
- -d: specify disks for RAID
- -r: MD device name for RAID
- -u: update /etc/fstab for RAID
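For example, a hypothetical invocation for a node with two NVMe data disks (the device names are placeholders, and the exact argument syntax may vary between Scylla versions):

# Build a RAID array md0 from the data disks, format it as XFS,
# and update /etc/fstab (placeholder device names).
/usr/lib/scylla/scylla_raid_setup -d /dev/nvme0n1,/dev/nvme1n1 -r md0 -u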
On the Scylla AMI, the RAID configuration is handled automatically by the /usr/lib/scylla/scylla_prepare script.
CPU Pinning¶
When installing Scylla, it is highly recommended to use the scylla_setup script. Scylla should not share CPUs with any other CPU-consuming process. In addition, when running Scylla on AWS, we recommend pinning all NIC IRQs to CPU0 for the same reason. As a result, Scylla should be prevented from running on CPU0 and its hyper-threading siblings. Do not use this option if the node has four or fewer CPUs.
To verify the CPU pinning:
cat /etc/scylla.d/cpuset.conf
Example output:
--cpuset 1-15,17-31
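The hyper-threading siblings of CPU0, which should also be excluded from the cpuset, can be listed from sysfs:

# List CPU0 and its hyper-threading siblings; these CPUs should not
# appear in the --cpuset range shown above.
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list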
Networking¶
On AWS:¶
- Prevent irqbalance from moving your NICs' IRQs.
- Bind all NICs' HW queues to CPU0:

for irq in `cat /proc/interrupts | grep <networking iface name> | cut -d":" -f1`
do
    echo "Binding IRQ $irq to CPU0"
    echo 1 > /proc/irq/$irq/smp_affinity
done

- Enable RPS and bind RPS queues to CPUs other than CPU0 and its hyper-threading siblings.
- Enable XPS and distribute all XPS queues among all available CPUs.

The posix_net_conf.sh script does all of the above.
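A minimal sketch of what the RPS/XPS steps look like at the sysfs level, for a hypothetical interface eth0 on a 32-CPU node where CPU16 is CPU0's hyper-threading sibling (the masks are illustrative; posix_net_conf.sh computes them from the actual topology):

# RPS: steer receive processing away from CPU0 and its HT sibling
# (CPU16), i.e. allow CPUs 1-15 and 17-31 (mask 0xfffefffe).
for rxq in /sys/class/net/eth0/queues/rx-*; do
    echo fffefffe > $rxq/rps_cpus
done
# XPS: spread transmit queues over all 32 CPUs.
for txq in /sys/class/net/eth0/queues/tx-*; do
    echo ffffffff > $txq/xps_cpus
done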
On Bare Metal Setups with Multi-Queue NICs¶
- Prevent irqbalance from moving your NICs' IRQs.
- Bind each NIC's IRQ to a separate CPU.
- Enable XPS exactly the same way as for AWS above.
- Set higher values for the listen() socket backlog and for the unacknowledged pending connections backlog:

echo 4096 > /proc/sys/net/core/somaxconn
echo 4096 > /proc/sys/net/ipv4/tcp_max_syn_backlog

The posix_net_conf.sh script with the -mq parameter does all of the above.
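Done by hand, the IRQ distribution and a persistent version of the backlog settings might look like the sketch below (the interface name eth0 and the sysctl drop-in file name are placeholders):

# Spread the NIC's IRQs round-robin across the available CPUs
# (placeholder interface name "eth0").
cpu=0
ncpus=$(nproc)
for irq in $(grep eth0 /proc/interrupts | cut -d":" -f1); do
    echo "Binding IRQ $irq to CPU $cpu"
    echo $cpu > /proc/irq/$irq/smp_affinity_list
    cpu=$(( (cpu + 1) % ncpus ))
done

# Make the backlog settings persistent across reboots
# (hypothetical drop-in file name).
cat <<'EOF' > /etc/sysctl.d/99-scylla-net.conf
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
EOF
sysctl -p /etc/sysctl.d/99-scylla-net.conf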
Configuring Scylla¶
Configuration for Scylla itself is in the Scylla Configuration section of the administration guide.
Development System Configuration¶
The following item is not required in production.
When working on DPDK support for Scylla, enable hugepages.
NR_HUGEPAGES=128
mount -t hugetlbfs -o pagesize=2097152 none /mnt/huge
mount -t hugetlbfs -o pagesize=2097152 none /dev/hugepages/
for n in /sys/devices/system/node/node?; do
echo $NR_HUGEPAGES > $n/hugepages/hugepages-2048kB/nr_hugepages;
done
Huge page configuration is written to /etc/sysconfig/scylla-server by the script /usr/lib/scylla/sysconfig_setup.
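To confirm the reservation took effect:

# Verify the 2 MB huge pages were reserved on each NUMA node.
grep HugePages_Total /proc/meminfo
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages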