Install Scylla Monitoring Stack
This document describes the setup of the Scylla Monitoring Stack, based on the Scylla Prometheus API.
The Scylla Monitoring stack needs to be installed on a dedicated server, external to the Scylla cluster. Make sure the Scylla Monitoring server has access to the Scylla nodes so that it can pull the metrics over the Prometheus API.
For evaluation systems, you can run the Scylla Monitoring stack on any server (or laptop) that can handle three Docker instances at the same time. For production systems, see the recommendations below.
CPU - at least 2 physical cores / 4 vCPUs
Memory - 15GB+ DRAM
Disk - persistent disk storage is proportional to the number of cores and Prometheus retention period (see the following section)
Network - 1GbE/10GbE preferred
Prometheus storage disk performance requirements: persistent block volume, for example an EC2 EBS volume
Prometheus storage disk volume requirement: proportional to the number of metrics it holds. The default retention period is 15 days, and the disk requirement is around 200MB per core, assuming the default scraping interval of 15s.
For example, when monitoring a 6 node Scylla cluster, each with 16 CPU cores, and using the default 15 days retention time, you will need minimal disk space of
6 * 16 * 200MB ~ 20GB
To account for unexpected events, such as replacing or adding nodes, we recommend allocating at least 4-5x that space, in this case ~100GB. The Prometheus storage disk does not have to be as fast as the Scylla disk; EC2 EBS, for example, is fast enough and provides HA out of the box.
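The sizing arithmetic above can be sketched as a quick calculation; the node count, core count, and per-core figure below are the example values from this section:

```shell
# Estimate Prometheus disk space (default 15-day retention, 15s scrape interval).
nodes=6            # Scylla nodes in the cluster
cores=16           # CPU cores per node
mb_per_core=200    # ~200MB of metrics per core

base_mb=$((nodes * cores * mb_per_core))
recommended_mb=$((base_mb * 5))    # 4-5x headroom for node replace/add events

echo "baseline: ${base_mb} MB"          # 19200 MB, ~20GB
echo "recommended: ${recommended_mb} MB"  # 96000 MB, ~100GB
```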
See the Docker post-installation guide for more information.
Avoid running the container as root.
To avoid running Docker as root, add the user you are going to use for monitoring purposes to the docker group.
Create a Docker group.
sudo groupadd docker
Add your user to the Docker group.
sudo usermod -aG docker $USER
Enable the Docker service:
sudo systemctl enable docker
Download and extract the latest Scylla Monitoring Stack binary; for example, for release 3.0:
wget https://github.com/scylladb/scylla-monitoring/archive/scylla-monitoring-3.0.tar.gz
tar -xvf scylla-monitoring-3.0.tar.gz
cd scylla-monitoring-scylla-monitoring-3.0
As an alternative, you can clone and use the Git repository directly.
git clone https://github.com/scylladb/scylla-monitoring.git
cd scylla-monitoring
git checkout branch-3.0
Start Docker service if needed
sudo systemctl restart docker
Update prometheus/scylla_servers.yml with the targets' IPs (the servers you wish to monitor).
It is important that the name listed in dc in the labels matches the datacenter names used by Scylla. Use the nodetool status command to validate the datacenter names used by Scylla.
- targets:
    - 172.17.0.2
    - 172.17.0.3
  labels:
    cluster: cluster1
    dc: dc1
If you want to add your managed cluster to Scylla Monitoring, add the IPs of the nodes as well as the cluster name you used when you added the cluster to Scylla Manager. It is important that the cluster label and the cluster name in Scylla Manager match.
Add the IPv6 addresses with their square brackets and the port numbers.
- targets:
    - "[2600:1f18:26b1:3a00:fac8:118e:9199:67b9]:9180"
    - "[2600:1f18:26b1:3a00:fac8:118e:9199:67ba]:9180"
  labels:
    cluster: cluster1
    dc: dc1
For IPv6 to work, both the Scylla Prometheus address and node_exporter's --web.listen-address should be set to listen to an IPv6 address.
For general node information (disk, network, etc.), the Scylla Monitoring Stack uses the node_exporter agent that runs on the same machine as Scylla. By default, there is no need to create node_exporter_server.yml: Prometheus will use the same targets it uses for Scylla and assumes a node_exporter is running on each Scylla server. If this is not the case, you can override the node_exporter targets by creating an additional target file and passing it to start-all.sh.
If needed, you can set your own target file instead of the default prometheus/scylla_servers.yml, using the -s flag for Scylla target files:
./start-all.sh -s my_scylla_server.yml -d data_dir
Mark the different Data Centers with Labels.
As can be seen in the examples, each target has its own set of labels to mark the cluster name and the data center (dc). You can add multiple targets in the same file for multiple clusters or multiple data centers.
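For instance, a scylla_servers.yml covering two datacenters of the same cluster might look like this (the IPs below are placeholders):

```yaml
- targets:
    - 172.17.0.2
    - 172.17.0.3
  labels:
    cluster: cluster1
    dc: dc1
- targets:
    - 172.17.0.4
    - 172.17.0.5
  labels:
    cluster: cluster1
    dc: dc2
```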
You can use the genconfig.py script to generate the server file. For example:
./genconfig.py -d myconf -dc dc1:192.168.0.1,192.168.0.2 -dc dc2:192.168.0.3,192.168.0.4
This will generate a server file for four servers in two datacenters: servers 192.168.0.1 and 192.168.0.2 in dc1, and servers 192.168.0.3 and 192.168.0.4 in dc2.
The genconfig.py script can also use nodetool status to generate the server file, using the -NS flag:
nodetool status | ./genconfig.py -NS
4. Connect to Scylla Manager by creating scylla_manager_servers.yml. If you are using Scylla Manager, you should set its IP. You must add a scylla_manager_servers.yml file even if you are not using the manager.
You can look at prometheus/scylla_manager_servers.example.yml for an example.
# List Scylla Manager end points
- targets:
    - 172.17.0.7:56090
Note that you do not need to add labels to the Scylla Manager targets.
By default, start-all.sh will start with dashboards for the latest two Scylla versions and the latest Scylla Manager version.
You can specify a specific Scylla version with the -v flag and a Scylla Manager version with the -M flag. For example:
./start-all.sh -v 3.0,master -M 1.3 -d /prometheus-data
will load the dashboards for Scylla versions 3.0 and master, and the dashboard for Scylla Manager version 1.3.
The Prometheus server runs inside a Docker container. If it needs to reach a target on the localhost (either Scylla or Scylla Manager), it needs to use the host network and not the Docker network. To do that, run ./start-all.sh with the -l flag. For example:
./start-all.sh -l -d /prometheus-data