Scylla Manager Setup

Scylla Manager is a centralized tool for Scylla cluster administration and automation of recurrent tasks. The following document walks you through the Scylla Manager setup phase, assuming you have already completed the Manager installation.

Before you begin

Verify that you have already read and followed the installation procedure and that your system meets the requirements described here.
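
For example, on an RPM-based installation (an assumption; adjust to your distribution's package manager), you can quickly confirm that the Scylla Manager package and its service unit are present:

# Run on Scylla Manager - package/unit names assume a default RPM-based install
rpm -q scylla-manager
systemctl list-unit-files | grep scylla-manager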

Setup script

Run the scyllamgr_setup script to configure the service

sudo scyllamgr_setup

Setting up SSH connectivity

Scylla Manager uses SSH to securely access the Scylla nodes' API. This guide shows you how to set up an SSH connection from the scylla-manager server to the Scylla nodes.

Use your own SSH key

If you don’t have your own SSH key, follow the Generate a new SSH key pair section below. If you do have an SSH key with access to the cluster, you have two alternatives:

  • Copy your SSH key to /var/lib/scylla-manager/scylla_manager.pem, and make sure it’s owned by the user scylla-manager and the permissions are set to 0400 (see the example commands after this list).
  • Use your SSH key as is, and
    • Update the “ssh” section of /etc/scylla-manager/scylla-manager.yaml, setting identity_file to your SSH private key and user to a user with access to the Scylla nodes.
    • Make sure the user scylla-manager is the file (private key) owner and the file permissions are 0400.
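
A minimal sketch of the first alternative, assuming your existing private key is at ~/.ssh/id_rsa (adjust the path to your own key):

# Run on Scylla Manager - ~/.ssh/id_rsa is a placeholder for your existing private key
sudo cp ~/.ssh/id_rsa /var/lib/scylla-manager/scylla_manager.pem
sudo chown scylla-manager /var/lib/scylla-manager/scylla_manager.pem
sudo chmod 0400 /var/lib/scylla-manager/scylla_manager.pem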

Example:

$ sudo ls -l /var/lib/scylla-manager/scylla_manager.pem
-r--------. 1 scylla-manager root 1675 Jan 21 13:07 /var/lib/scylla-manager/scylla_manager.pem
$ sudo tail -5 /etc/scylla-manager/scylla-manager.yaml
# SSH is used to access scylla nodes. User private key must be PEM encoded and
# stored in the identity_file.
ssh:
  user: scylla-manager
  identity_file: /var/lib/scylla-manager/scylla_manager.pem

Generate a new SSH key pair

Follow the steps below to generate a new key pair. The private key is stored at /var/lib/scylla-manager/scylla_manager.pem and the matching public key at /var/lib/scylla-manager/scylla_manager.pem.pub:

# Run on Scylla Manager
sudo ssh-keygen -t rsa -b 2048 -N "" -f /var/lib/scylla-manager/scylla_manager.pem
sudo chmod 0400 /var/lib/scylla-manager/scylla_manager.pem
sudo chown scylla-manager /var/lib/scylla-manager/scylla_manager.pem
sudo ls -l /var/lib/scylla-manager/scylla_manager.pem
-r--------. 1 scylla-manager root 1675 Jan 21 13:07 /var/lib/scylla-manager/scylla_manager.pem

Run the service

Run Scylla Manager service (if not already running)

sudo systemctl start scylla-manager
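
To confirm the service started successfully, and optionally have it start at boot, you can use the standard systemd commands (the setup script may already have enabled the service):

sudo systemctl status scylla-manager
sudo systemctl enable scylla-manager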

Run sctool

Verify the Scylla Manager service is running

> sctool version
Client version: 1.0.0_0.20180119.49f4a33
Server version: 1.0.0_0.20180119.49f4a33

Setting SSH connectivity to Scylla nodes

This section is a continuation of the Generate a new SSH key pair section and explains how to distribute the public key generated earlier (/var/lib/scylla-manager/scylla_manager.pem.pub) to the cluster. If you used your own key and it is already distributed on the Scylla cluster, you can skip this section.

Install the key on each Scylla node

If you do not have SSH access from the scylla-manager server to the Scylla nodes, you may need to copy /var/lib/scylla-manager/scylla_manager.pem.pub to a machine from which you do have access to all Scylla nodes. We will call that machine “Anchor”.

Copy scylla_manager.pem.pub to a temporary directory and then copy it to the anchor machine:

# Run on Scylla Manager
sudo cp /var/lib/scylla-manager/scylla_manager.pem.pub /tmp
# Run on Anchor - for example, pull scylla_manager.pem.pub from the Scylla Manager host
sudo scp -i key.pem centos@SCYLLA-MANAGER-HOST:/tmp/scylla_manager.pem.pub .

From the anchor machine, given that you have SSH access to a sudo-enabled user USER on each Scylla node HOST, create an SSH_CMD variable as follows:

# Run on Anchor
SSH_CMD='ssh USER@HOST'
# For AWS it may look like
# SSH_CMD='ssh -i "key.pem" centos@ec2-18-217-79-221.us-east-2.compute.amazonaws.com'

Create a remote scylla-manager user and install the public key on each Scylla node:

# Run on Anchor
${SSH_CMD} 'sudo useradd -m scylla-manager'
cat scylla_manager.pem.pub | ${SSH_CMD} 'sudo -u scylla-manager sh -c "cd && mkdir -p ~/.ssh && cat > ~/.ssh/authorized_keys"'
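
To cover every node, repeat the two commands above for each host. A minimal sketch, assuming the AWS-style access shown earlier and placeholder node addresses NODE1, NODE2, NODE3:

# Run on Anchor - NODE1, NODE2, NODE3 are placeholders for your Scylla node addresses
for HOST in NODE1 NODE2 NODE3; do
  SSH_CMD="ssh -i key.pem centos@${HOST}"
  ${SSH_CMD} 'sudo useradd -m scylla-manager'
  cat scylla_manager.pem.pub | ${SSH_CMD} 'sudo -u scylla-manager sh -c "cd && mkdir -p ~/.ssh && cat > ~/.ssh/authorized_keys"'
done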

Test connectivity:

# Run on Scylla Manager
sudo ssh -i /var/lib/scylla-manager/scylla_manager.pem scylla-manager@HOST whoami
scylla-manager

Adding an existing Scylla cluster

To optimize and parallelize repair operations, Scylla Manager needs to know the number of shards (cores) used by the Scylla nodes. To find out the number of shards, run the following on a Scylla node:

scyllatop -l | grep gauge-utilization | wc -l

Add a new cluster to Scylla Manager

sctool cluster add --hosts <scylla-nodes> -n <cluster-name> --shard-count <shard-count>

Where scylla-nodes is a subset of the Scylla node IPs, cluster-name is a unique name you will use to manage the cluster, and shard-count is the value you extracted from a Scylla node above.
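
For example, assuming a node reachable at 192.168.100.11, 16 shards per node, and prod-cluster as the cluster name (all placeholder values):

sctool cluster add --hosts 192.168.100.11 -n prod-cluster --shard-count 16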

Validate the Manager is connected to the cluster

After adding a new cluster, you can see which repair units were created:

sctool repair unit list -c <cluster-name>