
Scylla Snitches

Snitches are used in the following ways:

  • To determine which datacenters and racks the Scylla nodes belong to

  • To inform Scylla about the network topology so that requests are routed efficiently

  • To allow Scylla to distribute replicas by grouping machines into datacenters and racks

Note that if you do not choose a snitch when creating a Scylla cluster, the SimpleSnitch is selected by default.

Scylla supports the following snitches:

  • SimpleSnitch

  • RackInferringSnitch

  • GossipingPropertyFileSnitch

  • Ec2Snitch

  • Ec2MultiRegionSnitch

  • GoogleCloudSnitch

Note

For production clusters, it is strongly recommended to use GossipingPropertyFileSnitch or Ec2MultiRegionSnitch. Other snitches are limited and will make it harder for you to add a Data Center (DC) later.

Warning

Do not disable access to instance metadata if you’re using Ec2Snitch or Ec2MultiRegionSnitch. With access to metadata disabled, the information about datacenter names and racks may be missing or incorrect, or the instance may fail to boot.
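
Whichever snitch you choose, it is selected with the endpoint_snitch setting in the scylla.yaml file (located in the /etc/scylla/ directory) on every node in the cluster, as shown in the GoogleCloudSnitch example later on this page. A minimal sketch, using the recommended GossipingPropertyFileSnitch:

endpoint_snitch: GossipingPropertyFileSnitch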

SimpleSnitch

Use the SimpleSnitch for single-cluster deployments where all the nodes are in the same datacenter. The SimpleSnitch binds all the nodes to the same rack and datacenter and is recommended only for single-datacenter deployments.

RackInferringSnitch

RackInferringSnitch binds nodes to DCs and racks according to their broadcast IP addresses.

For Example:

If a node has the broadcast IP 192.168.100.200, it belongs to DC ‘168’ and rack ‘100’.
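
In other words, the second octet of the node’s broadcast address becomes the DC name and the third octet becomes the rack name:

192 . 168 . 100 . 200
      |     |
      |     +---- third octet  -> rack ‘100’
      +---------- second octet -> DC ‘168’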

GossipingPropertyFileSnitch

Use the GossipingPropertyFileSnitch when working with multi-datacenter deployments where the nodes of a cluster are spread across several datacenters. It is the recommended snitch for production installations. This snitch allows you to explicitly define which DC and rack a specific node belongs to. It reads its configuration from the cassandra-rackdc.properties file, which is located in the /etc/scylla/ directory.

For Example:

prefer_local=true
dc=my_data_center
rack=my_rack

Setting prefer_local to true instructs Scylla to use an internal IP address for interactions with nodes in the same DC.

An example use case is when your host uses different addresses for LAN and WAN sessions. You want your cluster to be accessible by clients outside the Scylla nodes’ LAN, while still allowing Scylla nodes to communicate over the internal LAN to keep latency low. In AWS, this is similar to a VM’s “Public” and “Private” addresses. To set an internal and an external address, set the LAN address as the listen_address and the WAN address as the broadcast_address.

If you set prefer_local=true, nodes in the same DC use their LAN addresses to communicate with each other, and their WAN addresses to access nodes in different DCs and to communicate with clients.
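
For illustration, here is a minimal scylla.yaml sketch of this split, assuming a hypothetical private (LAN) address of 10.0.0.5 and a hypothetical public (WAN) address of 203.0.113.5:

listen_address: 10.0.0.5          # LAN address, used by nodes in the same DC when prefer_local=true
broadcast_address: 203.0.113.5    # WAN address, used by nodes in other DCs and by clients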

Ec2Snitch

Use the Ec2Snitch when working on EC2 with single-cluster deployments where all nodes are located in the same region. This basic snitch reads its configuration from the EC2 instance metadata service. On EC2, the region name is treated as the datacenter name, and availability zones are treated as racks within a datacenter. If the setup includes a single datacenter, there is no need to specify any parameters. Because private IPs are used, this snitch does not work well across multiple regions. Note also that with this snitch a DC is a region, so if the region is down, the entire cluster is down.

If you are working with multiple datacenters, specify the DC and set the parameter dc_suffix=<DCNAME> in the cassandra-rackdc.properties file, which is located in the /etc/scylla/ directory.

For Example, suppose you created a 5-node cluster and added the following configuration settings to each node’s /etc/scylla/cassandra-rackdc.properties file:

Node number   Parameter to add to the specific node’s /etc/scylla/cassandra-rackdc.properties
1             dc_suffix=_dc1-europe
2             dc_suffix=_dc1-europe
3             dc_suffix=_dc2-asia
4             dc_suffix=_dc2-asia
5             dc_suffix=_dc3-australia

This action adds a suffix to the name of each of the datacenters for the region.

Running the nodetool status command shows all three datacenters:

Datacenter: us-east_dc1-europe
==============================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens       Owns    Host ID                               Rack
UN  172.20.0.4  111.23 KB  256          ?       eaabc5db-61ff-419b-b1a7-f70af23edb1b  Rack1
UN  172.20.0.5  127.09 KB  256          ?       bace1b4e-67c6-4bdb-8eba-398162b7b56e  Rack1
Datacenter: us-east_dc2-asia
============================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens       Owns    Host ID                               Rack
UN  172.20.0.6  110.59 KB  256          ?       bda5fb11-9369-48fb-91be-82c8d821f758  Rack1
UN  172.20.0.3  111.26 KB  256          ?       b9ea3516-5e1e-4ffb-abff-c6a6701cb41b  Rack1
Datacenter: us-east_dc3-australia
=================================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens       Owns    Host ID                               Rack
UN  172.20.0.7  111.23 KB  256          ?       eaabc5db-61ff-419b-b1a7-f70af23edb1b  Rack1

Note

The datacenter naming convention in this example is based on location. You can use other conventions, such as DC1 and DC2; 100 and 200; or analytics, search, and Scylla. Providing a separator such as a dash keeps the DC name readable, as the dc_suffix property appends the suffix directly to the DC name.

Note

Ec2Snitch and Ec2MultiRegionSnitch will define DC/RACK differently for AWS Availability Zones (AZs) that end with 1x compared to other AZs:

  • For the former class of AZs, e.g. us-east-1d, the Snitch will set DC='us-east', RACK='1d'

  • For the latter class of AZs, e.g. us-east-4c, the Snitch will set DC='us-east-4', RACK='4c'

Ec2MultiRegionSnitch

Use the Ec2MultiRegionSnitch when working on EC2 with multi-region deployments where the nodes of a cluster are spread across several regions. This snitch works like the Ec2Snitch, but in addition, it sets the node’s broadcast_address and broadcast_rpc_address to the node’s public IP address. This setting allows nodes in other regions to communicate with the node regardless of what is configured for the broadcast_address and broadcast_rpc_address parameters in the node’s scylla.yaml configuration file.

Ec2MultiRegionSnitch also unconditionally imposes the “prefer local” policy on a node (similar to GossipingPropertyFileSnitch when prefer_local is set to true).

In EC2, the region name is treated as the datacenter name and availability zones are treated as racks within a datacenter.

To change the DC and rack names, do the following:

Edit the cassandra-rackdc.properties file, which can be found under /etc/scylla/, with the preferred datacenter name. The dc_suffix property defines a suffix that is added to the datacenter name, as described below.

For Example:

Node in region us-west, rack 1 (us-west-1), with dc_suffix=_scylla_node_west

Node in region us-east, rack 2 (us-east-2), with dc_suffix=_scylla_node_east

The resulting datacenter names are:

us-west-1_scylla_node_west
us-east-2_scylla_node_east

GoogleCloudSnitch

Use the GoogleCloudSnitch for deploying Scylla on the Google Compute Engine (GCE) platform across one or more regions. The region is treated as a datacenter, and the availability zones are treated as racks within the datacenter. All communication occurs over private IP addresses within the same logical network.

To use the GoogleCloudSnitch, add the snitch name to the scylla.yaml file, which is located in the /etc/scylla/ directory, on all nodes in the cluster.

You can add a suffix to the datacenter name as an additional identifier. The suffix is appended to the datacenter name without adding any spaces. To add this suffix, edit the cassandra-rackdc.properties file, which can be found under /etc/scylla/, and set dc_suffix to an appropriate text string. It may help to add an underscore or dash in front. Keep in mind that this property file is used by all Scylla snitches; when using GoogleCloudSnitch, all other properties in it are ignored.

Example

You have two datacenters running on GCE. One is for the office in Miami and is in region us-east1, zone us-east1-b. The other office is in Portland and is in region us-west1, zone us-west1-b.

It’s important to note that:

  • DC1 is us-east1 with rack name b

  • DC2 is us-west1 with rack b

Racks are important for distributing replicas, but not for datacenter naming as this Snitch can work across multiple regions without additional configuration.

After creating the instances on GCE, edit the scylla.yaml file to select the GoogleCloudSnitch.

endpoint_snitch: GoogleCloudSnitch

As you want to set the datacenter suffix for the nodes in each datacenter, you open each node’s cassandra-rackdc.properties file, which can be found under /etc/scylla/. You set the following parameter for Miami:

# node 1 - 192.0.2.2 (you use the same properties for node #2 (192.0.2.3) and #3 (192.0.2.4))

dc_suffix=_scylla_node_Miami

and for Portland:

# node 4 192.0.2.5

dc_suffix=_scylla_node_Portland

Start the cluster, one node at a time, and then run nodetool status to check connectivity.

nodetool status

Datacenter: us-east1_scylla_node_Miami
======================================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens       Owns    Host ID                               Rack
UN  192.0.2.2     1.27 MB    256          ?       5b1d864f-a026-4076-bb19-3e7dd693abf1  b
UN  192.0.2.3     954.89 KB  256          ?       783a815e-6e9d-4ab5-a092-bbf15fd76a9f  b
UN  192.0.2.4     1.02 MB    256          ?       1edf5b52-6ae3-41c1-9ec1-c431d34a1aa1  b

Datacenter: us-west1_scylla_node_Portland
=========================================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens       Owns    Host ID                               Rack
UN  192.0.2.5     670.16 KB  256          ?       f0a44a49-0035-4146-8fdc-30e66c037f95  b

Related Topics

Getting Started
