
Handling Node Failures

ScyllaDB relies on the Raft consensus algorithm, which requires at least a quorum of nodes in a cluster to be available. If one or more nodes are down, but the quorum is live, reads, writes, schema updates, and topology changes proceed unaffected. When the node that was down is up again, it first contacts the cluster to fetch the latest schema and then starts serving queries.
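The quorum rule is simple arithmetic: a cluster of n voting nodes keeps quorum while at least floor(n/2) + 1 of them are alive. A minimal sketch (illustrative Python, not ScyllaDB code) reproduces the failure tolerances used in the examples on this page:

```python
# Illustrative sketch of Raft quorum arithmetic (not ScyllaDB code).

def quorum(n: int) -> int:
    """Minimum number of live nodes needed to keep quorum."""
    return n // 2 + 1

def tolerable_failures(n: int) -> int:
    """Maximum number of nodes that can be down with quorum intact."""
    return n - quorum(n)

# Cluster sizes from the examples on this page: 3, 6, and 9 nodes.
for n in (3, 6, 9):
    print(f"{n}-node cluster: quorum = {quorum(n)}, "
          f"tolerates {tolerable_failures(n)} node(s) down")
```

So a 3-node cluster tolerates 1 node down, a 6-node cluster 2, and a 9-node cluster 4, matching the tables below.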

The following examples show the recovery actions to take when one or more nodes or datacenters (DCs) are down, depending on the number of nodes and DCs in your cluster.

Examples

Cluster A: 1 datacenter, 3 nodes

| Failure | Consequence | Action to take |
| --- | --- | --- |
| 1 node | Schema and topology updates are possible and safe. | Try restarting the node. If the node is dead, replace it with a new node. |
| 2 nodes | Data is available for reads and writes; schema and topology changes are impossible. | Restart at least 1 of the 2 nodes that are down to regain quorum. If you can't recover at least 1 of the 2 nodes, consult the manual recovery section. |

Cluster B: 2 datacenters, 6 nodes (3 nodes per DC)

| Failure | Consequence | Action to take |
| --- | --- | --- |
| 1-2 nodes | Schema and topology updates are possible and safe. | Try restarting the node(s). If a node is dead, replace it with a new node. |
| 3 nodes | Data is available for reads and writes; schema and topology changes are impossible. | Restart at least 1 of the 3 nodes that are down to regain quorum. If you can't recover at least 1 of the 3 failed nodes, consult the manual recovery section. |
| 1 DC | Data is available for reads and writes; schema and topology changes are impossible. | When the DC comes back online, restart its nodes. If the DC fails to come back online and its nodes are lost, consult the manual recovery section. |

Cluster C: 3 datacenters, 9 nodes (3 nodes per DC)

| Failure | Consequence | Action to take |
| --- | --- | --- |
| 1-4 nodes | Schema and topology updates are possible and safe. | Try restarting the nodes. If the nodes are dead, replace them with new nodes. |
| 1 DC | Schema and topology updates are possible and safe. | When the DC comes back online, try restarting its nodes. If the nodes are dead, add 3 new nodes in a new region. |
| 2 DCs | Data is available for reads and writes; schema and topology changes are impossible. | When the DCs come back online, restart the nodes. If at least one DC fails to come back online and the nodes are lost, consult the manual recovery section. |

Manual Recovery Procedure

Note

This recovery procedure assumes that consistent topology changes are enabled for your cluster, which is mandatory in versions 2025.2 and later. If you did not enable consistent topology changes during the upgrade to 2025.2, follow the previous recovery procedure instead.

See Verifying that consistent topology changes are enabled.

You can follow the manual recovery procedure when a majority of nodes (for example, 2 out of 3) have failed and are irrecoverable.

During the manual recovery procedure, you will restart the live nodes in a special recovery mode, which causes the cluster to initialize the Raft algorithm from scratch. This time, the faulty nodes will not participate in the algorithm. Then, you will replace all faulty nodes using the standard node replacement procedure. Finally, you will leave recovery mode and remove the obsolete internal Raft data.

Prerequisites

  • Before proceeding, make sure that the irrecoverable nodes are truly dead, and not, for example, temporarily partitioned away due to a network failure. If the "dead" nodes can come back to life, they might communicate with the rest of the cluster, interfere with the recovery procedure, and cause unpredictable problems.

    If you have no means of ensuring that these irrecoverable nodes won't come back to life and communicate with the rest of the cluster, set up firewall rules or otherwise isolate the live nodes so they reject any communication attempts from the dead nodes.

  • Ensure all live nodes are in the normal state using nodetool status. If there is a node that is joining or leaving, it cannot be recovered. You must permanently stop it. After performing the recovery procedure, use nodetool status on any other node. If the stopped node appears in the output, it means that other nodes still consider it a member of the cluster, and you should remove it with the node removal procedure.

  • Check whether the cluster lost data. If the number of dead nodes is equal to or larger than the replication factor (RF) of a keyspace, some of that keyspace's data is lost and must be retrieved from backup. After completing the manual recovery procedure, restore the data from backup.

  • Decide whether to shut down your service for the manual recovery procedure. ScyllaDB serves data queries during the procedure; however, you may not want to rely on it if:

    • you lost some data, or

    • restarting a single node could lead to unavailability of data queries (the procedure involves a rolling restart). For example, with the standard RF=3, CL=QUORUM setup and two datacenters, if all nodes in one datacenter are dead and one node in the other datacenter is also dead, restarting another node in the surviving datacenter will cause temporary data query unavailability (until the node finishes restarting).
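The last bullet can be checked with the same arithmetic at the replica level: a CL=QUORUM read or write of a partition must reach floor(RF/2) + 1 of that partition's replicas. A minimal sketch (illustrative Python, not ScyllaDB code):

```python
# Illustrative sketch (not ScyllaDB code): does a CL=QUORUM query
# on a single partition have enough live replicas to succeed?

def replica_quorum(rf: int) -> int:
    """Number of replicas a QUORUM read/write must reach."""
    return rf // 2 + 1

def quorum_query_possible(rf: int, live_replicas: int) -> bool:
    return live_replicas >= replica_quorum(rf)

# With RF=3, a partition needs 2 live replicas for CL=QUORUM:
print(quorum_query_possible(rf=3, live_replicas=2))  # True
print(quorum_query_possible(rf=3, live_replicas=1))  # False
```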

Procedure

  1. Perform a rolling restart of your live nodes.

  2. Find the group 0 ID by performing the following query on any live node, using, for example, cqlsh:

    cqlsh> SELECT value FROM system.scylla_local WHERE key = 'raft_group0_id';
    

    The group 0 ID is needed in the following steps.

  3. Find commit_idx of all live nodes by performing the following query on every live node:

    cqlsh> SELECT commit_idx FROM system.raft WHERE group_id = <group 0 ID>;
    

    Choose a node with the largest commit_idx. If there are multiple such nodes, choose any of them. The chosen node will be the recovery leader.
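Choosing the recovery leader amounts to taking a maximum over the per-node commit_idx values collected above. For example (hypothetical host names and values, not ScyllaDB code):

```python
# Illustrative sketch: pick the recovery leader as the live node
# with the largest commit_idx. Host names and values are made up.

commit_idx_by_node = {
    "node-a": 1041,
    "node-b": 1047,
    "node-c": 1047,
}

# Ties may be broken arbitrarily; max() keeps the first maximum seen.
recovery_leader = max(commit_idx_by_node, key=commit_idx_by_node.get)
print(recovery_leader)  # node-b
```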

  4. Perform the following queries on every live node:

    cqlsh> TRUNCATE TABLE system.discovery;
    cqlsh> DELETE value FROM system.scylla_local WHERE key = 'raft_group0_id';
    
  5. Perform a rolling restart of all live nodes, but:

    • restart the recovery leader first,

    • before restarting each node, add the recovery_leader property to its scylla.yaml file and set it to the host ID of the recovery leader,

    • after restarting each node, make sure it participated in Raft recovery; look for one of the following messages in its logs:

    storage_service - Performing Raft-based recovery procedure with recovery leader <host ID of the recovery leader>/<IP address of the recovery leader>
    storage_service - Raft-based recovery procedure - found group 0 with ID <ID of the new group 0; different from the one used in other steps>
    

    After completing this step, Raft should be fully functional.
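For example, the temporary scylla.yaml addition on each node being restarted might look like this (the host ID below is a made-up placeholder; use the actual host ID of your recovery leader):

```yaml
# Temporary addition to /etc/scylla/scylla.yaml for the recovery
# procedure; remove it again in step 7.
recovery_leader: 1f9f2c44-1d2f-4b08-8f1c-0a9e3b6d5e71
```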

  6. Replace all dead nodes in the cluster using the node replacement procedure.

    Note

    Removing some of the dead nodes with the node removal procedure is also possible, but it may require decreasing RF of your keyspaces. With tablets enabled, nodetool removenode is rejected if there are not enough nodes to satisfy RF of any tablet keyspace in the node’s datacenter.

  7. Remove the recovery_leader property from the scylla.yaml file on all nodes. Send the SIGHUP signal to all ScyllaDB processes to ensure the change is applied.

  8. Perform the following queries on every live node:

    cqlsh> DELETE FROM system.raft WHERE group_id = <group 0 ID>;
    cqlsh> DELETE FROM system.raft_snapshots WHERE group_id = <group 0 ID>;
    cqlsh> DELETE FROM system.raft_snapshot_config WHERE group_id = <group 0 ID>;
    

© 2025, ScyllaDB. All rights reserved. | Terms of Service | Privacy Policy | ScyllaDB, and ScyllaDB Cloud, are registered trademarks of ScyllaDB, Inc.
Last updated on 05 Dec 2025.