Gossip in Scylla

Topic: Internals

Audience: DevOps professionals, architects

Scylla, like Apache Cassandra, uses a type of protocol called “gossip” to exchange metadata about the identities of nodes in a cluster and whether nodes are up or down. Of course, since there is no single point of failure there can be no single registry of node state, so nodes must share information among themselves.

Gossip protocols arise only in distributed systems, so they may be new to many administrators. According to Wikipedia, the ideal gossip protocol has several qualities:

  • Gossip involves periodic, pairwise interactions between nodes.
  • Information exchanged between nodes is of bounded size.
  • The state of at least one agent changes to reflect the state of the other.
  • Reliable communication is not assumed.
  • The frequency of the interactions is low compared to typical message latencies so that the protocol costs are negligible.
  • There is some form of randomness in the peer selection.
  • Due to the replication there is an implicit redundancy of the delivered information.

Individual gossip interactions in Scylla, as in Apache Cassandra, are relatively infrequent and simple. Each node, once per second, randomly selects 1 to 3 nodes to interact with: one can be any live node, one may be a seed node, and one may be selected from the nodes currently marked as unreachable.

Seed nodes are more likely than other nodes to be picked for gossip because of the three random selections: a seed has a chance to be chosen in the first selection (any live node) and then, if no seed was picked there, again in the second, seed-specific selection.

Each node runs the gossip protocol once per second, but the gossip runs are not synchronized across the cluster.
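
To make the selection concrete, here is an illustrative Python sketch (not Scylla's code); the helper names, the node attributes, and the probability used for probing an unreachable node are assumptions:

import random
import time

def pick_gossip_targets(self_addr, live_nodes, seed_nodes, unreachable_nodes):
    # Up to three targets per round: a random live node, possibly a seed,
    # and possibly a node currently marked as unreachable.
    targets = set()

    live = [n for n in live_nodes if n != self_addr]
    if live:
        targets.add(random.choice(live))           # 1. any live node

    seeds = [n for n in seed_nodes if n != self_addr]
    if seeds and not targets & set(seeds):         # 2. a seed, if none was picked above
        targets.add(random.choice(seeds))

    down = sorted(unreachable_nodes)
    if down and random.random() < 0.5:             # 3. occasionally probe a down node
        targets.add(random.choice(down))           #    (the probability here is made up)

    return targets

def gossip_loop(node):
    # One round per second; nodes do not synchronize their timers.
    while True:
        for peer in pick_gossip_targets(node.addr, node.live, node.seeds, node.unreachable):
            node.start_round(peer)                 # the syn/ack/ack2 round described next
        time.sleep(1)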

One round = three messages

One round of gossip consists of three messages. (We’ll call the node initiating the round Node A, and the randomly selected node Node B).

  • Node A sends: gossip_digest_syn
  • Node B sends: gossip_digest_ack
  • Node A replies: gossip_digest_ack2

While the names are borrowed from TCP, gossip does not require making a new TCP connection between nodes.
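
The round can be pictured with the simplified Python sketch below. The message classes and the node methods (make_digests, handle_syn, handle_ack, handle_ack2) are illustrative placeholders, not Scylla's actual code; the digest comparison behind handle_syn is sketched further below.

from dataclasses import dataclass

@dataclass
class GossipDigest:                 # one entry per known node
    endpoint: str
    generation: int
    max_version: int

@dataclass
class GossipDigestSyn:              # A -> B: "here is what I know, and how fresh it is"
    digests: list

@dataclass
class GossipDigestAck:              # B -> A: "here is newer state I have, and what I still need"
    newer_state: dict
    request_digests: list

@dataclass
class GossipDigestAck2:             # A -> B: "here is the state you asked for"
    newer_state: dict

def run_round(node_a, node_b):
    # Node A initiates; Node B is the randomly selected peer.
    syn = GossipDigestSyn(digests=node_a.make_digests())
    ack = node_b.handle_syn(syn)    # GossipDigestAck
    ack2 = node_a.handle_ack(ack)   # GossipDigestAck2
    node_b.handle_ack2(ack2)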

What are nodes gossiping about?

Nodes exchange a small amount of information about each other. The two main data structures are heart_beat_state and application_state.

A heart_beat_state contains two integers: a generation and a version number. The generation grows each time the node is started, and the version number is an ever-increasing integer that also covers the version of the application state. An application_state holds the status of components within the node (such as load), each entry carrying its own version number. Each node maintains a map from node IP address to gossip metadata for every node in the cluster, including itself.
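
As a rough model of this bookkeeping (again illustrative, not Scylla's classes; the field names and example entries are assumptions):

from dataclasses import dataclass, field

@dataclass
class HeartBeatState:
    generation: int                 # grows each time the node is (re)started
    version: int = 0                # grows on every local state change

@dataclass
class EndpointState:
    heart_beat: HeartBeatState
    # Component status, e.g. {"LOAD": ("312.5", 17), "STATUS": ("NORMAL", 3)},
    # where each value carries its own version number.
    application_state: dict = field(default_factory=dict)

    def max_version(self):
        # Highest version across the heartbeat and all application entries.
        return max([self.heart_beat.version] +
                   [version for (_value, version) in self.application_state.values()])

# Each node keeps one such record per node in the cluster, itself included:
#   endpoint_state_map = {node_ip: EndpointState(...)}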

A round of gossip is designed to minimize the amount of data sent while resolving any conflicts between the node state held by the two gossiping nodes. In the gossip_digest_syn message, Node A sends a gossip digest: a list of all the nodes it knows about, with their generations and versions, but no actual state. Node B compares each digest entry against its own map and, in the gossip_digest_ack message, sends back any state for which its own copy is newer, along with a digest of the entries where Node A appears to be ahead. Finally, in the gossip_digest_ack2 message, Node A sends the newer state that Node B asked for.
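
Building on the sketches above, Node B's handling of a syn digest might look roughly like the simplified version below; a real implementation sends only the missing versions rather than whole endpoint states:

def handle_syn(local_map, remote_digests):
    # local_map: this node's {endpoint: EndpointState}
    # remote_digests: list of GossipDigest sent by Node A
    newer_state = {}        # entries where this node (B) is ahead of A
    request_digests = []    # entries where A appears to be ahead of B

    for d in remote_digests:
        local = local_map.get(d.endpoint)
        if local is None:
            # Never heard of this node: ask A for everything about it.
            request_digests.append(GossipDigest(d.endpoint, 0, 0))
            continue
        local_key = (local.heart_beat.generation, local.max_version())
        remote_key = (d.generation, d.max_version)
        if local_key > remote_key:
            newer_state[d.endpoint] = local          # B is fresher: send state to A
        elif local_key < remote_key:
            request_digests.append(                  # A is fresher: ask for it in ack2
                GossipDigest(d.endpoint, *local_key))

    return newer_state, request_digests              # contents of the gossip_digest_ack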

Scylla gossip implementation

Scylla gossip messages run over the Scylla messaging_service, along with all other inter-node traffic, including mutations and streaming of data. messaging_service runs on the Seastar RPC service; Seastar is the scalable software framework for multicore systems on which Scylla is built. If no TCP connection is up between a pair of nodes, messaging_service will create a new one; if a connection is already up, messaging_service will reuse it.
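
The connection-reuse behavior can be pictured as a simple per-peer cache; this is an illustrative sketch under assumed names, not the messaging_service implementation:

class MessagingService:
    def __init__(self, open_connection):
        self._open_connection = open_connection   # callable: peer address -> new TCP connection
        self._connections = {}                    # peer address -> established connection

    def get_connection(self, peer):
        # Reuse the connection to this peer if one is already up...
        conn = self._connections.get(peer)
        if conn is None:
            # ...otherwise create a new one and keep it for later traffic
            # (gossip, mutations, and streaming all travel over messaging_service).
            conn = self._open_connection(peer)
            self._connections[peer] = conn
        return conn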

Gossip on multicore

Each Scylla node consists of several independent shards, one per core, which operate on a shared-nothing basis and communicate without locking. The gossip component runs on CPU 0 (shard 0) only, so gossip messages that arrive on other shards are forwarded to it. The node state data gathered by gossip is then replicated to the other shards, so each shard can read it locally.
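
Conceptually, the flow looks something like the sketch below; the shard objects and the submit, gossiper.apply, and node_state.update calls are illustrative placeholders, not Seastar or Scylla APIs:

GOSSIP_SHARD = 0   # the gossip component lives on shard (CPU) 0 only

def on_incoming_gossip(shards, message):
    # A gossip message may arrive on any shard's connections;
    # hand it over to shard 0, which owns the gossip state.
    shards[GOSSIP_SHARD].submit(lambda: apply_on_shard0(shards, message))

def apply_on_shard0(shards, message):
    updates = shards[GOSSIP_SHARD].gossiper.apply(message)   # update the master copy
    # Replicate the resulting node-state changes to every other shard,
    # so each shard can answer questions like "is node X up?" locally.
    for shard in shards:
        if shard is not shards[GOSSIP_SHARD]:
            shard.submit(lambda shard=shard: shard.node_state.update(updates))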

The gossip protocol provides important advantages, especially for large clusters. Compared to “flooding” information across nodes, it can synchronize data faster, and it lets the cluster react quickly when a node goes down or is returned to service. Nodes only mark other nodes as down if an actual failure is detected, while gossip quickly shares the good news of a node coming back up.

References

  • Cassandra Wiki: ArchitectureGossip
  • Apple Inc.: Cassandra Internals — Understanding Gossip
  • Using Gossip Protocols For Failure Detection, Monitoring, Messaging And Other Good Things, by Todd Hoff
  • Gossip protocol on Wikipedia
