
SSTables 3.0 Statistics File Format¶

New in version 3.0.

This file stores metadata for an SSTable. There are four types of metadata:

  1. Validation metadata - used to validate the correctness of the SSTable.

  2. Compaction metadata - used for compaction.

  3. Statistics - information about the SSTable that is loaded into memory and used to speed up reads and compactions.

  4. Serialization header - keeps information about the SSTable schema.

General structure¶

The file is composed of two parts. The first part is a table of contents which allows quick access to a selected metadata entry. The second part is a sequence of metadata entries stored one after the other. Let's define the array template that will be used throughout this document.

struct array<LengthType, ElementType> {
    LengthType number_of_elements;
    ElementType elements[number_of_elements];
}

Table of contents

using toc = array<be32<int32_t>, toc_entry>;

struct toc_entry {
    // Type of metadata
    // | Type                 | Integer representation |
    // |----------------------|------------------------|
    // | Validation metadata  | 0                      |
    // | Compaction metadata  | 1                      |
    // | Statistics           | 2                      |
    // | Serialization header | 3                      |
    be32<int32_t> type;
    // Offset, in the file, at which this metadata entry starts
    be32<int32_t> offset;
}

The toc array is sorted by the type field of its members.
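
As a minimal illustration of the array template and the table of contents, here is a hedged Python sketch that reads the toc from the beginning of a Statistics file (the helper name and the file name in the usage comment are hypothetical):

import struct

def read_toc(f):
    """Read the table of contents: array<be32<int32_t>, toc_entry>."""
    (count,) = struct.unpack('>i', f.read(4))       # number_of_elements
    toc = {}
    for _ in range(count):
        # Each toc_entry holds two big-endian int32s: metadata type and file offset.
        metadata_type, offset = struct.unpack('>ii', f.read(8))
        toc[metadata_type] = offset
    return toc

# Hypothetical usage: jump straight to the Statistics entry (type 2).
# with open('md-1-big-Statistics.db', 'rb') as f:
#     f.seek(read_toc(f)[2])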

Validation metadata entry¶

struct validation_metadata {
    // Name of partitioner used to create this SSTable.
    // Represented as a string encoded using Java's modified UTF-8 encoding.
    // You can read more about this encoding in:
    // https://docs.oracle.com/javase/7/docs/api/java/io/DataInput.html#modified-utf-8
    // https://docs.oracle.com/javase/7/docs/api/java/io/DataInput.html#readUTF()
    Modified_UTF-8_String partitioner_name;
    // The probability of false positive matches in the bloom filter for this SSTable
    be64<double> bloom_filter_fp_chance;
}
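
A short Python sketch of reading this entry, assuming the 2-byte length prefix written by Java's DataOutput.writeUTF; for simplicity the bytes are decoded here as plain UTF-8, which matches modified UTF-8 for all but NUL and supplementary characters:

import struct

def read_validation_metadata(f):
    """Read validation_metadata at the offset given by the toc."""
    (name_length,) = struct.unpack('>H', f.read(2))           # unsigned 16-bit length
    partitioner_name = f.read(name_length).decode('utf-8')    # modified UTF-8 approximated
    (bloom_filter_fp_chance,) = struct.unpack('>d', f.read(8))  # be64<double>
    return partitioner_name, bloom_filter_fp_chance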

Compaction metadata entry¶

// A serialized HyperLogLogPlus that can be used to estimate the number of partition keys in the SSTable.
// If it is not present, the same estimate can be computed from the Summary file.
// Encoding is described in:
// https://github.com/addthis/stream-lib/blob/master/src/main/java/com/clearspring/analytics/stream/cardinality/HyperLogLogPlus.java
using compaction_metadata = array<be32<int32_t>, be8>;

Statistics entry¶

This entry is built from the EstimatedHistogram, StreamingHistogram and CommitLogPosition types. Let's have a look at them first.

EstimatedHistogram¶

// Each bucket represents values in the range (previous bucket offset, current bucket offset].
// The offset of the last bucket is +inf.
using estimated_histogram = array<be32<int32_t>, bucket>;

struct bucket {
    // Offset of the previous bucket.
    // In the first bucket this is the offset of the first bucket itself, because there is no previous bucket.
    // The offset of the first bucket is therefore repeated in the second bucket as well.
    be64<int64_t> prev_bucket_offset;
    // Value of this bucket
    be64<int64_t> value;
}
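
To make the bucket semantics concrete, the following Python sketch reads an estimated_histogram and pairs each value with its (lower, upper] range, treating the last bucket's upper bound as +inf (the helper name is hypothetical):

import struct

def read_estimated_histogram(f):
    """Return a list of ((lower, upper], value) tuples; upper is None for +inf."""
    (count,) = struct.unpack('>i', f.read(4))
    entries = [struct.unpack('>qq', f.read(16)) for _ in range(count)]  # (prev_offset, value)
    buckets = []
    for i, (prev_offset, value) in enumerate(entries):
        # Bucket i's own offset is stored as the prev_bucket_offset of bucket i + 1.
        upper = entries[i + 1][0] if i + 1 < count else None
        buckets.append(((prev_offset, upper), value))
    return buckets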

StreamingHistogram¶

struct streaming_histogram {
    // Maximum number of buckets this histogram can have
    be32<int32_t> bucket_number_limit;
    array<be32<int32_t>, bucket> buckets;
}

struct bucket {
    // Offset of this bucket
    be64<double> offset;
    // Bucket value
    be64<int64_t> value;
}

CommitLogPosition¶

struct commit_log_position {
    be64<int64_t> segment_id;
    be32<int32_t> position_in_segment;
}

Whole entry¶

struct statistics {
    // Histogram of uncompressed partition sizes, in bytes
    estimated_histogram partition_sizes;
    // Histogram of the number of cells per partition
    estimated_histogram column_counts;
    commit_log_position commit_log_upper_bound;
    // Typically in microseconds since the unix epoch, although this is not enforced
    be64<int64_t> min_timestamp;
    // Typically in microseconds since the unix epoch, although this is not enforced
    be64<int64_t> max_timestamp;
    // In seconds since the unix epoch
    be32<int32_t> min_local_deletion_time;
    // In seconds since the unix epoch
    be32<int32_t> max_local_deletion_time;
    be32<int32_t> min_ttl;
    be32<int32_t> max_ttl;
    // compressed_size / uncompressed_size
    be64<double> compression_rate;
    // Histogram of cell tombstones.
    // Keys are local deletion times of tombstones
    streaming_histogram tombstones;
    be32<int32_t> level;
    // Repair time, in milliseconds since the unix epoch
    be64<int64_t> repaired_at;
    // Minimum and Maximum clustering key prefixes present in the SSTable (valid since the "md" SSTable format).
    // Note that:
    // - Clustering rows always have the full clustering key.
    // - Range tombstones may have a partial clustering key prefix.
    // - Partition tombstones implicitly apply to the full, unbound clustering range.
    // Therefore, an empty (min|max)_clustering_key denotes a respective unbound range,
    // derived either from an open-ended range tombstone, or from a partition tombstone.
    clustering_bound min_clustering_key;
    clustering_bound max_clustering_key;
    be8<bool> has_legacy_counters;
    be64<int64_t> number_of_columns;
    be64<int64_t> number_of_rows;

    // Version MA of the SSTable 3.x format ends here.
    // In that version there is only one commit log position interval: [NONE = CommitLogPosition(-1, 0), commit log upper bound].

    commit_log_position commit_log_lower_bound;

    // Version MB of the SSTable 3.x format ends here.
    // In that version there is only one commit log position interval: [commit log lower bound, commit log upper bound].

    array<be32<int32_t>, commit_log_interval> commit_log_intervals;
}

using clustering_bound = array<be32<int32_t>, clustering_column>;
using clustering_column = array<be16<uint16_t>, be8>;

struct commit_log_interval {
    commit_log_position start;
    commit_log_position end;
}
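
The clustering bounds are nested arrays: an outer array of clustering column values, each of which is itself a length-prefixed byte buffer. A minimal Python sketch (the helper name is hypothetical):

import struct

def read_clustering_bound(f):
    """Read clustering_bound = array<be32<int32_t>, clustering_column>."""
    (column_count,) = struct.unpack('>i', f.read(4))
    columns = []
    for _ in range(column_count):
        # clustering_column = array<be16<uint16_t>, be8>: 16-bit length + raw bytes.
        (length,) = struct.unpack('>H', f.read(2))
        columns.append(f.read(length))
    return columns    # an empty list denotes an unbound range, as noted above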

Serialization header¶

struct serialization_header {
    vint<uint64_t> min_timestamp;
    vint<uint32_t> min_local_deletion_time;
    vint<uint32_t> min_ttl;
    // If the partition key has one column, this is the type of that column.
    // Otherwise, this is a CompositeType that contains the types of all partition key columns.
    type partition_key_type;
    array<vint<uint32_t>, type> clustering_key_types;
    columns static_columns;
    columns regular_columns;
}

using columns = array<vint<uint32_t>, column>;

struct column {
    array<vint<uint32_t>, be8> name;
    type column_type;
}

// UTF-8 string
using type = array<vint<uint32_t>, be8>;
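
Unlike the other entries, the serialization header uses variable-length integers. As an assumption, the sketch below follows the unsigned vint encoding used elsewhere in the 3.x format, where the number of consecutive leading 1-bits in the first byte gives the number of extra bytes and the value is big-endian; if the serialization header used a different vint flavour, this hypothetical helper would need to change accordingly:

def read_unsigned_vint(f):
    """Assumed unsigned vint decoding (leading 1-bits count the extra bytes)."""
    first = f.read(1)[0]
    if first < 0x80:                       # 0xxxxxxx: the value fits in one byte
        return first
    extra = 0
    while (first << extra) & 0x80:         # count the leading 1-bits
        extra += 1
    value = first & (0xFF >> extra)        # remaining bits of the first byte
    for _ in range(extra):
        value = (value << 8) | f.read(1)[0]
    return value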

Type encoding¶

A type is a byte buffer prefixed with an unsigned variable-length integer (32-bit) length, and its content is a UTF-8 string. When the string is parsed, all leading spaces, tabs and newlines are skipped. A null or empty string denotes the bytes type. The first segment of non-blank characters may contain only alphanumeric characters and the special characters '-', '+', '.', '_' and '&'; this segment is the name of the type. If the type name does not contain a '.', then "org.apache.cassandra.db.marshal." is prepended to it. The "instance" static field of the resulting class is then taken. If the first non-blank character that follows the type name is '(', the "getInstance" static method is invoked instead, and the remaining string is passed to it as a parameter. The following types exist:

| Type                    | Parametrized |
|-------------------------|--------------|
| AsciiType               | No           |
| BooleanType             | No           |
| BytesType               | No           |
| ByteType                | No           |
| ColumnToCollectionType  | Yes          |
| CompositeType           | Yes          |
| CounterColumnType       | No           |
| DateType                | No           |
| DecimalType             | No           |
| DoubleType              | No           |
| DurationType            | No           |
| DynamicCompositeType    | Yes          |
| EmptyType               | No           |
| FloatType               | No           |
| FrozenType              | Yes          |
| InetAddressType         | No           |
| Int32Type               | No           |
| IntegerType             | No           |
| LexicalUUIDType         | No           |
| ListType                | Yes          |
| LongType                | No           |
| MapType                 | Yes          |
| PartitionerDefinedOrder | Yes          |
| ReversedType            | Yes          |
| SetType                 | Yes          |
| ShortType               | No           |
| SimpleDateType          | No           |
| TimestampType           | No           |
| TimeType                | No           |
| TimeUUIDType            | No           |
| TupleType               | Yes          |
| UserType                | Yes          |
| UTF8Type                | No           |
| UUIDType                | No           |
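
As a sketch of the name-resolution rules described above, the hypothetical helper below only normalizes a type string into a fully qualified class name plus the parameter string that would be handed to getInstance; actually loading the Java class is out of scope here:

def parse_type_string(s):
    """Split a type string into (class name, parameter string or None)."""
    s = s.lstrip(' \t\n')
    if not s:
        # A null or empty string denotes the bytes type.
        return 'org.apache.cassandra.db.marshal.BytesType', None
    allowed = set('-+._&')
    i = 0
    while i < len(s) and (s[i].isalnum() or s[i] in allowed):
        i += 1                                 # consume the type name
    name, rest = s[:i], s[i:].lstrip(' \t\n')
    if '.' not in name:
        name = 'org.apache.cassandra.db.marshal.' + name
    if rest.startswith('('):
        return name, rest                      # would be passed to getInstance
    return name, None                          # would read the 'instance' static field

# Hypothetical usage:
# parse_type_string('ListType(Int32Type)')
#   -> ('org.apache.cassandra.db.marshal.ListType', '(Int32Type)')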

Copyright

© 2016, The Apache Software Foundation.

Apache®, Apache Cassandra®, Cassandra®, the Apache feather logo and the Apache Cassandra® Eye logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by The Apache Software Foundation is implied by the use of these marks.
