cqlsh is a command line shell for interacting with Cassandra through CQL (the Cassandra Query Language). It is shipped with every Cassandra package, and can be found in the bin/ directory alongside the Cassandra executable. cqlsh utilizes the Python native protocol driver, and connects to the single node specified on the command line.
cqlsh is compatible with Python 2.7.
In general, a given version of cqlsh is only guaranteed to work with the version of Cassandra that it was released with. In some cases, cqlsh may work with older or newer versions of Cassandra, but this is not officially supported.
cqlsh ships with all essential dependencies. However, there are some optional dependencies that can be installed to improve the capabilities of cqlsh.
By default, cqlsh displays all timestamps in UTC. To display timestamps in another timezone, the pytz library must be installed. See the timezone option in cqlshrc for specifying a timezone to use.
The cqlshrc file holds configuration options for cqlsh. By default this is in the user's home directory at ~/.cassandra/cqlshrc, but a custom location can be specified with the --cqlshrc option.
Example config values and documentation can be found in the
conf/cqlshrc.sample file of a tarball installation. You
can also view the latest version of cqlshrc online.
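As an illustrative sketch, a minimal cqlshrc could look like the following. Section and option names follow conf/cqlshrc.sample; the hostname, port, and timezone values are placeholders:

```ini
; Hypothetical ~/.cassandra/cqlshrc -- values are placeholders
[connection]
hostname = 127.0.0.1
port = 9042

[ui]
; displaying timestamps in a non-UTC timezone requires the pytz library
timezone = Etc/UTC
color = on
```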
cqlsh [options] [host [port]]
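For illustration, a few invocations of the syntax above; the host, credentials, and statement are placeholders:

```
cqlsh                                    # connect to localhost on the default port
cqlsh 192.168.1.15 9042                  # connect to a specific host and port
cqlsh -u jane -p secret --ssl 192.168.1.15       # authenticate over SSL
cqlsh -e "SELECT release_version FROM system.local;"   # run one statement, then exit
```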
- --color: Force color output
- --no-color: Disable color output
- --browser: Specify the browser to use for displaying cqlsh help. This can be one of the supported browser names (e.g. firefox) or a browser path followed by %s (e.g. /usr/bin/google-chrome-stable %s).
- --ssl: Use SSL when connecting to Cassandra
- -u --username: Username to authenticate against Cassandra with
- -p --password: Password to authenticate against Cassandra with; should be used in conjunction with --username
- -k --keyspace: Keyspace to authenticate to; should be used in conjunction with --username
- -f --file: Execute commands from the given file, then exit
- --debug: Print additional debugging information
- --encoding: Specify a non-default encoding for output (defaults to UTF-8)
- --cqlshrc: Specify a non-default location for the cqlshrc file
- -e --execute: Execute the given statement, then exit
- --connect-timeout: Specify the connection timeout in seconds (defaults to 2s)
- --request-timeout: Specify the request timeout in seconds (defaults to 10s)
- -t --tty: Force tty mode (command prompt)
In addition to supporting regular CQL statements, cqlsh also supports a number of special commands that are not part of CQL. These are detailed below.
CONSISTENCY <consistency level>
Sets the consistency level for operations to follow. Valid arguments include:
- ANY
- ONE
- TWO
- THREE
- QUORUM
- ALL
- LOCAL_QUORUM
- LOCAL_ONE
- SERIAL
- LOCAL_SERIAL
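For example, to require a quorum of replicas for subsequent operations (the confirmation text shown is approximate and may vary by version):

```
cqlsh> CONSISTENCY QUORUM
Consistency level set to QUORUM.
cqlsh> CONSISTENCY
Current consistency level is QUORUM.
```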
SERIAL CONSISTENCY <consistency level>
Sets the serial consistency level for operations to follow. Valid arguments include:
- SERIAL
- LOCAL_SERIAL
The serial consistency level is only used by conditional updates (INSERT, UPDATE and DELETE with an IF condition). For those, the serial consistency level defines the consistency level of the serial phase (or "paxos" phase) while the normal consistency level defines the consistency for the "learn" phase, i.e. what type of reads will be guaranteed to see the update right away. For example, if a conditional write has a consistency level of QUORUM (and is successful), then a QUORUM read is guaranteed to see that write. But if the regular consistency level of that write is ANY, then only a read with a consistency level of SERIAL is guaranteed to see it (even a read with consistency ALL is not guaranteed to be enough).
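As a sketch, the session below sets both levels before a conditional update; the keyspace, table, and row are hypothetical. The IF condition makes this a lightweight transaction, so its paxos phase runs at LOCAL_SERIAL while the learn phase runs at QUORUM:

```
cqlsh> CONSISTENCY QUORUM
cqlsh> SERIAL CONSISTENCY LOCAL_SERIAL
cqlsh> UPDATE myks.users SET email = 'jane@example.com'
   ... WHERE id = 42 IF email = null;
```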
Prints the cqlsh, Cassandra, CQL, and native protocol versions in use. Example:
cqlsh> SHOW VERSION
[cqlsh 5.0.1 | Cassandra 3.8 | CQL spec 3.4.2 | Native protocol v4]
Prints the IP address and port of the Cassandra node that cqlsh is connected to in addition to the cluster name.
cqlsh> SHOW HOST
Connected to Prod_Cluster at 192.0.0.1:9042.
Pretty prints a specific tracing session.
SHOW SESSION <session id>
cqlsh> SHOW SESSION 95ac6470-327e-11e6-beca-dfb660d92ad8
Tracing session: 95ac6470-327e-11e6-beca-dfb660d92ad8

 activity                                                  | timestamp                  | source    | source_elapsed | client
-----------------------------------------------------------+----------------------------+-----------+----------------+-----------
                                        Execute CQL3 query | 2016-06-14 17:23:13.979000 | 127.0.0.1 |              0 | 127.0.0.1
 Parsing SELECT * FROM system.local; [SharedPool-Worker-1] | 2016-06-14 17:23:13.982000 | 127.0.0.1 |           3843 | 127.0.0.1
...
Reads the contents of a file and executes each line as a CQL statement or special cqlsh command.
SOURCE <string filename>
cqlsh> SOURCE '/home/thobbs/commands.cql'
Begins capturing command output and appending it to a specified file. Output will not be shown at the console while it is captured.
CAPTURE '<file>';
CAPTURE OFF;
CAPTURE;
That is, the path to the file to be appended to must be given inside a string literal. The path is interpreted relative to the current working directory. The tilde shorthand notation ('~/mydir') is supported for referring to the home directory.
Only query result output is captured. Errors and output from cqlsh-only commands will still be shown in the cqlsh session.
To stop capturing output and show it in the cqlsh session again, use CAPTURE OFF. To inspect the current capture configuration, use CAPTURE with no arguments.
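A sketch of a capture session; the path is a placeholder, and cqlsh's confirmation messages are omitted. The result rows of the SELECT are appended to the file rather than shown at the console:

```
cqlsh> CAPTURE '~/queries.out'
cqlsh> SELECT * FROM system.local;
cqlsh> CAPTURE
cqlsh> CAPTURE OFF
```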
Gives information about cqlsh commands. To see available topics, enter HELP without any arguments. To see help on a topic, use HELP <topic>. Also see the --browser argument for controlling what browser is used to display help.
Enables or disables tracing for queries. When tracing is enabled, once a query completes, a trace of the events during the query will be printed.
TRACING ON
TRACING OFF
Enables paging, disables paging, or sets the page size for read queries. When paging is enabled, only one page of data will be fetched at a time and a prompt will appear to fetch the next page. Generally, it’s a good idea to leave paging enabled in an interactive session to avoid fetching and printing large amounts of data at once.
PAGING ON
PAGING OFF
PAGING <page size in rows>
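For example, to page through a large table 100 rows at a time, then turn paging off for a scripted session (the keyspace and table are hypothetical):

```
cqlsh> PAGING 100
cqlsh> SELECT * FROM myks.events;
cqlsh> PAGING OFF
```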
Enables or disables vertical printing of rows. Enabling
EXPAND is useful when many columns are fetched, or the
contents of a single column are large.
EXPAND ON
EXPAND OFF
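A sketch of the effect; the row contents are hypothetical, and the exact layout may vary by version:

```
cqlsh> EXPAND ON
cqlsh> SELECT cluster_name, release_version FROM system.local;

@ Row 1
-----------------+--------------
 cluster_name    | Test Cluster
 release_version | 3.8
```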
Authenticate as a specified Cassandra user for the current session.
LOGIN <username> [<password>]
Ends the current session and terminates the cqlsh process.
Clears the console.
Prints a description (typically a series of DDL statements) of a schema element or the cluster. This is useful for dumping all or portions of the schema.
DESCRIBE CLUSTER
DESCRIBE SCHEMA
DESCRIBE KEYSPACES
DESCRIBE KEYSPACE <keyspace name>
DESCRIBE TABLES
DESCRIBE TABLE <table name>
DESCRIBE MATERIALIZED VIEW <view name>
DESCRIBE TYPES
DESCRIBE TYPE <type name>
DESCRIBE FUNCTIONS
DESCRIBE FUNCTION <function name>
DESCRIBE AGGREGATES
DESCRIBE AGGREGATE <aggregate function name>
In any of the commands, DESC may be used in place of DESCRIBE. The DESCRIBE CLUSTER command prints the cluster name and partitioner:
cqlsh> DESCRIBE CLUSTER

Cluster: Test Cluster
Partitioner: Murmur3Partitioner
The DESCRIBE SCHEMA command prints the DDL statements needed to recreate the entire schema. This is especially useful for dumping the schema in order to clone a cluster or restore from a backup.
Copies data from a table to a CSV file.
COPY <table name> [(<column>, ...)] TO <file name> WITH <copy option> [AND <copy option> ...]
If no columns are specified, all columns from the table will be copied to the CSV file. A subset of columns to copy may be specified by adding a comma-separated list of column names surrounded by parentheses after the table name.
<file name> should be a string literal (with single quotes) representing a path to the destination file. The file name can also contain the special value STDOUT (without single quotes) to print the CSV to STDOUT.
See Shared COPY Options for options that apply to both COPY TO and COPY FROM.
Options for COPY TO
- MAXREQUESTS: The maximum number of token ranges to fetch simultaneously. Defaults to 6.
- PAGESIZE: The number of rows to fetch in a single page. Defaults to 1000.
- PAGETIMEOUT: By default the page timeout is 10 seconds per 1000 entries in the page size, or 10 seconds if pagesize is smaller.
- BEGINTOKEN, ENDTOKEN: Token range to export. Defaults to exporting the full ring.
- MAXOUTPUTSIZE: The maximum size of the output file measured in number of lines; beyond this maximum the output file will be split into segments. -1 means unlimited, and is the default.
- ENCODING: The encoding used for characters. Defaults to utf8.
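Putting these together, a hypothetical export; the keyspace, table, and path are placeholders, and the option names follow cqlsh's COPY documentation:

```
cqlsh> COPY myks.users (id, name, email) TO '/tmp/users.csv'
   ... WITH PAGESIZE = 2000 AND MAXOUTPUTSIZE = 500000;
```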
Copies data from a CSV file to a table.
COPY <table name> [(<column>, ...)] FROM <file name> WITH <copy option> [AND <copy option> ...]
If no columns are specified, all columns from the CSV file will be copied to the table. A subset of columns to copy may be specified by adding a comma-separated list of column names surrounded by parentheses after the table name.
<file name> should be a string literal (with single quotes) representing a path to the source file. The file name can also contain the special value STDIN (without single quotes) to read the CSV data from STDIN.
See Shared COPY Options for options that apply to both COPY TO and COPY FROM.
Options for COPY FROM
- INGESTRATE: The maximum number of rows to process per second. Defaults to 100000.
- MAXROWS: The maximum number of rows to import. -1 means unlimited, and is the default.
- SKIPROWS: A number of initial rows to skip. Defaults to 0.
- SKIPCOLS: A comma-separated list of column names to ignore. By default, no columns are skipped.
- MAXPARSEERRORS: The maximum global number of parsing errors to ignore. -1 means unlimited, and is the default.
- MAXINSERTERRORS: The maximum global number of insert errors to ignore. -1 means unlimited. The default is 1000.
- ERRFILE: A file to store all rows that could not be imported; by default this is import_<ks>_<table>.err, where <ks> is your keyspace and <table> is your table name.
- MAXBATCHSIZE: The max number of rows inserted in a single batch. Defaults to 20.
- MINBATCHSIZE: The min number of rows inserted in a single batch. Defaults to 2.
- CHUNKSIZE: The number of rows that are passed to child worker processes from the main process at a time. Defaults to 1000.
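And a hypothetical import using a few of these options; the keyspace, table, and paths are placeholders:

```
cqlsh> COPY myks.users (id, name, email) FROM '/tmp/users.csv'
   ... WITH CHUNKSIZE = 500 AND MAXBATCHSIZE = 50 AND ERRFILE = '/tmp/users_errors.err';
```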