Scylla Cloud allows you to connect your application's private network directly to your Scylla cluster's private network using GCP's VPC peering. For more information on GCP's VPC peering and its security advantages, see the VPC Networking Overview.
If you are running Scylla Cloud on AWS, refer to these instructions.
VPC peering is set only at the cluster creation stage and cannot be configured on an existing cluster.
VPC peering is a mandatory setting for multi-Data Center (DC) deployments.
The following procedure describes how to set up Virtual Private Cloud (VPC) peering to connect your Scylla cluster to your application on Google Cloud Platform (GCP). It requires access to your instances on GCP and creating a cluster with VPC peering enabled in Scylla Cloud.
Before You Begin
Verify that you have access to your GCP Console, and your user has view/edit permissions for the VPC Peering settings.
From the right-side menu, click Add New Cluster.
In the Provider section, select Google Cloud.
In the Where to Deploy section, choose Scylla Account.
In the Details section, enter the following information:
Cluster Name - human-readable text to help you identify your cluster.
Allowed IPs - list the IP addresses you want to permit to connect to your cluster.
Select Enable VPC Peering.
In the Cluster Network field, enter your cluster’s network IP address. By default, the cluster’s IP/CIDR is displayed. You can change it to a different IP/CIDR.
Scroll down the page and continue with the cluster creation process. Choose the instance type, the number of nodes, the replication factor (RF), and any additional features you want to purchase.
When you’re finished choosing all the options you want, click Launch Cluster.
It will take a few minutes for your cluster to launch. When it is ready, you will see a large green checkmark. At the bottom of the screen, click Setup VPC Peering. The VPC Peering wizard opens, and you can complete the VPC Peering setup.
This procedure is performed only after you have successfully launched a cluster as described in Launch a Scylla Cloud Cluster on GCP with VPC Peering Enabled.
On the GCP Details page, fill in the fields as follows:
GCP Project ID - enter your GCP Project ID. If you don't know your Project ID, see the instructions on how to locate it here.
VPC Network Name - enter the name of the network you would like to use under the same project.
VPC Network - enter the network block of your VPC in CIDR format. This allows Scylla Cloud to route correctly to your VPC. The CIDR must not overlap with the IP/CIDR you set at cluster creation (default: 172.31.0.0/16). If your VPC has multiple CIDR blocks, list them all, separated by commas. You can identify the network IP/CIDR to use from the GCP Network page, under the same project.
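The non-overlap requirement can be checked locally before submitting the request. Below is a minimal sketch in bash; all CIDR values are hypothetical examples, so substitute your own cluster and application ranges.

```shell
#!/usr/bin/env bash
# Minimal sketch: verify that your application VPC's CIDR does not overlap
# with the cluster CIDR chosen at creation time (values are examples).
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

overlaps() {                        # usage: overlaps CIDR1 CIDR2
  local i1 i2 min mask
  i1=$(ip_to_int "${1%/*}"); i2=$(ip_to_int "${2%/*}")
  min=$(( ${1#*/} < ${2#*/} ? ${1#*/} : ${2#*/} ))
  mask=$(( (0xFFFFFFFF << (32 - min)) & 0xFFFFFFFF ))
  [ $(( i1 & mask )) -eq $(( i2 & mask )) ]
}

cluster="172.31.0.0/16"             # default cluster CIDR
app_vpc="10.0.0.0/16"               # hypothetical application VPC CIDR
if overlaps "$cluster" "$app_vpc"; then
  echo "CIDRs overlap - pick a different range"
else
  echo "no overlap - safe to peer"
fi
```

With the example values above, the two /16 ranges do not overlap, so the check passes; changing `app_vpc` to anything inside 172.31.0.0/16 would flag a conflict.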
Click Submit VPC Peering Request.
This procedure requires you to access the VPC Peering console on GCP and complete the peering setup. You have two options to configure VPC Peering:
Access the GCP VPC network Peering console
Remember to use the same Project ID which you entered in Configure the VPC Details on Scylla Cloud.
Fill in the remaining required fields:
Name - your VPC Peering name
Your VPC network - choose your GCP Network from the drop-down menu
Peered VPC network - choose In another project.
Project ID - enter the Scylla Cloud GCP Project name: <Scylla_Project_Name>
VPC Network name - enter Scylla Cloud VPC network name <Scylla_Cluster_VPC_ID>
Leave all other settings as they are.
Do not use this method if you already completed the Manual Setup with a GCP Console.
An alternative way to configure VPC peering is to run the following gcloud CLI command with your values:
gcloud compute networks peerings create [peering name] --network [your network name] --peer-network [URI] --project [your project name] --peer-project [Project_name]
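For illustration, the sketch below fills in the flags with hypothetical placeholder values (the peering name, network names, project IDs, and the peer-network URI format are all assumptions; take the real Scylla-side values from the GCP Details page in Scylla Cloud). It prints the gcloud invocation rather than executing it, so you can review it first.

```shell
#!/usr/bin/env bash
# All values below are hypothetical placeholders - substitute your own.
PEERING_NAME="scylla-peering"
MY_NETWORK="my-app-network"              # your VPC network name
MY_PROJECT="my-gcp-project"              # your GCP Project ID
SCYLLA_PROJECT="scylla-cloud-project"    # from the Scylla Cloud GCP Details page
SCYLLA_NETWORK_URI="projects/${SCYLLA_PROJECT}/global/networks/scylla-cluster-vpc"

# Print the gcloud invocation for review instead of running it:
echo gcloud compute networks peerings create "$PEERING_NAME" \
  --network "$MY_NETWORK" \
  --peer-network "$SCYLLA_NETWORK_URI" \
  --project "$MY_PROJECT" \
  --peer-project "$SCYLLA_PROJECT"
```

Remove the leading echo once you have confirmed the values match your project and the Scylla Cloud details.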
This procedure describes several ways to test the VPC peering between Scylla Cloud and your GCP instances.
Test connectivity of your VPC: from a VM instance within the VPC network, try to connect to port 9042 with nc, telnet, or cqlsh (the required credentials are on the Cluster page):
For example, with nc:

nc -z 198.51.100.0 9042 && echo ok!

Or with the telnet command, telnet to the cluster's IP address on port 9042:

telnet 198.51.100.0 9042
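As a sketch, the same reachability check can be scripted with bash's built-in /dev/tcp, so it needs no extra tools on the VM. The IP below is a documentation-range placeholder; replace it with your cluster's private IP from the Cluster page.

```shell
#!/usr/bin/env bash
# Sketch: check whether the CQL port is reachable using bash's /dev/tcp.
# 198.51.100.0 is a documentation-range placeholder - replace it with
# your cluster's private IP.
port_open() {                        # usage: port_open HOST PORT
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open "198.51.100.0" 9042; then
  echo "CQL port reachable"
else
  echo "CQL port not reachable"
fi
```

Run this from a VM inside the peered VPC; if the port is not reachable there, re-check the peering status and the Allowed IPs list.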