Migrate from a CockroachDB Serverless Cluster to a CockroachDB Dedicated Cluster

This page has instructions for migrating data from a CockroachDB Serverless cluster to a CockroachDB Dedicated cluster by exporting to CSV and using IMPORT. You may want to migrate to CockroachDB Dedicated if you need a single-tenant cluster with no shared resources.

The steps below use sample data from the tpcc workload.
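
If you want to follow along, you can load the tpcc sample data into your Serverless cluster with the cockroach workload command. This is a minimal sketch; the connection string is a placeholder that you should replace with your own cluster's values:

$ cockroach workload init tpcc \
'postgres://<username>:<password>@free-tier.<region>.cockroachlabs.cloud:26257/tpcc?sslmode=verify-full&sslrootcert=<path/to/certs_dir>/cc-ca.crt&options=--cluster=<cluster_name>'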

Before you start

These instructions assume you already have the following:

- A CockroachDB Serverless cluster containing the data you want to migrate
- A CockroachDB Dedicated cluster to migrate the data to
- The cockroach binary installed on your local machine

Step 1. Export data to a local CSV file

In CockroachDB Serverless clusters, all external service integrations are disabled. This means that if you want to export data, you must use cockroach sql --execute to query the data you want to export, and then redirect the output to a local file. For example:

$ cockroach sql \
--url 'postgres://<username>:<password>@free-tier.<region>.cockroachlabs.cloud:26257?sslmode=verify-full&sslrootcert=<path/to/certs_dir>/cc-ca.crt&options=--cluster=<cluster_name>' \
--execute "SELECT * FROM tpcc.warehouse" --format=csv > /Users/<username>/<path/to/file>/warehouse.csv

Running the example command above exports the following data to the local warehouse.csv file:

w_id,w_name,w_street_1,w_street_2,w_city,w_state,w_zip,w_tax,w_ytd
0,8,17,13,11,SF,640911111,0.0806,300000.00

Repeat this step for each table you want to migrate. For example, let's export one more table (district) from the tpcc database:

$ cockroach sql \
--url 'postgres://<username>:<password>@free-tier.<region>.cockroachlabs.cloud:26257?sslmode=verify-full&sslrootcert=<path/to/certs_dir>/cc-ca.crt&options=--cluster=<cluster_name>' \
--execute "SELECT * FROM tpcc.district" --format=csv > /Users/<username>/<path/to/file>/district.csv

This will create the district.csv file with the following data:

d_id,d_w_id,d_name,d_street_1,d_street_2,d_city,d_state,d_zip,d_tax,d_ytd,d_next_o_id
1,0,9cdLXe0Yh,gLRrwsmd68P2b,ElAgrnp8ueW,NXJpBB0ObpVWo1B,QQ,640911111,0.1692,30000.00,3001
2,0,1fcW8Rsa,CXoEzmssaF9m9cdLXe0Y,hgLRrwsmd68P2bElAgr,np8ueWNXJpBB0ObpVW,VW,902211111,0.1947,30000.00,3001
3,0,6rumMm,p6NHnwiwKdcgp,hy3v1U5yraPx,xELo5B1fcW8RsaCXoEz,QQ,230811111,0.0651,30000.00,3001
4,0,ssaF9m9,cdLXe0YhgLRrws,md68P2bElAgrn,p8ueWNXJpBB0ObpVW,SF,308211111,0.1455,30000.00,3001
5,0,Kdcgphy3,v1U5yraPxxELo,5B1fcW8RsaCXoEzm,ssaF9m9cdLXe0YhgLR,CQ,308211111,0.1195,30000.00,3001
6,0,mssaF9m9cd,LXe0YhgLRrwsmd68P,2bElAgrnp8ue,WNXJpBB0ObpVW,WM,223011111,0.0709,30000.00,3001
7,0,zmssaF,9m9cdLXe0YhgLRrws,md68P2bElA,grnp8ueWNX,OA,264011111,0.1060,30000.00,3001
8,0,8RsaCXoEz,mssaF9m9cdLXe0Yh,gLRrwsmd68P2bElAgrnp,8ueWNXJpBB0ObpVWo,VW,022311111,0.0173,30000.00,3001
9,0,fcW8Rs,aCXoEzmssaF9m9,cdLXe0YhgLRrws,md68P2bElAgrnp8ue,JC,230811111,0.0755,30000.00,3001
10,0,RsaCXoEzm,ssaF9m9cdLXe0YhgLRr,wsmd68P2bE,lAgrnp8ueWNXJpBB0Ob,PV,082911111,0.1779,30000.00,3001

Step 2. Host the files where the CockroachDB Dedicated cluster can access them

After you've exported your CockroachDB Serverless cluster's data to your local machine, upload the files to a storage location that the CockroachDB Dedicated cluster can access. We recommend using cloud storage or userfile.

In this example, we'll use Amazon S3 to host the two files (warehouse.csv and district.csv) created in Step 1.
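
Assuming you have the AWS CLI installed and configured with credentials for a bucket you control (the bucket name below is a placeholder), uploading the files might look like this:

$ aws s3 cp /Users/<username>/<path/to/file>/warehouse.csv s3://<bucket_name>/warehouse.csv
$ aws s3 cp /Users/<username>/<path/to/file>/district.csv s3://<bucket_name>/district.csv

Alternatively, cockroach userfile upload can store the files in your cluster's user-scoped storage.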

Step 3. Import the CSV

Tip:

For best practices for optimizing import performance in CockroachDB, see Import Performance Best Practices.

  1. Create the database you want to import the tables into. For example:

    > CREATE DATABASE tpcc;
    
  2. Write an IMPORT statement that matches the schema of the table data you're importing.

    Tip:

    You can use the SHOW CREATE TABLE statement in the CockroachDB Serverless cluster to view the CREATE statement for the table you're migrating.
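
    For example, run the following in the CockroachDB Serverless cluster to retrieve the warehouse table's CREATE statement:

    > SHOW CREATE TABLE tpcc.warehouse;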

    Note:

    The column order in your schema must match the column order in the file being imported.

    For example, to import the data from warehouse.csv into a warehouse table, use the following statement:

    > IMPORT TABLE tpcc.warehouse (
        w_id INT8 NOT NULL,
        w_name VARCHAR(10) NULL,
        w_street_1 VARCHAR(20) NULL,
        w_street_2 VARCHAR(20) NULL,
        w_city VARCHAR(20) NULL,
        w_state CHAR(2) NULL,
        w_zip CHAR(9) NULL,
        w_tax DECIMAL(4,4) NULL,
        w_ytd DECIMAL(12,2) NULL,
        CONSTRAINT "primary" PRIMARY KEY (w_id ASC),
        FAMILY "primary" (w_id, w_name, w_street_1, w_street_2, w_city, w_state, w_zip, w_tax, w_ytd)
      ) CSV DATA ('s3://<bucket_name>/warehouse.csv?AWS_ACCESS_KEY_ID=<access_key>&AWS_SECRET_ACCESS_KEY=<secret_key>')
      WITH
        skip = '1';
    

    Notice the skip option in the statement above: the first line of the CSV file created in Step 1 is the header row, not data to import. For more information about the options available for IMPORT ... CSV, see Import options. When the import succeeds, the statement returns the job's status:

            job_id       |  status   | fraction_completed | rows | index_entries | bytes
    ---------------------+-----------+--------------------+------+---------------+--------
      652283814057476097 | succeeded |                  1 |    1 |             0 |    53
    (1 row)
    
    Note:

    To import data into an existing table, use IMPORT INTO.
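
    A minimal sketch of that variant, assuming the warehouse table already exists in the target database with the schema shown above:

    > IMPORT INTO tpcc.warehouse (w_id, w_name, w_street_1, w_street_2, w_city, w_state, w_zip, w_tax, w_ytd)
        CSV DATA ('s3://<bucket_name>/warehouse.csv?AWS_ACCESS_KEY_ID=<access_key>&AWS_SECRET_ACCESS_KEY=<secret_key>')
        WITH
          skip = '1';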

  3. Repeat the above for each CSV file you want to import. For example, let's import the second file (district.csv) we created earlier:

    > IMPORT TABLE tpcc.district (
        d_id INT8 NOT NULL,
        d_w_id INT8 NOT NULL,
        d_name VARCHAR(10) NULL,
        d_street_1 VARCHAR(20) NULL,
        d_street_2 VARCHAR(20) NULL,
        d_city VARCHAR(20) NULL,
        d_state CHAR(2) NULL,
        d_zip CHAR(9) NULL,
        d_tax DECIMAL(4,4) NULL,
        d_ytd DECIMAL(12,2) NULL,
        d_next_o_id INT8 NULL,
        CONSTRAINT "primary" PRIMARY KEY (d_w_id ASC, d_id ASC),
        FAMILY "primary" (d_id, d_w_id, d_name, d_street_1, d_street_2, d_city, d_state, d_zip, d_tax, d_ytd, d_next_o_id)                     
      ) CSV DATA ('s3://<bucket_name>/district.csv?AWS_ACCESS_KEY_ID=<access_key>&AWS_SECRET_ACCESS_KEY=<secret_key>')
      WITH
        skip = '1';
    
            job_id       |  status   | fraction_completed | rows | index_entries | bytes
    ---------------------+-----------+--------------------+------+---------------+--------
      652285202857820161 | succeeded |                  1 |   10 |             0 |  1017
    (1 row)
    
  4. (Optional) To verify that the two tables were imported, use SHOW TABLES:

    > SHOW TABLES FROM tpcc;
    
      schema_name | table_name | type  | owner  | estimated_row_count
    --------------+------------+-------+--------+----------------------
      public      | district   | table | lauren |                  10
      public      | warehouse  | table | lauren |                   1
    (2 rows)
    

Step 4. Add any foreign key relationships

Once all of the tables you want to migrate have been imported into the CockroachDB Dedicated cluster, add the foreign key relationships. To do this, use ALTER TABLE ... ADD CONSTRAINT. For example:

> ALTER TABLE tpcc.district ADD CONSTRAINT fk_d_w_id_ref_warehouse FOREIGN KEY (d_w_id) REFERENCES tpcc.warehouse(w_id);
ALTER TABLE
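
To confirm that the foreign key was added, you can list the table's constraints with SHOW CONSTRAINTS:

> SHOW CONSTRAINTS FROM tpcc.district;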

See also

- IMPORT
- IMPORT INTO
- Import options
- Import Performance Best Practices
