November 12, 2019
With the release of CockroachDB v19.2, we’ve made a variety of performance, resiliency, and usability improvements. Check out a comprehensive summary of the most significant user-facing changes and then upgrade to CockroachDB v19.2. You can also read more about these changes in the v19.2 blog post or watch our 19.2 release demo and overview.
A denial-of-service (DoS) vulnerability is present in CockroachDB v19.2.0 - v19.2.11 due to a bug in protobuf. This is resolved in CockroachDB v19.2.12 and later releases. When upgrading is not an option, users should audit their network configuration to verify that the CockroachDB HTTP port is not available to untrusted clients. We recommend blocking the HTTP port behind a firewall.
For more information, including other affected versions, see Technical Advisory 58932.
Cockroach Labs has discovered a bug relating to incremental backups in CockroachDB v19.2.0 - v19.2.12. If a backup coincides with an in-progress index creation (backfill), RESTORE, or IMPORT, it is possible that a subsequent incremental backup will not include all of the indexed, restored, or imported data.
Users are advised to upgrade to v20.1.15, v20.2.8, or a later release, which include resolutions for this issue.
For more information, including other affected versions, see Technical Advisory 63162.
In addition to v19.2, we're launching Cockroach University, a free, online learning tool for developers and architects who want to gain a fundamental understanding of distributed databases and deep knowledge of CockroachDB’s functionality and architecture. The first course, “Getting Started with CockroachDB,” uses videos, exercises, and quizzes to teach the key characteristics of a distributed SQL database, walking you through how to spin up a CockroachDB instance, run basic queries, and test out CockroachDB’s unique capabilities. For additional information or to register, visit university.cockroachlabs.com.
Downloads
Docker image
$ docker pull cockroachdb/cockroach:v19.2.0
Summary
This section summarizes the most significant user-facing changes in v19.2.0. For a complete list of features and changes, including bug fixes and performance improvements, see the release notes for previous testing releases.
- CockroachCloud
- Core features
- Enterprise features
- Backward-incompatible changes
- Known limitations
- Documentation
Note that SQL support for timestamp objects with a precision value, which was present in earlier testing releases of v19.2, has been reverted and is not included in this release.
CockroachCloud
CockroachCloud will soon offer a completely self-service account creation and management workflow as well as single-region cluster creation in AWS and GCP using a credit card. Sign up for the beta program here.
Core features
These features are freely available in the core version and do not require an enterprise license.
The core version of CockroachDB v19.2 uses the Business Source License instead of the Apache 2.0 License. For more information on why we changed our licensing approach and some practical questions and implications, see our blog post and Licensing FAQs. The full details of the license can be found on GitHub.
Feature | Description |
---|---|
Parallel Commits | CockroachDB's new optimized atomic commit protocol cuts the commit latency of a transaction in half, from two rounds of consensus down to one. Combined with transaction pipelining, parallel commits brings the latency incurred by common OLTP transactions to near the theoretical minimum: the sum of all read latencies plus one round of consensus latency. This especially lowers latency for transactions involving secondary indexes both in multi-region and single-region deployments. |
Vectorized Query Execution | CockroachDB now supports column-oriented ("vectorized") query execution for operations that are guaranteed to execute in memory on tables with supported data types. |
Bulk Import Improvements | CockroachDB now supports efficiently loading large amounts of CSV data into existing tables using the new IMPORT INTO statement (see the sketches after this table). Also, for the previously existing IMPORT statement for bulk import of new tables, using as many files as you have nodes and ordering data in imported files by primary key now increases parallelism and performance. |
Data Replication Reports | Several new and updated system tables can help you query the status of your cluster's data replication, data placement, and replication zone constraint conformance. For example, you can use these reports to see what data is under-replicated or unavailable, to show which of your localities (if any) are vulnerable to data unavailability in more common failure scenarios, and to see if any of your cluster's data placement constraints are being violated. |
Multi-Use CTEs | Common Table Expressions can now be re-used multiple times in the same query via a WITH operator. |
Comprehensive Cost-Based Optimizer | All SQL queries now leverage the cost-based optimizer to choose the lowest cost plans, including DDL statements and window functions that previously leveraged the legacy heuristic planner. |
Ordering Aggregations | Non-commutative aggregate functions are sensitive to the order in which rows are processed. This order can now be controlled with an ORDER BY clause within the function argument list. |
Index Hints for UPDATE & DELETE | It's now possible to force the use of a specific index when deleting or updating rows (see the sketches after this table). |
Streaming with JDBC | CockroachDB now provides limited support for Postgres wire-protocol cursors for implicit and explicit transactions executed to completion. The Java JDBC driver can use this protocol to stream queries with large result sets, providing much faster performance than paginating results with LIMIT ... OFFSET. |
Transaction Latency Graphs | The SQL Dashboard in the Admin UI now provides timeseries graphs of p90 and p99 transaction latencies to complement the per-statement metrics on the Statements page. |
CREATE TABLE AS Improvements | When using the CREATE TABLE AS statement to create a new table from a selection query, you can now specify the primary key and define column families for the new table. The former is especially helpful for modifying the primary key of tables for geo-partitioning. Also, CREATE TABLE AS is now usable for larger tables than before. |
Adding NOT NULL Constraint | It's now possible to add the NOT NULL constraint to an existing column, in addition to the previous ability to set this constraint during table creation. |
Local Testing Improvements | CockroachDB v19.2 includes several usability improvements to running CockroachDB locally for SQL testing and app development. First, the cockroach demo command, which starts a temporary, in-memory CockroachDB cluster with immediate access to an interactive SQL shell, now comes pre-loaded with the movr dataset and a temporary enterprise license that expires after an hour. This means users can try out enterprise features quickly before requesting a license for use in a persistent cluster. Also, the --nodes flag now lets you specify the number of in-memory nodes to create for the demo, with the new --demo-locality flag setting locality information for each node. Alternatively, the --geo-partitioned-replicas flag lets you start a 9-node demo cluster with the Geo-Partitioned Replicas data topology applied automatically to the movr database. Finally, the --with-load flag lets you run movr as an active workload against the in-memory cluster. |
Cluster Startup Improvements | There are now distinct methods for starting single-node and multi-node clusters. For multi-node clusters, start each node with the cockroach start command including a --join flag and then run the one-time cockroach init command. Using cockroach start without the --join flag is now deprecated and will be removed in a future release. For single-node clusters, use the new cockroach start-single-node command to start just a single node with replication disabled and without the need for any extra commands. |
Built-in Workload Improvements | Cockroach Labs' fictional vehicle-sharing app, MovR, is now available as a sample workload using the cockroach workload CLI command. Also, the --display-every and --display-format flags now make it possible to control the frequency and format for printing per-operation statistics for several workloads. |
Data Type Improvements | DATE values in CockroachDB are now fully PostgreSQL-compatible, including support for special values (e.g., +/- infinity). Existing dates outside of the PostgreSQL date range (4714-11-24 BC to 5874897-12-31) are converted to +/- infinity dates. Also, you can now use the timezone() and AT TIME ZONE functions to convert a TIMESTAMPTZ into a TIMESTAMP at a specified timezone, or vice versa. |
Interactive SQL Shell Commands | Within the interactive SQL shell, the \l, \dt, \du, and \d <table> commands now list all databases in the CockroachDB cluster, show the tables of the current schema in the current database, list the users for all databases, and show details about the columns in the specified table, respectively. |
Removing Manual Splits | The new UNSPLIT AT statement removes a split enforcement on a key-value range, at a specified row in a table or index, allowing CockroachDB to merge the range as needed. |
Locking and FOR UPDATE | Because CockroachDB only supports SERIALIZABLE isolation, locking is not required to order concurrent transactions relative to each other. However, the FOR UPDATE locking clause is now supported in selection queries as a no-op for database migration compatibility only. |
Showing Row Location | The new SHOW RANGE ... FOR ROW statement shows information about a range for a particular row of data. This information is useful for verifying how SQL data maps to underlying ranges, and where the replicas for a range are located. However, note that this is an experimental feature, and the interface and output are subject to change. |
Viewing Node Locality | It's now easy to retrieve the localities of nodes for setting zone configuration constraints via the SHOW LOCALITY statement and the cockroach node status command. |
Viewing Complete Jobs | By default, the SHOW JOBS statement lists all of the types of long-running tasks your cluster has performed in the last 12 hours. You can now use the WHEN COMPLETE option to return only after all specified job ID(s) have completed. |
Viewing Comments for Virtual Tables | Using SHOW TABLES ... WITH COMMENTS on virtual tables in the pg_catalog , information_schema , and crdb_internal schemas now returns useful comments, often with links to further documentation. |
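To make the bulk-loading and table-definition improvements above more concrete, here is a minimal SQL sketch. The orders table, its columns, and the file URL are hypothetical placeholders introduced for illustration only; adjust names and storage URLs for your own cluster.

-- Load CSV data into an existing table with the new IMPORT INTO statement.
-- The file URL below is a placeholder; any supported storage URL works here.
IMPORT INTO orders (id, customer_id, total)
    CSV DATA ('https://example.com/orders.csv');

-- Add a NOT NULL constraint to an existing column.
ALTER TABLE orders ALTER COLUMN customer_id SET NOT NULL;

-- Create a table from a selection query while choosing its primary key,
-- e.g., to reorder the key columns in preparation for geo-partitioning.
CREATE TABLE orders_by_customer (customer_id, id, total, PRIMARY KEY (customer_id, id))
    AS SELECT customer_id, id, total FROM orders;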
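A few of the query-side additions can also be sketched in SQL. Again, the table, column, and index names (orders, created_at, orders_ts_idx) are hypothetical, and the plans you get will depend on your own schema.

-- Control the input order of a non-commutative aggregate with an ORDER BY
-- clause inside the function's argument list.
SELECT customer_id, array_agg(total ORDER BY created_at)
    FROM orders GROUP BY customer_id;

-- Force a specific index when deleting or updating rows.
DELETE FROM orders@orders_ts_idx WHERE created_at < '2019-01-01';
UPDATE orders@primary SET total = 0 WHERE id = 1;

-- Remove a manual split enforcement so the range can be merged as needed.
ALTER TABLE orders UNSPLIT AT VALUES (1000);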
Enterprise features
These features require an enterprise license. Register for a 30-day trial license here, or consider testing enterprise features locally using the cockroach demo CLI command, which starts an in-memory CockroachDB cluster with a temporary enterprise license pre-loaded.
Feature | Description |
---|---|
Backup & Restore Improvements | CockroachDB now supports locality-aware backup and restore such that each node writes to and restores from files in its locality. This can reduce cloud storage data transfer costs by keeping data within cloud regions and help you comply with data domiciling requirements. Also, using SHOW BACKUP with the new SCHEMAS option adds the number of rows and the schema of the backed-up table to the output, and using RESTORE with the new skip_missing_views option skips restoring views whose dependencies are not included in the restore (see the sketch after this table). |
Geo-Partitioning Improvements | CockroachDB v19.2 includes several usability improvements to geo-partitioning. First, it's now possible to name partitions identically across indexes of a table (e.g., us-west, us-east) and use a single command to create zone configurations for all partitions of a given name, saving you multiple steps. Next, it's easy to retrieve the localities of nodes for setting zone configuration constraints via the SHOW LOCALITY statement and the cockroach node status command, both of which do not require an enterprise license. There are now also several ways to view the details of partitions and confirm they are in effect, from the outputs of SHOW CREATE TABLE and the table schemas listed on the Databases page in the Admin UI to the output of SHOW PARTITIONS. Finally, it's now much easier to efficiently query partitioned data; when filtering by the column directly following the partitioned prefix in the primary key, the cost-based optimizer creates a query plan that scans each partition in parallel, rather than performing a costly sequential scan of the entire table. Filtering by the partition value itself can further improve performance by limiting the scan to the specific partition(s) that contain the data that you are querying. |
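Here is a minimal sketch of the locality-aware backup and restore flow described above. It assumes a cluster with a node locality of region=us-west1, a movr database, and placeholder S3 bucket URLs; credentials and table names are hypothetical and omitted for brevity.

-- Locality-aware backup: each node writes its data to the URI whose
-- COCKROACH_LOCALITY matches, with 'default' catching everything else.
BACKUP DATABASE movr TO
    ('s3://example-backups/east?COCKROACH_LOCALITY=default',
     's3://example-backups/west?COCKROACH_LOCALITY=region%3Dus-west1');

-- Inspect the contents and schemas of a backup before restoring it.
SHOW BACKUP SCHEMAS 's3://example-backups/east';

-- Restore from the same set of locality-aware URIs, skipping any views
-- whose dependencies are not included in the backup.
RESTORE TABLE movr.users FROM
    ('s3://example-backups/east', 's3://example-backups/west')
    WITH skip_missing_views;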
Backward-incompatible changes
Before upgrading to CockroachDB v19.2.0, be sure to review the following backward-incompatible changes and adjust your application as necessary.
- The cockroach zone CLI command has been removed. Use the equivalent CONFIGURE ZONE and SHOW ZONE CONFIGURATIONS SQL statements for managing replication zones instead.
- The IMPORT SQL statement no longer accepts quotes inside unquoted CSV fields.
- CONFIGURE ZONE SQL statements now fail if the user does not have sufficient privileges. If the target is a system range, the system database, or a table in the system database, the user must have an admin role. For all other databases and tables, the user must have the CREATE privilege on the target database or table. This change might be backward-incompatible for users running scripted CONFIGURE ZONE statements with restricted permissions. To add the necessary permissions, use GRANT <privileges> or GRANT <roles> as a user with an admin role. For example, to grant a user the admin role, run GRANT admin TO <user>. To grant the CREATE privilege on a database or table, run GRANT CREATE ON [DATABASE | TABLE] <name> TO <user>. (See the sketch after this list.)
- INT and FLOAT columns of less than the maximum width are now returned as their own type via the binary protocol. For example, an int4 column is returned in 32 bits over the pgwire binary protocol instead of 64 bits.
- The changefeed.min_high_water metric has been removed from the Prometheus endpoint. To detect high changefeed latency, use the changefeed.max_behind_nanos metric instead.
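As a sketch of the new CONFIGURE ZONE privilege requirement, the following could be run before a scripted zone change; the app_user role and movr database are hypothetical examples, not part of the release itself.

-- Run as a user with the admin role: grant the CREATE privilege the
-- scripting user now needs to change zone configurations on this database.
GRANT CREATE ON DATABASE movr TO app_user;

-- app_user can now run scripted zone changes such as:
ALTER DATABASE movr CONFIGURE ZONE USING num_replicas = 5;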
Known limitations
For information about new and unresolved limitations in CockroachDB v19.2, with suggested workarounds where applicable, see Known Limitations.
Documentation
Topic | Description |
---|---|
Performance Benchmarking | Added an overview of CockroachDB's performance profiles (scaling, throughput, latency), based on Cockroach Labs' extensive testing using industry-standard benchmarks like TPC-C and Sysbench, as well as detailed instructions for reproducing our TPC-C benchmarking results at different scales. |
Multi-Region Deployment | Updated the tutorial on getting low latency reads and writes in a multi-region cluster to feature two of the most important multi-region data topologies for dramatically reducing the impact of network latency, Geo-Partitioned Replicas and Duplicate Indexes. |
Orchestration with Kubernetes | Expanded the tutorial on Kubernetes single-region deployment to cover running on Amazon's hosted EKS and CSR naming requirements for secure deployments. Also updated and expanded the instructions on using Helm. |
Client-Side Transaction Retries | Updated and simplified the client-side transaction retry logic in the Java, Python, and Go getting started tutorials and code samples (see the sketch after this table). Also added pseudocode to help with the implementation of this logic in other languages, as well as instructions for authors of database drivers and ORMs who would like to implement client-side retries for maximum efficiency and ease of use by application developers. |
SQL Tuning with EXPLAIN | Added a tutorial on how to use EXPLAIN to identify and resolve common SQL performance problems. |
Testing with MovR Dataset | Added an overview of MovR, CockroachDB's fictional vehicle-sharing dataset and application, and updated several SQL pages and examples to use the built-in MovR dataset, for example, Learn CockroachDB SQL. |
Migration from Oracle | Added guidance on migrating from Oracle, including the process of converting schema and exporting data for loading into CockroachDB. |
App Deployment on CockroachCloud | Added a tutorial on running a sample To-Do app in Kubernetes with CockroachCloud as the datastore. The app is written in Python with Flask as the web framework and SQLAlchemy for working with SQL data, and the code is open-source and forkable. |
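For reference, the SQL-level protocol that the client-side retry samples wrap looks roughly like the following. The statements inside the transaction and the accounts table are placeholders; the savepoint name cockroach_restart is the one CockroachDB expects for this retry protocol.

-- Open the transaction and mark the retry point.
BEGIN;
SAVEPOINT cockroach_restart;

-- Application statements go here (placeholders).
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

-- On success, release the savepoint and commit.
RELEASE SAVEPOINT cockroach_restart;
COMMIT;

-- If any statement fails with a retryable error (SQLSTATE 40001), the client
-- should instead run the following and then retry its statements:
-- ROLLBACK TO SAVEPOINT cockroach_restart;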