This page describes newly identified limitations in the CockroachDB v20.2.8 release as well as unresolved limitations identified in earlier releases.
New limitations
Partitioning on ENUM values
Partitions cannot be created on columns of type ENUM.
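For illustration, the following sketch shows the kind of statement this limitation rejects. The status type and tickets table are hypothetical, and table partitioning also requires an enterprise license.
> CREATE TYPE status AS ENUM ('open', 'closed');
> CREATE TABLE tickets (
    s status,
    id INT,
    PRIMARY KEY (s, id)
) PARTITION BY LIST (s) (   -- expected to fail: partitions cannot be created on ENUM columns
    PARTITION p_open VALUES IN ('open'),
    PARTITION p_closed VALUES IN ('closed')
);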
Multiple arbiter indexes for INSERT ON CONFLICT DO UPDATE
CockroachDB does not currently support multiple arbiter indexes for INSERT ON CONFLICT DO UPDATE, and will return an error if there are multiple unique or exclusion constraints matching the ON CONFLICT DO UPDATE specification.
IMPORT into a table with partial indexes
CockroachDB does not currently support IMPORTs into tables with partial indexes.
To work around this limitation (a sketch follows this list):
- Drop any partial indexes defined on the table.
- Perform the IMPORT.
- Recreate the partial indexes.
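For example, assuming a hypothetical table t with a partial index idx_active and a CSV file staged at a nodelocal location, the workaround might look like this:
> DROP INDEX t@idx_active;                                       -- 1. drop the partial index
> IMPORT INTO t (id, active) CSV DATA ('nodelocal://1/t.csv');   -- 2. perform the IMPORT
> CREATE INDEX idx_active ON t (id) WHERE active = true;         -- 3. recreate the partial index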
If you are performing an IMPORT of a PGDUMP with partial indexes:
- Drop the partial indexes on the PostgreSQL server.
- Recreate the PGDUMP.
- IMPORT the PGDUMP.
- Add partial indexes on the CockroachDB server.
Historical reads on restored objects
An object's historical data is not preserved upon RESTORE. This means that if an AS OF SYSTEM TIME query is issued on a restored object, the query will fail or the response will be incorrect because there is no historical data to query.
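For example, a query of the following form, issued against a hypothetical table restored within the last hour, would fail or return incorrect results because the restored table has no historical data at that timestamp:
> SELECT * FROM restored_table AS OF SYSTEM TIME '-1h';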
Spatial support limitations
CockroachDB supports efficiently storing and querying spatial data, with the following limitations:
- Not all PostGIS spatial functions are supported.
- The AddGeometryColumn spatial function only allows constant arguments.
- The AddGeometryColumn spatial function only allows the true value for its use_typmod parameter.
- CockroachDB does not support the @ operator. Instead of using @ in spatial expressions, we recommend using the inverse, with ~. For example, instead of a @ b, use b ~ a (see the sketch after this list).
- CockroachDB does not yet support INSERTs into the spatial_ref_sys table. This limitation also blocks the ogr2ogr -f PostgreSQL file conversion command.
- CockroachDB does not yet support DECLARE CURSOR, which prevents the ogr2ogr conversion tool from exporting from CockroachDB to certain formats and prevents QGIS from working with CockroachDB. To work around this limitation, export data first to CSV or GeoJSON format.
- CockroachDB does not yet support storing spatial objects of more than two dimensions.
- CockroachDB does not yet support Triangle or TIN spatial shapes.
- CockroachDB does not yet support Curve, MultiCurve, or CircularString spatial shapes.
- CockroachDB does not yet support k-nearest neighbors.
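As a minimal sketch of the @ workaround, assuming hypothetical tables parks and trees with GEOMETRY columns named geom, write the containment test with ~ (contains) instead of @ (is contained by):
-- Instead of: SELECT * FROM parks p, trees t WHERE t.geom @ p.geom;
> SELECT * FROM parks p, trees t WHERE p.geom ~ t.geom;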
Unresolved limitations
Collation names that include upper-case letters or hyphens may cause errors
Using a collation name with upper-case letters or hyphens may result in errors.
For example, the following SQL will result in an error:
> CREATE TABLE nocase_strings (s STRING COLLATE "en-US-u-ks-level2");
> INSERT INTO nocase_strings VALUES ('Aaa' COLLATE "en-US-u-ks-level2"), ('Bbb' COLLATE "en-US-u-ks-level2");
> SELECT s FROM nocase_strings WHERE s = ('bbb' COLLATE "en-US-u-ks-level2");
ERROR: internal error: "$0" = 'bbb' COLLATE en_us_u_ks_level2: unsupported comparison operator: <collatedstring{en-US-u-ks-level2}> = <collatedstring{en_us_u_ks_level2}>
As a workaround, only use collation names that have lower-case letters and underscores.
Subqueries in SET statements
It is not currently possible to use a subquery in a SET or SET CLUSTER SETTING statement. For example:
> SET application_name = (SELECT 'a' || 'b');
ERROR: invalid value for parameter "application_name": "(SELECT 'a' || 'b')"
SQLSTATE: 22023
DETAIL: subqueries are not allowed in SET
Enterprise BACKUP does not capture database/table/column comments
The COMMENT ON statement associates comments to databases, tables, or columns. However, the internal table (system.comments) in which these comments are stored is not captured by a BACKUP of a table or database.
As a workaround, take a cluster backup instead, as the system.comments table is included in cluster backups.
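For example, a sketch of a full cluster backup (the storage URI is a placeholder), which includes system.comments along with the rest of the cluster:
> BACKUP TO 'nodelocal://1/cluster-backup';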
CHECK constraint validation for INSERT ON CONFLICT differs from PostgreSQL
CockroachDB validates CHECK constraints on the results of INSERT ON CONFLICT statements, preventing new or changed rows from violating the constraint. Unlike PostgreSQL, CockroachDB does not also validate CHECK constraints on the input rows of INSERT ON CONFLICT statements.
If this difference matters to your client, you can INSERT ON CONFLICT from a SELECT statement and check the inserted value as part of the SELECT. For example, instead of defining CHECK (x > 0) on t.x and using INSERT INTO t(x) VALUES (3) ON CONFLICT (x) DO UPDATE SET x = excluded.x, you could do the following:
> INSERT INTO t (x)
SELECT if (x <= 0, crdb_internal.force_error('23514', 'check constraint violated'), x)
FROM (values (3)) AS v(x)
ON CONFLICT (x)
DO UPDATE SET x = excluded.x;
An x value less than 1 would result in the following error:
pq: check constraint violated
Cold starts of large clusters may require manual intervention
If a cluster contains a large amount of data (>500GiB / node), and all nodes are stopped and then started at the same time, clusters can enter a state where they're unable to start up without manual intervention. In this state, logs fill up rapidly with messages like refusing gossip from node x; forwarding to node y, and data and metrics may become inaccessible.
To exit this state, you should:
- Stop all nodes.
- Set the following environment variables: COCKROACH_SCAN_INTERVAL=60m and COCKROACH_SCAN_MIN_IDLE_TIME=1s.
- Restart the cluster.
Once restarted, monitor the Replica Quiescence graph on the Replication Dashboard. When >90% of the replicas have become quiescent, conduct a rolling restart and remove the environment variables. Make sure that under-replicated ranges do not increase between restarts.
Requests to restarted node in need of snapshots may hang
When a node is offline, the Raft logs for the ranges on the node get truncated. When the node comes back online, it therefore often needs Raft snapshots to get many of its ranges back up-to-date. While in this state, requests to a range will hang until its snapshot has been applied, which can take a long time.
To work around this limitation, you can adjust the kv.snapshot_recovery.max_rate cluster setting to temporarily relax the throughput rate limiting applied to snapshots. For example, changing the rate limiting from the default 8 MB/s, at which 1 GB of snapshots takes at least 2 minutes, to 64 MB/s can result in an 8x speedup in snapshot transfers and, therefore, a much shorter interruption of requests to an impacted node:
> SET CLUSTER SETTING kv.snapshot_recovery.max_rate = '64mb';
Before increasing this value, however, verify that you will not end up saturating your network interfaces, and once the problem has resolved, be sure to reset to the original value.
Location-based time zone names
Certain features of CockroachDB require time zone data, for example, to support using location-based names as time zone identifiers. When starting a CockroachDB node on a machine missing time zone data, the node will not start.
To resolve this issue on Linux, install the tzdata library (sometimes called tz or zoneinfo).
To resolve this issue on Windows, download Go's official zoneinfo.zip and set the ZONEINFO environment variable to point to the zip file. For step-by-step guidance on setting environment variables on Windows, see this external article.
Make sure to do this across all nodes in the cluster and to keep this time zone data up-to-date.
Change data capture
Change data capture (CDC) provides efficient, distributed, row-level change feeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing.
- Changefeeds only work on tables with a single column family (which is the default for new tables).
- Changefeeds do not share internal buffers, so each running changefeed will increase total memory usage. To watch multiple tables, we recommend creating a changefeed with a comma-separated list of tables (see the sketch after this list).
- Many DDL queries (including TRUNCATE and DROP TABLE) will cause errors on a changefeed watching the affected tables. You will need to start a new changefeed.
- Changefeeds cannot be backed up or restored.
- Partial or intermittent sink unavailability may impact changefeed stability; however, ordering guarantees will still hold for as long as a changefeed remains active.
- Changefeeds cannot be altered. To alter, cancel the changefeed and create a new one with updated settings from where it left off.
- Additional target options will be added, including partitions and ranges of primary key rows.
- Changefeeds do not pick up data ingested with the IMPORT INTO statement.
- Using a cloud storage sink only works with JSON and emits newline-delimited JSON files.
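For example, a sketch of a single changefeed watching multiple tables, as recommended above (the table names and Kafka sink URI are placeholders, and changefeeds to Kafka require an enterprise license):
> CREATE CHANGEFEED FOR TABLE orders, customers INTO 'kafka://localhost:9092';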
DB Console may become inaccessible for secure clusters
Accessing the DB Console for a secure cluster now requires login information (i.e., username and password). This login information is stored in a system table that is replicated like other data in the cluster. If a majority of the nodes with the replicas of the system table data go down, users will be locked out of the DB Console.
AS OF SYSTEM TIME in SELECT statements
AS OF SYSTEM TIME can only be used in a top-level SELECT statement. That is, we do not support statements like INSERT INTO t SELECT * FROM t2 AS OF SYSTEM TIME <time> or two subselects in the same statement with differing AS OF SYSTEM TIME arguments.
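For comparison, a minimal sketch of the supported form, where AS OF SYSTEM TIME appears in a top-level SELECT against a hypothetical table:
-- Supported: top-level SELECT
> SELECT * FROM t2 AS OF SYSTEM TIME '-10s';
-- Not supported: INSERT INTO t SELECT * FROM t2 AS OF SYSTEM TIME '-10s';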
Large index keys can impair performance
The use of tables with very large primary or secondary index keys (>32KB) can result in excessive memory usage. Specifically, if the primary or secondary index key is larger than 32KB the default indexing scheme for storage engine SSTables breaks down and causes the index to be excessively large. The index is pinned in memory by default for performance.
To work around this issue, we recommend limiting the size of primary and secondary keys to 4KB, which you must account for manually. Note that most columns are 8B (exceptions being STRING and JSON), which still allows for very complex key structures.
DB Console: Statements page latency reports
The Statements page does not correctly report "mean latency" or "latency by phase" for statements that result in schema changes or other background jobs.
Using LIKE...ESCAPE in WHERE and HAVING constraints
CockroachDB tries to optimize most comparison operators in WHERE and HAVING clauses into constraints on SQL indexes by only accessing selected rows. This is done for LIKE clauses when a common prefix for all selected rows can be determined in the search pattern (e.g., ... LIKE 'Joe%'). However, this optimization is not yet available if the ESCAPE keyword is also used.
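For example, assuming a hypothetical users table with an index on name, the first query can be constrained by its 'Joe' prefix, while the second cannot because it uses ESCAPE:
> SELECT * FROM users WHERE name LIKE 'Joe%';               -- prefix can constrain an index scan
> SELECT * FROM users WHERE name LIKE 'Joe\_%' ESCAPE '\';  -- ESCAPE disables this optimization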
Using SQLAlchemy with CockroachDB
Users of the SQLAlchemy adapter provided by Cockroach Labs must upgrade the adapter to the latest release before upgrading to CockroachDB v20.2.
DB Console: CPU percentage calculation
For multi-core systems, the user CPU percent can be greater than 100%. Full utilization of one core is considered as 100% CPU usage. If you have n cores, then the user CPU percent can range from 0% (indicating an idle system) to (n*100)% (indicating full utilization).
DB Console: CPU count in containerized environments
When CockroachDB is run in a containerized environment (e.g., Kubernetes), the DB Console does not detect CPU limits applied to a container. Instead, the UI displays the actual number of CPUs provisioned on a VM.
TRUNCATE does not behave like DELETE
TRUNCATE is not a DML statement, but instead works as a DDL statement. Its limitations are the same as other DDL statements, which are outlined in Online Schema Changes: Limitations.
Ordering tables by JSONB/JSON-typed columns
CockroachDB does not currently key-encode JSON values. As a result, tables cannot be ordered by JSONB/JSON-typed columns.
Current sequence value not checked when updating min/max value
Altering the minimum or maximum value of a sequence does not check the current value of the sequence. This means that it is possible to silently set the maximum to a value less than, or the minimum to a value greater than, the current value.
Using default_int_size session variable in batch of statements
When setting the default_int_size session variable in a batch of statements such as SET default_int_size='int4'; SELECT 1::INT, the default_int_size variable will not take effect until the next statement. This happens because statement parsing takes place asynchronously from statement execution.
As a workaround, set default_int_size via your database driver, or ensure that SET default_int_size is in its own statement.
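For example, a sketch of the workaround, issuing the SET as its own statement before the query that depends on it:
> SET default_int_size = 'int4';
> SELECT 1::INT;   -- now resolves to INT4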
COPY FROM statements are not supported in the CockroachDB SQL shell
The built-in SQL shell provided with CockroachDB (cockroach sql / cockroach demo) does not currently support importing data with the COPY statement.
To load data into CockroachDB, we recommend that you use an IMPORT. If you must use a COPY statement, you can issue the statement from the psql client command provided with PostgreSQL, or from another third-party client.
COPY syntax not supported by CockroachDB
CockroachDB does not yet support the following COPY syntax:
- COPY ... TO. To copy data from a CockroachDB cluster to a file, use an EXPORT statement.
- COPY ... FROM CSV
- COPY ... FROM STDIN with a delimiter other than the default tab delimiter.
- COPY ... FROM ... WHERE <expr>
Import with a high amount of disk contention
IMPORT can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the kv.bulk_io_write.max_rate cluster setting to a value below your max disk write speed. For example, to set it to 10MB/s, execute:
> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';
Placeholders in PARTITION BY
When defining a table partition, either during table creation or table alteration, it is not possible to use placeholders in the PARTITION BY clause.
Adding a column with sequence-based DEFAULT values
It is currently not possible to add a column to a table when the column uses a sequence as the DEFAULT value, for example:
> CREATE TABLE t (x INT);
> INSERT INTO t(x) VALUES (1), (2), (3);
> CREATE SEQUENCE s;
> ALTER TABLE t ADD COLUMN y INT DEFAULT nextval('s');
ERROR: nextval(): unimplemented: cannot evaluate scalar expressions containing sequence operations in this context
SQLSTATE: 0A000
Available capacity metric in the DB Console
If you are testing your deployment locally with multiple CockroachDB nodes running on a single machine (this is not recommended in production), you must explicitly set the store size per node in order to display the correct capacity. Otherwise, the machine's actual disk capacity will be counted as a separate store for each node, thus inflating the computed capacity.
Schema changes within transactions
Within a single transaction:
- DDL statements cannot be mixed with DML statements. As a workaround, you can split the statements into separate transactions (see the sketch after this section). For more details, see examples of unsupported statements.
- As of version v2.1, you can run schema changes inside the same transaction as a CREATE TABLE statement. For more information, see this example.
- A CREATE TABLE statement containing FOREIGN KEY or INTERLEAVE clauses cannot be followed by statements that reference the new table.
- Database, schema, table, and user-defined type names cannot be reused. For example, you cannot drop a table named a and then create (or rename) a different table with the name a. Similarly, you cannot rename a database named a to b and then create (or rename) a different database with the name a. As a workaround, split RENAME TO, DROP, and CREATE statements that reuse object names into separate transactions.
- Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed.
- As of v19.1, some schema changes can be used in combination in a single ALTER TABLE statement. For a list of commands that can be combined, see ALTER TABLE. For a demonstration, see Add and rename columns atomically.
If a schema change within a transaction fails, manual intervention may be needed to determine which has failed. After determining which schema change(s) failed, you can then retry the schema changes.
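As a minimal sketch of the workaround for mixing DDL and DML, assuming a hypothetical table t, run the schema change and the data change in separate transactions:
-- Schema change in its own transaction:
BEGIN;
CREATE TABLE t (x INT);
COMMIT;
-- Data change in a separate transaction:
BEGIN;
INSERT INTO t (x) VALUES (1);
COMMIT;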
Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed
Schema change DDL statements that run inside a multi-statement transaction with non-DDL statements can fail at COMMIT time, even if other statements in the transaction succeed. This leaves such transactions in a "partially committed, partially aborted" state that may require manual intervention to determine whether the DDL statements succeeded.
If such a failure occurs, CockroachDB will emit a new CockroachDB-specific error code, XXA00, and the following error message:
transaction committed but schema change aborted with error: <description of error>
HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
Manual inspection may be required to determine the actual state of the database.
This limitation exists in versions of CockroachDB prior to 19.2. In these older versions, CockroachDB returned the Postgres error code 40003, "statement completion unknown".
If you must execute schema change DDL statements inside a multi-statement transaction, we strongly recommend checking for this error code and handling it appropriately every time you execute such transactions.
This error will occur in various scenarios, including but not limited to:
- Creating a unique index fails because values aren't unique.
- The evaluation of a computed value fails.
- Adding a constraint (or a column with a constraint) fails because the constraint is violated for the default/computed values in the column.
To see an example of this error, start by creating the following table.
CREATE TABLE T(x INT);
INSERT INTO T(x) VALUES (1), (2), (3);
Then, enter the following multi-statement transaction, which will trigger the error.
BEGIN;
ALTER TABLE t ADD CONSTRAINT unique_x UNIQUE(x);
INSERT INTO T(x) VALUES (3);
COMMIT;
pq: transaction committed but schema change aborted with error: (23505): duplicate key value (x)=(3) violates unique constraint "unique_x"
HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
Manual inspection may be required to determine the actual state of the database.
In this example, the INSERT statement committed, but the ALTER TABLE statement adding a UNIQUE constraint failed. We can verify this by looking at the data in table t and seeing that the additional non-unique value 3 was successfully inserted.
SELECT * FROM t;
x
+---+
1
2
3
3
(4 rows)
Schema changes between executions of prepared statements
When the schema of a table targeted by a prepared statement changes before the prepared statement is executed, CockroachDB allows the prepared statement to return results based on the changed table schema, for example:
> CREATE TABLE users (id INT PRIMARY KEY);
> PREPARE prep1 AS SELECT * FROM users;
> ALTER TABLE users ADD COLUMN name STRING;
> INSERT INTO users VALUES (1, 'Max Roach');
> EXECUTE prep1;
id | name
-----+------------
1 | Max Roach
(1 row)
It's therefore recommended to not use SELECT * in queries that will be repeated, via prepared statements or otherwise.
Also, a prepared INSERT, UPSERT, or DELETE statement acts inconsistently when the schema of the table being written to is changed before the prepared statement is executed:
- If the number of columns has increased, the prepared statement returns an error but nonetheless writes the data.
- If the number of columns remains the same but the types have changed, the prepared statement writes the data and does not return an error.
INSERT ON CONFLICT vs. UPSERT
When inserting/updating all columns of a table, and the table has no secondary indexes, we recommend using an UPSERT statement instead of the equivalent INSERT ON CONFLICT statement. Whereas INSERT ON CONFLICT always performs a read to determine the necessary writes, the UPSERT statement writes without reading, making it faster.
This issue is particularly relevant when using a simple SQL table of two columns to simulate direct KV access. In this case, be sure to use the UPSERT statement.
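For example, assuming a hypothetical two-column table kv (k INT PRIMARY KEY, v INT) with no secondary indexes, the two statements below write the same row, but the UPSERT form avoids the extra read:
-- Blind write; preferred when writing all columns and there are no secondary indexes:
> UPSERT INTO kv (k, v) VALUES (1, 10);
-- Equivalent result, but performs a read first:
> INSERT INTO kv (k, v) VALUES (1, 10)
    ON CONFLICT (k) DO UPDATE SET v = excluded.v;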
Using \| to perform a large input in the SQL shell
In the built-in SQL shell, using the \| operator to perform a large number of inputs from a file can cause the server to close the connection. This is because \| sends the entire file as a single query to the server, which can exceed the upper bound on the size of a packet the server can accept from any client (16MB).
As a workaround, execute the file from the command line with cat data.sql | cockroach sql instead of from within the interactive shell.
New values generated by DEFAULT expressions during ALTER TABLE ADD COLUMN
When executing an ALTER TABLE ADD COLUMN statement with a DEFAULT expression, new values generated:
- use the default search path regardless of the search path configured in the current session via SET SEARCH_PATH.
- use the UTC time zone regardless of the time zone configured in the current session via SET TIME ZONE.
- have no default database regardless of the default database configured in the current session via SET DATABASE, so you must specify the database of any tables they reference.
- use the transaction timestamp for the statement_timestamp() function regardless of the time at which the ALTER statement was issued.
Load-based lease rebalancing in uneven latency deployments
When nodes are started with the --locality flag, CockroachDB attempts to place the replica lease holder (the replica that client requests are forwarded to) on the node closest to the source of the request. This means as client requests move geographically, so too does the replica lease holder.
However, you might see increased latency caused by a consistently high rate of lease transfers between datacenters in the following case:
- Your cluster runs in datacenters which are very different distances away from each other.
- Each node was started with a single tier of --locality, e.g., --locality=datacenter=a.
- Most client requests get sent to a single datacenter because that's where all your application traffic is.
To detect if this is happening, open the DB Console, select the Queues dashboard, hover over the Replication Queue graph, and check the Leases Transferred / second data point. If the value is consistently larger than 0, you should consider stopping and restarting each node with additional tiers of locality to improve request latency.
For example, let's say that latency is 10ms from nodes in datacenter A to nodes in datacenter B but is 100ms from nodes in datacenter A to nodes in datacenter C. To ensure A's and B's relative proximity is factored into lease holder rebalancing, you could restart the nodes in datacenters A and B with a common region, --locality=region=foo,datacenter=a and --locality=region=foo,datacenter=b, while restarting nodes in datacenter C with a different region, --locality=region=bar,datacenter=c.
Overload resolution for collated strings
Many string operations are not properly overloaded for collated strings, for example:
> SELECT 'string1' || 'string2';
?column?
------------------
string1string2
(1 row)
> SELECT ('string1' collate en) || ('string2' collate en);
pq: unsupported binary operator: <collatedstring{en}> || <collatedstring{en}>
Max size of a single column family
When creating or updating a row, if the combined size of all values in a single column family exceeds the max range size (512 MiB by default) for the table, the operation may fail, or cluster performance may suffer.
As a workaround, you can either manually split a table's columns into multiple column families, or you can create a table-specific zone configuration with an increased max range size.
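For example, sketches of both workarounds for a hypothetical table t with one large column: splitting columns into separate column families at creation time, and raising the table's max range size with a zone configuration (1 GiB shown):
-- Workaround 1: split columns across column families at table creation.
> CREATE TABLE t (
    id INT PRIMARY KEY,
    payload STRING,
    FAMILY small (id),
    FAMILY large (payload)
);
-- Workaround 2: increase the max range size for this table.
> ALTER TABLE t CONFIGURE ZONE USING range_max_bytes = 1073741824;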
Simultaneous client connections and running queries on a single node
When a node has both a high number of client connections and running queries, the node may crash due to memory exhaustion. This is due to CockroachDB not accurately limiting the number of clients and queries based on the amount of available RAM on the node.
To prevent memory exhaustion, monitor each node's memory usage and ensure there is some margin between maximum CockroachDB memory usage and available system RAM. For more details about memory usage in CockroachDB, see this blog post.
Query planning for OR expressions
Given a query like SELECT * FROM foo WHERE a > 1 OR b > 2, even if there are appropriate indexes to satisfy both a > 1 and b > 2, the query planner performs a full table or index scan because it cannot use both conditions at once.
Privileges for DELETE and UPDATE
Every DELETE or UPDATE statement constructs a SELECT statement, even when no WHERE clause is involved. As a result, the user executing DELETE or UPDATE requires both the DELETE and SELECT or UPDATE and SELECT privileges on the table.
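For example, a sketch of the grants needed for a hypothetical user app_user to run DELETE statements on a table t:
> GRANT SELECT, DELETE ON TABLE t TO app_user;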
Correlated common table expressions
CockroachDB does not support correlated common table expressions. This means that a CTE cannot refer to a variable defined outside the scope of that CTE.
For example, the following query returns an error:
> SELECT * FROM users
WHERE id =
(WITH rides_home AS
(SELECT revenue FROM rides
WHERE end_address = address)
SELECT rider_id FROM rides_home);
ERROR: CTEs may not be correlated
SQLSTATE: 0A000
This query returns an error because the WITH rides_home clause references a column (address) returned by the SELECT statement at the top level of the query, outside the rides_home CTE definition.
ROLLBACK TO SAVEPOINT in high-priority transactions containing DDL
Transactions with priority HIGH that contain DDL and ROLLBACK TO SAVEPOINT are not supported, as they could result in a deadlock. For example:
> BEGIN PRIORITY HIGH; SAVEPOINT s; CREATE TABLE t(x INT); ROLLBACK TO SAVEPOINT s;
ERROR: unimplemented: cannot use ROLLBACK TO SAVEPOINT in a HIGH PRIORITY transaction containing DDL
SQLSTATE: 0A000
HINT: You have attempted to use a feature that is not yet implemented.
See: https://github.com/cockroachdb/cockroach/issues/46414
Column name from an outer column inside a subquery differs from PostgreSQL
CockroachDB returns the column name from an outer column inside a subquery as ?column?, unlike PostgreSQL. For example:
> SELECT (SELECT t.*) FROM (VALUES (1)) t(x);
CockroachDB:
?column?
------------
1
PostgreSQL:
x
---
1
Concurrent SQL shells overwrite each other's history
The built-in SQL shell stores its command history in a single file by default (.cockroachsql_history). When running multiple instances of the SQL shell on the same machine, each shell's command history can therefore get overwritten in unexpected ways.
As a workaround, set the COCKROACH_SQL_CLI_HISTORY environment variable to different values for the two different shells, for example:
$ export COCKROACH_SQL_CLI_HISTORY=.cockroachsql_history_shell_1
$ export COCKROACH_SQL_CLI_HISTORY=.cockroachsql_history_shell_2
Passwords with special characters cannot be passed in connection parameter
CockroachDB does not allow passwords with special characters to be passed as a connection parameter to cockroach commands.
CockroachDB does not test for all connection failure scenarios
CockroachDB servers rely on the network to report when a TCP connection fails. In most scenarios when a connection fails, the network immediately reports a connection failure, resulting in a Connection refused error.
However, if there is no host at the target IP address, or if a firewall rule blocks traffic to the target address and port, a TCP handshake can linger while the client network stack waits for a TCP packet in response to network requests. To work around this kind of scenario, we recommend the following:
- When migrating a node to a new machine, keep the server listening at the previous IP address until the cluster has completed the migration.
- Configure any active network firewalls to allow node-to-node traffic.
- Verify that orchestration tools (e.g., Kubernetes) are configured to use the correct network connection information.
Some column-dropping schema changes do not roll back properly
Some schema changes that drop columns cannot be rolled back properly.
In some cases, the rollback will succeed, but the column data might be partially or totally missing, or stale due to the asynchronous nature of the schema change.
In other cases, the rollback will fail in such a way that will never be cleaned up properly, leaving the table descriptor in a state where no other schema changes can be run successfully.
To reduce the chance that a column drop will roll back incorrectly:
- Perform column drops in transactions separate from other schema changes. This ensures that other schema change failures won't cause the column drop to be rolled back.
- Drop all constraints (including unique indexes) on the column in a separate transaction, before dropping the column.
- Drop any default values or computed expressions on a column before attempting to drop the column. This prevents conflicts between constraints and default/computed values during a column drop rollback.
If you think a rollback of a column-dropping schema change has occurred, check the jobs table. Schema changes with an error prefaced by cannot be reverted, manual cleanup may be required might require manual intervention.
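For example, a sketch of one way to inspect schema-change jobs for that error (the filter values are illustrative):
> SELECT job_id, status, error
    FROM [SHOW JOBS]
   WHERE job_type = 'SCHEMA CHANGE'
     AND error LIKE '%cannot be reverted%';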
Disk-spilling on joins with JSON columns
If the execution of a join query exceeds the limit set for memory-buffering operations (i.e., the value set for the sql.distsql.temp_storage.workmem cluster setting), CockroachDB will spill the intermediate results of computation to disk. If the join operation spills to disk, and at least one of the equality columns is of type JSON, CockroachDB returns the error unable to encode table key: *tree.DJSON. If the memory limit is not reached, then the query will be processed without error.
Disk-spilling not supported for unordered aggregation operations
Unordered aggregation operations do not support disk spilling, and are limited by the --max-sql-memory setting. If unordered aggregation operations exceed the amount of memory available to the SQL layer, CockroachDB will throw an error, and in some circumstances could crash.
Setting --max-sql-memory too high could result in performance problems due to increased memory consumption.
See the GitHub tracking issue for details.
Inverted indexes cannot be partitioned
CockroachDB does not support partitioning inverted indexes, including spatial indexes.