Describe the bug
The COPY FROM command produces the following error on a csv file created with COPY TO.
<stdin>:1:Failed to import 30 rows: AttributeError - 'NoneType' object has no attribute 'is_up', given up after 1 attempts
To Reproduce
Steps to reproduce the behavior:
- Install cqlsh-expansion as described here: https://docs.aws.amazon.com/keyspaces/latest/devguide/programmatic.cqlsh.html
- Dump data to csv
cqlsh-expansion cassandra.us-west-2.amazonaws.com 9142 --ssl -e "COPY keyspace1.table1 TO './dump.csv' WITH HEADER='true';"
- Create new keyspace/table, matching the source table
CREATE KEYSPACE IF NOT EXISTS "keyspace2"
WITH REPLICATION = {'class':'SingleRegionStrategy'};
CREATE TABLE IF NOT EXISTS keyspace2.table2 (
col1 text,
col2 text,
col3 text,
created_at timestamp,
my_data blob,
PRIMARY KEY (col1, col2, col3)
) WITH CLUSTERING ORDER BY (col2 ASC, col3 ASC)
AND bloom_filter_fp_chance = 0.01
AND comment = ''
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 7776000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 3600000
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
- Import data
cqlsh-expansion cassandra.us-west-2.amazonaws.com 9142 --ssl -e "CONSISTENCY LOCAL_QUORUM; COPY keyspace2.table2 FROM './dump.csv' WITH HEADER='true';"
- Observe errors
Consistency level set to LOCAL_QUORUM.
cqlsh current consistency level is LOCAL_QUORUM.
Reading options from /home/ubuntu/.cassandra/cqlshrc:[copy]: {'numprocesses': '16', 'maxattempts': '1000'}
Reading options from /home/ubuntu/.cassandra/cqlshrc:[copy-from]: {'ingestrate': '1500', 'maxparseerrors': '1000', 'maxinserterrors': '-1', 'maxbatchsize': '10', 'minbatchsize': '1', 'chunksize': '30'}
Reading options from the command line: {'header': 'true'}
Using 16 child processes
Starting copy of keyspace2.table2 with columns [col1, col2, col3, created_at, my_data].
<stdin>:1:Failed to import 30 rows: Error - field larger than field limit (999999), given up after 1 attempts
<stdin>:1:Failed to import 30 rows: AttributeError - 'NoneType' object has no attribute 'is_up', given up after 1 attempts
<stdin>:1:Failed to import 30 rows: AttributeError - 'NoneType' object has no attribute 'is_up', given up after 1 attempts
<stdin>:1:Failed to import 30 rows: AttributeError - 'NoneType' object has no attribute 'is_up', given up after 1 attempts
<stdin>:1:Failed to import 30 rows: Error - field larger than field limit (999999), given up after 1 attempts
<stdin>:1:Failed to import 30 rows: Error - field larger than field limit (999999), given up after 1 attempts
<stdin>:1:Failed to import 30 rows: AttributeError - 'NoneType' object has no attribute 'is_up', given up after 1 attempts
<stdin>:1:Failed to import 30 rows: AttributeError - 'NoneType' object has no attribute 'is_up', given up after 1 attempts
...
Processed: 14105 rows; Rate: 212 rows/s; Avg. rate: 185 rows/s
0 rows imported from 1 files in 0 day, 0 hour, 1 minutes, and 16.140 seconds (0 skipped).
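For context on the first error above: COPY FROM parses the CSV with Python's csv module, which rejects any single field longer than csv.field_size_limit(). The following is a minimal, cqlsh-independent sketch of that failure; the 16-character cap is arbitrary for demonstration (the cap in my log appears to be configured as 999999).

```python
import csv
import io

# Python's csv reader raises csv.Error for any field longer than
# csv.field_size_limit(), which is the "field larger than field limit"
# message seen in the COPY FROM output above.
old_limit = csv.field_size_limit(16)    # set a tiny cap; returns the old one
row = "key," + "ab" * 20 + "\n"         # second field is 40 chars, over the cap
err = ""
try:
    list(csv.reader(io.StringIO(row)))  # error is raised while iterating
except csv.Error as exc:
    err = str(exc)
csv.field_size_limit(old_limit)         # restore the original limit
print(err)                              # field larger than field limit (16)
```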
Expected behavior
I expect the import to complete successfully without errors.
Screenshots
n/a
Environment (please complete the following information):
- Host OS: Ubuntu 22.04
- AWS Keyspaces
- cqlsh-expansion: 6.1.0
Additional context
I'm just trying to do a simple export/import.
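One plausible contributor to the field-limit errors (an assumption on my part, not confirmed against the cqlsh source): cqlsh renders blob columns such as my_data as 0x-prefixed hex strings in the CSV, so every blob roughly doubles in size on export, and any blob over about half the field limit can no longer be parsed on re-import. A quick sketch of that round-trip:

```python
# Sketch of the hex round-trip a blob value goes through in the CSV dump:
# a b-byte blob occupies 2*b + 2 CSV characters ("0x" prefix plus two hex
# digits per byte), which is how a modest blob can overrun the field limit.
data = bytes(range(64))               # a 64-byte stand-in for my_data
encoded = "0x" + data.hex()           # how the blob appears in dump.csv
assert len(encoded) == 2 * len(data) + 2
decoded = bytes.fromhex(encoded[2:])  # strip "0x", decode back to bytes
assert decoded == data
```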
(P.S. Apologies if this is the wrong repo to report 'cqlsh-expansion' bugs.)
jlewis-spotnana