The .import command in SQLite doesn't always parse CSV. How it reads a file depends on the shell's current input mode: in csv mode it runs a real, quote-aware CSV parser, but in other modes it simply splits each line on the separator, which is why it often trips people up when dealing with actual CSV nuances.
Let’s see it in action. First, create a dummy CSV file named users.csv:
id,name,email
1,"Alice Smith","alice.smith@example.com"
2,"Bob Johnson","bob.j@example.com"
3,"Charlie <charlie@example.com>","charlie@example.com"
Now, let’s try to import this into a new SQLite table named users:
sqlite> CREATE TABLE users (id INTEGER, name TEXT, email TEXT);
sqlite> .mode csv
sqlite> .import users.csv users
sqlite> SELECT COUNT(*) FROM users;
4
See the problem? Four rows, not three. The first line of our CSV, id,name,email, was interpreted by .import as data, not headers. It inserted id into the id column, name into the name column, and email into the email column. And because SQLite's type affinity is flexible, the text value id went into the INTEGER id column without complaint: no error, just a bogus extra row.
The .mode csv command does double duty: it controls how query results are printed, and it also switches .import into quote-aware CSV parsing. What it does not do is tell .import to skip a header row. When the target table already exists, .import treats every line, including the first, as data. (Only when the table doesn't exist yet does .import use the first row as column names and create the table for you.)
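You can poke at what happens to a header value with Python's standard sqlite3 module, which embeds the same engine. A minimal sketch, using the same table definition as above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")

# SQLite's flexible type affinity: a string that can't be converted to a
# number is stored as TEXT even in an INTEGER column, so a header value
# inserts cleanly instead of raising an error.
conn.execute("INSERT INTO users VALUES ('id', 'name', 'email')")
print(conn.execute("SELECT typeof(id), id FROM users").fetchone())
# ('text', 'id')
```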
To fix this, you need to tell .import to skip the header row. This isn't done with .mode but with the --skip option, available since SQLite 3.32:
sqlite> DROP TABLE IF EXISTS users;
sqlite> CREATE TABLE users (id INTEGER, name TEXT, email TEXT);
sqlite> .mode csv
sqlite> .import --skip 1 users.csv users
sqlite> .mode list
sqlite> SELECT * FROM users;
1|Alice Smith|alice.smith@example.com
2|Bob Johnson|bob.j@example.com
3|Charlie <charlie@example.com>|charlie@example.com
Notice the --skip 1 argument. This tells .import to ignore the first line of the users.csv file, effectively treating it as a header. Now the data is imported correctly.
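The same header-skipping import can be scripted. Here's a minimal sketch using Python's standard csv and sqlite3 modules, with the sample data inlined so it runs standalone (in practice you would open() the real file):

```python
import csv
import io
import sqlite3

# users.csv from above, inlined for a self-contained example
data = io.StringIO(
    'id,name,email\n'
    '1,"Alice Smith","alice.smith@example.com"\n'
    '2,"Bob Johnson","bob.j@example.com"\n'
    '3,"Charlie <charlie@example.com>","charlie@example.com"\n'
)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")

rows = csv.reader(data)
next(rows)  # skip the header line, the script-side equivalent of --skip 1
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 3
```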
But what if your CSV has quoting or embedded commas?
Let’s create products.csv:
product_id,description,price
101,"Gadget, Deluxe","19.99"
102,"Widget (Basic)","5.50"
103,"Super Gizmo, Extra","29.95"
Let's see what happens without CSV parsing. Switch to plain list mode, set the separator to a comma, and import with --skip 1:
sqlite> CREATE TABLE products (product_id INTEGER, description TEXT, price REAL);
sqlite> .mode list
sqlite> .separator ","
sqlite> .import --skip 1 products.csv products
products.csv:2: expected 3 columns but found 4 - extras ignored
products.csv:4: expected 3 columns but found 4 - extras ignored
This is wrong in two ways: the comma inside "Gadget, Deluxe" and "Super Gizmo, Extra" was misinterpreted as a delimiter (so those rows split into four fields and the surplus field was dropped), and the literal quote characters ended up stored as part of the values. It's crucial to understand that in list and ascii modes, .import applies no quoting rules at all; only csv mode engages a quote-aware parser.
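The two parsing strategies are easy to compare with Python's standard csv module. A sketch, not how the shell is implemented, but the same logic:

```python
import csv

line = '101,"Gadget, Deluxe","19.99"'

# Naive split on the separator, with no quote awareness:
print(line.split(","))
# ['101', '"Gadget', ' Deluxe"', '"19.99"']  four fields, quotes kept

# Quote-aware CSV parsing of the same line:
print(next(csv.reader([line])))
# ['101', 'Gadget, Deluxe', '19.99']  three fields, quotes stripped
```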
To handle quoted fields correctly, import in CSV mode: either run .mode csv first, or pass the --csv option to .import itself. In CSV mode, .import respects quoting rules, so a comma inside double quotes is part of the field.
sqlite> DROP TABLE IF EXISTS products;
sqlite> CREATE TABLE products (product_id INTEGER, description TEXT, price REAL);
sqlite> .import --csv --skip 1 products.csv products
sqlite> .mode list
sqlite> SELECT * FROM products;
101|Gadget, Deluxe|19.99
102|Widget (Basic)|5.5
103|Super Gizmo, Extra|29.95
Now it works. In CSV mode, when .import encounters a comma within double quotes, it understands that it's part of the field's content, not a separator. (Notice, too, that 5.50 came back as 5.5: the price column's REAL affinity converted the imported text into a number.) This is the key to handling standard CSV files with .import.
If your file uses a different delimiter, like a semicolon, set it with .separator after entering csv mode; the quote-aware parser then splits on that character instead:
sqlite> .mode csv
sqlite> .separator ";"
sqlite> .import --skip 1 my_semicolon_file.csv my_table
For tab-delimited (TSV) files there is a dedicated mode:
sqlite> .mode tabs
sqlite> .import --skip 1 my_tsv_file.tsv my_table
(Tabs mode, like list mode, does no quote processing, which is rarely a problem because TSV files conventionally don't quote fields.)
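The scripted equivalents for alternate delimiters, again sketched with Python's csv module, where delimiter is the corresponding parameter:

```python
import csv

# Semicolon-delimited, with quote handling still active
semi = '101;"Gadget; Deluxe";19.99'
print(next(csv.reader([semi], delimiter=";")))
# ['101', 'Gadget; Deluxe', '19.99']

# Tab-delimited: embedded commas need no quoting at all
tsv = "101\tGadget, Deluxe\t19.99"
print(next(csv.reader([tsv], delimiter="\t")))
# ['101', 'Gadget, Deluxe', '19.99']
```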
On the output side, .mode csv formats query results according to CSV rules. Any field containing the separator, a double quote, or a newline is wrapped in double quotes (the shell is conservative and also quotes fields containing spaces), and rows are terminated with \r\n as RFC 4180 expects. Turn on .headers to emit a header row:
sqlite> .mode csv
sqlite> .headers on
sqlite> SELECT * FROM users;
id,name,email
1,"Alice Smith",alice.smith@example.com
2,"Bob Johnson",bob.j@example.com
3,"Charlie <charlie@example.com>",charlie@example.com
Notice how the name fields are quoted (they contain spaces) even though they hold no commas, while the ids and the email addresses are not quoted at all. Any field that actually contained a comma would definitely be quoted, so the output can be round-tripped back through .import safely.
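Python's csv.writer follows the same minimal-quoting idea, though its default QUOTE_MINIMAL dialect is less aggressive than the shell: it quotes only fields containing the delimiter, a quote character, or a line break, not fields that merely contain spaces. A quick sketch:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)  # QUOTE_MINIMAL by default, CRLF row endings
writer.writerow([101, "Gadget, Deluxe", 19.99])
writer.writerow([102, "Widget (Basic)", 5.5])
print(buf.getvalue())
# 101,"Gadget, Deluxe",19.99
# 102,Widget (Basic),5.5
```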
And if your file has no header row at all, no option is needed: --skip defaults to 0, so simply omit it and every line is imported as data.
The most surprising thing about .import is how much it depends on ambient shell state: the current .mode decides whether quotes are honored, .separator decides where fields split, and --skip decides whether the header row sneaks in as data. None of that is visible in the .import line itself.
Once you've successfully imported a CSV using csv mode and --skip 1, the next hurdle is often character encoding (the shell expects UTF-8 input), or sheer file size, where import time and memory use start to matter and batching the work becomes worthwhile.
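For the large-file case, one common scripted pattern is to stream the file and commit in batches rather than holding everything in memory. A hypothetical sketch: the import_in_batches name and its parameters are my own, not part of any library:

```python
import csv
import itertools
import sqlite3

def import_in_batches(conn, path, table, ncols, batch_size=10_000):
    """Stream a CSV into `table` without reading the whole file into memory."""
    sql = "INSERT INTO {} VALUES ({})".format(table, ",".join("?" * ncols))
    with open(path, newline="", encoding="utf-8") as f:
        rows = csv.reader(f)
        next(rows)  # skip the header row
        while True:
            # Only one chunk of rows is ever held in memory at a time
            chunk = list(itertools.islice(rows, batch_size))
            if not chunk:
                break
            with conn:  # commit each batch as its own transaction
                conn.executemany(sql, chunk)
```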