Here is what I currently have:

import sqlite3

conn = sqlite3.connect(dbfile)
conn.text_factory = str  ## my current (failed) attempt to resolve this
cur = conn.cursor()
data = cur.execute("SELECT * FROM mytable")

f = open('output.csv', 'w')
print >> f, "Column1, Column2, Column3, Etc."
for row in data:
  print >> f, row
f.close()

It creates a CSV file with output that looks like this:

Column1, Column2, Column3, Etc.
(1, u'2011-05-05 23:42:29',298776684,1448052234,463564768,-1130996322, None, u'2011-05-06 04:44:41')

I don't want the rows to be wrapped in parentheses, or to have the quotes and the u prefix before strings. How do I get it to write the rows to CSV without all of this? Thanks.

4 Answers

What you're currently doing is printing out the Python string representation of each tuple, i.e. the return value of str(row). That includes the quotes, the u prefixes, the parentheses, and so on.

Instead, you want the data formatted properly for a CSV file. Well, try the csv module. It knows how to format things for CSV files, unsurprisingly enough.

import csv

with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Column 1', 'Column 2', ...])
    writer.writerows(data)

The newline='' is needed so the csv module controls its own line endings: without it, newlines embedded inside quoted fields can be mishandled, and \r\n endings get doubled on Windows.
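If you'd rather not hardcode the header row, the DB-API cursor exposes the column names via cursor.description (a 7-tuple per column, name first). A self-contained sketch; the in-memory database and demo rows here are invented stand-ins for the question's dbfile and mytable:

```python
import csv
import sqlite3

conn = sqlite3.connect(':memory:')  # stand-in for the question's dbfile
cur = conn.cursor()
cur.execute("CREATE TABLE mytable (id INTEGER, created TEXT)")  # demo table
cur.execute("INSERT INTO mytable VALUES (1, '2011-05-05 23:42:29')")

cur.execute("SELECT * FROM mytable")
with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    # cursor.description holds one 7-tuple per column; the first item is the name
    writer.writerow(col[0] for col in cur.description)
    # the cursor itself is an iterable of row tuples
    writer.writerows(cur)
```

This way the header always matches whatever columns the SELECT actually returned.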


4 Comments

@Dougal What is 'Column 1'? Are these the database columns or the CSV columns? Thanks.
@AdamAzam That's just literally printed out to the CSV file as the first (header) row. This snippet doesn't say anything about the database, just assumes it's gathered in a variable data in the order you want it.
writer.writerow(['Column 1', 'Column 2']) gives me error:TypeError: a bytes-like object is required, not 'str' . It is not the case when I open the file with w instead of wb. I commented out conn.text_factory = str in my code - I'm not sure whether it is relevant to the error I get.
Yes, sorry @JohnSmith, maybe this changed in the past 11 years or maybe I was wrong then – according to current docs it should be opened in text mode, not binary mode.
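The mode difference the comments describe (text mode in Python 3, binary mode in Python 2) can be handled in one version-portable sketch, reusing the question's file name:

```python
import csv
import sys

# Python 3's csv module wants a text-mode file opened with newline='';
# Python 2's wanted binary mode ('wb') instead.
if sys.version_info[0] >= 3:
    f = open('output.csv', 'w', newline='')
else:
    f = open('output.csv', 'wb')

writer = csv.writer(f)
writer.writerow(['Column 1', 'Column 2'])
f.close()
```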

My version works without issues in just a couple of lines.

import sqlite3

import pandas as pd

conn = sqlite3.connect(db_file, isolation_level=None,
                       detect_types=sqlite3.PARSE_COLNAMES)
db_df = pd.read_sql_query("SELECT * FROM error_log", conn)
db_df.to_csv('database.csv', index=False)

If you want a tab-separated file, change the .csv extension to .tsv and pass sep='\t' to to_csv.

3 Comments

This is the best answer. For modern versions of Pandas, it's best to use read_sql, so you don't need to worry about lower level details. See here for more info.
@Powers it's fine until you have so much data that you need to stream it to the csv.
If the table is too big, this may hang the system due to insufficient memory.
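On the memory concern: pandas' read_sql_query also accepts a chunksize argument that yields the result in pieces, or the stdlib can stream rows in batches with fetchmany. A dependency-free sketch; the error_log table name is from the answer, the demo data is invented:

```python
import csv
import sqlite3

conn = sqlite3.connect(':memory:')  # stand-in for db_file
cur = conn.cursor()
cur.execute("CREATE TABLE error_log (id INTEGER, msg TEXT)")  # demo table
cur.executemany("INSERT INTO error_log VALUES (?, ?)",
                ((i, 'msg %d' % i) for i in range(10000)))

cur.execute("SELECT * FROM error_log")
with open('database.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(col[0] for col in cur.description)
    while True:
        rows = cur.fetchmany(1000)  # pull one batch at a time
        if not rows:
            break
        writer.writerows(rows)
```

Only one batch of rows is ever held in memory, so this scales to tables that won't fit in a single DataFrame.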
Converting an sqlite database table to a csv file can also be done directly with the sqlite3 command-line tool:

>sqlite3 c:/sqlite/chinook.db
sqlite> .headers on
sqlite> .mode csv
sqlite> .output data.csv
sqlite> SELECT customerid,
   ...>        firstname,
   ...>        lastname,
   ...>        company
   ...>   FROM customers;
sqlite> .quit

The above sqlite3 commands will create a csv file called data.csv in your current directory (of course the file can be named whatever you choose). More details are available here: http://www.sqlitetutorial.net/sqlite-export-csv/
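The same export works non-interactively too: the CLI's -header and -csv flags correspond to .headers on and .mode csv, so the whole thing fits in one command. A sketch using a throwaway demo database (demo.db and the sample rows are invented; substitute your own database and query):

```shell
# Build a throwaway demo database standing in for chinook.db
rm -f demo.db
sqlite3 demo.db "CREATE TABLE customers (customerid INTEGER, firstname TEXT, lastname TEXT, company TEXT);
                 INSERT INTO customers VALUES (1, 'Ann', 'Lee', 'Acme');"

# Non-interactive export: -header and -csv replace the dot-commands above
sqlite3 -header -csv demo.db "SELECT customerid, firstname, lastname, company FROM customers;" > data.csv
```

This is handy in scripts and cron jobs, where an interactive sqlite3 session isn't practical.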

1 Comment

This is the best answer for me; thank you, it helped me a lot!
I used a silly but easy approach, if only one table exists!

First I converted the database table to a pandas DataFrame:

import sqlite3

import pandas as pd

conn = sqlite3.connect(database_file)
df = pd.read_sql_query("select * from TABLE", conn)

Then convert df to a CSV file

df.to_csv(r'...\test_file.csv')

This creates a CSV file named test_file.csv.
