Accelerate bulk insert using Django's ORM?

Django, Optimization, ORM, Bulk Insert

Django Problem Overview


I'm planning to upload a billion records taken from ~750 files (each ~250MB) to a database using Django's ORM. Currently each file takes ~20 minutes to process, and I was wondering if there is any way to accelerate this process.

I've taken the following measures:

What else can I do to speed things up? Here are some of my thoughts:

Any pointers regarding these items or any other idea would be welcome :)

Django Solutions


Solution 1 - Django

Solution 2 - Django

This is not specific to the Django ORM, but recently I had to bulk insert more than 60 million rows of 8 columns of data from over 2000 files into a sqlite3 database, and I learned that the following three things reduced the insert time from over 48 hours to about 1 hour:

  1. Increase the cache size setting of your DB so it uses more RAM (the default is usually very small; I used 3GB); in SQLite, this is done with PRAGMA cache_size = n_of_pages;

  2. Do journalling in RAM instead of on disk (this does cause a slight problem if the system fails, but I consider that negligible given that you already have the source data on disk); in SQLite this is done with PRAGMA journal_mode = MEMORY;

  3. Last and perhaps most important: do not build indexes while inserting. This also means not declaring UNIQUE or other constraints that would cause the DB to build an index. Build indexes only after you are done inserting.
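Putting the three tips together, a minimal sketch using Python's standard sqlite3 module might look like this (the database path, table and column names are only illustrative, not from the original answer):

import sqlite3

conn = sqlite3.connect('bulk.db')
cursor = conn.cursor()

# 1. Larger page cache: a positive value is a number of pages, a negative value is KiB.
cursor.execute('PRAGMA cache_size = -3000000')   # roughly 3GB of cache
# 2. Keep the rollback journal in RAM instead of on disk.
cursor.execute('PRAGMA journal_mode = MEMORY')

# 3. Create the table without UNIQUE constraints or indexes ...
cursor.execute('CREATE TABLE IF NOT EXISTS mytable (field1, field2, field3)')

# ... perform all the inserts here ...

# ... and only build indexes once the inserts are finished.
cursor.execute('CREATE INDEX IF NOT EXISTS idx_field1 ON mytable (field1)')
conn.commit()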

As someone mentioned previously, you should also use cursor.executemany() (or just the shortcut conn.executemany()). To use it, do:

cursor.executemany('INSERT INTO mytable (field1, field2, field3) VALUES (?, ?, ?)', iterable_data)

The iterable_data can be a list or something similar, or even an open file reader.
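For example, assuming the source data sits in tab-separated text files with one row per line (the file name and delimiter here are assumptions), a small generator can feed executemany() without loading everything into memory:

def rows_from_file(path):
    """Yield one (field1, field2, field3) tuple per line of a tab-separated file."""
    with open(path) as f:
        for line in f:
            yield tuple(line.rstrip('\n').split('\t'))

cursor.executemany(
    'INSERT INTO mytable (field1, field2, field3) VALUES (?, ?, ?)',
    rows_from_file('data.tsv'),
)
conn.commit()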

Solution 3 - Django

Solution 4 - Django

I ran some tests on Django 1.10 / Postgresql 9.4 / Pandas 0.19.0 and got the following timings:

  • Insert 3000 rows individually and get ids from populated objects using Django ORM: 3200ms
  • Insert 3000 rows with Pandas DataFrame.to_sql() and don't get IDs: 774ms
  • Insert 3000 rows with Django manager .bulk_create(Model(**df.to_records())) and don't get IDs: 574ms
  • Insert 3000 rows with to_csv to StringIO buffer and COPY (cur.copy_from()) and don't get IDs: 118ms
  • Insert 3000 rows with to_csv and COPY and get IDs via simple SELECT WHERE ID > [max ID before insert] (probably not threadsafe unless COPY holds a lock on the table preventing simultaneous inserts?): 201ms

from io import StringIO


def bulk_to_sql(df, columns, model_cls):
    """ Inserting 3000 takes 774ms avg """
    # _get_sqlalchemy_engine() is the author's own helper; it returns a SQLAlchemy engine.
    engine = ExcelImportProcessor._get_sqlalchemy_engine()
    df[columns].to_sql(model_cls._meta.db_table, con=engine, if_exists='append', index=False)


def bulk_via_csv(df, columns, model_cls):
    """ Inserting 3000 takes 118ms avg """
    engine = ExcelImportProcessor._get_sqlalchemy_engine()
    connection = engine.raw_connection()
    cursor = connection.cursor()
    # Write the frame as tab-separated text into an in-memory buffer, then COPY it in.
    output = StringIO()
    df[columns].to_csv(output, sep='\t', header=False, index=False)
    output.seek(0)
    cursor.copy_from(output, model_cls._meta.db_table, null="", columns=columns)
    connection.commit()
    cursor.close()

The performance stats were all obtained on a table already containing 3,000 rows running on OS X (i7 SSD 16GB), average of ten runs using timeit.
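For completeness, the .bulk_create() variant from the timing list above is not shown in code; a rough sketch of it (the batch_size value is an assumption) could look like:

def bulk_via_orm(df, columns, model_cls):
    """ Roughly the .bulk_create() variant timed above (574ms avg for 3000 rows) """
    objs = [model_cls(**row) for row in df[columns].to_dict('records')]
    model_cls.objects.bulk_create(objs, batch_size=1000)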

I get my inserted primary keys back by assigning an import batch id and sorting by primary key, although I'm not 100% certain primary keys will always be assigned in the order the rows are serialized for the COPY command - would appreciate opinions either way.
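As a sketch of that batch-id approach (MyModel is a placeholder, and it assumes the model has an import_batch column that is included in the COPY columns):

import uuid

batch_id = str(uuid.uuid4())
df['import_batch'] = batch_id
bulk_via_csv(df, columns + ['import_batch'], MyModel)

inserted_pks = list(
    MyModel.objects.filter(import_batch=batch_id)
    .order_by('pk')
    .values_list('pk', flat=True)
)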

Update 2020:

I tested the new to_sql(method="multi") functionality in Pandas >= 0.24, which puts all inserts into a single multi-row INSERT statement. Surprisingly, performance was worse than the single-row version, whether for Pandas version 0.23, 0.24 or 1.1. Pandas single-row inserts were also faster than a multi-row insert statement issued directly to the database. I am using more complex data in a bigger database this time, but to_csv with cursor.copy_from was still around 38% faster than the fastest alternative, which was single-row df.to_sql, and bulk_import was occasionally comparable but often slower still (up to double the time, Django 2.2).
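For reference, the multi-row variant tested above is just extra arguments to to_sql (the chunksize value here is an assumption to keep individual statements a manageable size):

df[columns].to_sql(model_cls._meta.db_table, con=engine, if_exists='append',
                   index=False, method='multi', chunksize=1000)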

Solution 5 - Django

There is also a bulk insert snippet at http://djangosnippets.org/snippets/446/.

This lets one INSERT command carry multiple value tuples (INSERT INTO x (val1, val2) VALUES (1,2), (3,4), etc.). This should greatly improve performance.

It also appears to be heavily documented, which is always a plus.
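The snippet itself is not reproduced here, but a minimal sketch of the same multi-value INSERT idea using Django's database connection (the table and column names are illustrative) could look like this:

from django.db import connection

def multi_row_insert(table, columns, rows):
    """ Build one INSERT ... VALUES (...), (...), ... statement covering many rows """
    row_placeholder = '(' + ', '.join(['%s'] * len(columns)) + ')'
    sql = 'INSERT INTO {} ({}) VALUES {}'.format(
        table, ', '.join(columns), ', '.join([row_placeholder] * len(rows))
    )
    params = [value for row in rows for value in row]
    with connection.cursor() as cursor:
        cursor.execute(sql, params)

# e.g. multi_row_insert('myapp_x', ['val1', 'val2'], [(1, 2), (3, 4)])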

Solution 6 - Django

Also, if you want something quick and simple, you could try this: http://djangosnippets.org/snippets/2362/. It's a simple manager I used on a project.

The other snippet wasn't as simple and was really focused on bulk inserts for relationships. This is just a plain bulk insert and just uses the same INSERT query.

Solution 7 - Django

Attributions

All content on this page is sourced from the original question and its answers on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type        | Original Author         | Original Content on Stackoverflow
Question            | Jonathan Livni          | View Question on Stackoverflow
Solution 1 - Django | Gary                    | View Answer on Stackoverflow
Solution 2 - Django | Yanshuai Cao            | View Answer on Stackoverflow
Solution 3 - Django | Ignacio Vazquez-Abrams  | View Answer on Stackoverflow
Solution 4 - Django | Chris                   | View Answer on Stackoverflow
Solution 5 - Django | Seaux                   | View Answer on Stackoverflow
Solution 6 - Django | Seaux                   | View Answer on Stackoverflow
Solution 7 - Django | Ilia Novoselov          | View Answer on Stackoverflow