What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

Solution 1:

Here's how you can disable/enable indexes in SQL Server:

--Disable Index
ALTER INDEX [IX_Users_UserID] ON SalesDB.Users DISABLE
GO
--Enable Index
ALTER INDEX [IX_Users_UserID] ON SalesDB.Users REBUILD
GO

Here are some resources to help you find a solution:

Some bulk loading speed comparisons

Use SqlBulkCopy to Quickly Load Data from your Client to SQL Server

Optimizing Bulk Copy Performance

Definitely look into NOCHECK and TABLOCK options:

Table Hints (Transact-SQL)

INSERT (Transact-SQL)
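
As a sketch of what the TABLOCK hint looks like on an insert (the table and column names here are made up for illustration), note that under the simple or bulk-logged recovery model a TABLOCK hint is one of the conditions for minimally logged inserts:

```sql
-- Hypothetical tables; WITH (TABLOCK) takes a table-level lock,
-- which (with the right recovery model) enables minimal logging.
INSERT INTO SalesDB.dbo.Users WITH (TABLOCK) (UserID, UserName)
SELECT UserID, UserName
FROM StagingDB.dbo.UsersStaging;
```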

Solution 2:

You're already using SqlBulkCopy, which is a good start.

However, just using the SqlBulkCopy class does not necessarily mean that SQL will perform a bulk copy. In particular, there are a few requirements that must be met for SQL Server to perform an efficient bulk insert.

Further reading:

  • Prerequisites for Minimal Logging in Bulk Import
  • Optimizing Bulk Import Performance
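
For example, passing SqlBulkCopyOptions.TableLock is one of the usual prerequisites for a minimally logged load. A minimal sketch (the connection string, table, and column names are placeholders, and I'm guessing at your column types from the index you described):

```csharp
using System.Data;
using System.Data.SqlClient;

// Build a DataTable matching the destination schema (hypothetical names).
var table = new DataTable();
table.Columns.Add("ContainerId", typeof(int));
table.Columns.Add("BinId", typeof(int));
table.Columns.Add("Sequence", typeof(int));
table.Rows.Add(1, 1, 1);

// TableLock is what lets SQL Server take a bulk update lock and,
// under the simple/bulk-logged recovery model, minimally log the load.
using (var bulk = new SqlBulkCopy(
    "Data Source=.;Initial Catalog=SalesDB;Integrated Security=true",
    SqlBulkCopyOptions.TableLock))
{
    bulk.DestinationTableName = "dbo.Users";
    bulk.BatchSize = 10000;   // rows per batch; 0 means one batch for everything
    bulk.BulkCopyTimeout = 0; // disable the timeout for large loads
    bulk.WriteToServer(table);
}
```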

Out of curiosity, why is your index set up like that? It seems like ContainerId/BinId/Sequence is much better suited to be a nonclustered index. Is there a particular reason you wanted this index to be clustered?

Solution 3:

My guess is that you'll see a dramatic improvement if you change that index to be nonclustered. This leaves you with two options:

  1. Change the index to nonclustered, and leave it as a heap table, without a clustered index
  2. Change the index to nonclustered, but then add a surrogate key (like "id") and make it an identity, primary key, and clustered index

Either one will speed up your inserts without noticeably slowing down your reads.

Think about it this way -- right now, you're telling SQL to do a bulk insert, but then you're asking SQL to reorder the entire table every time you add anything. With a nonclustered index, you'll add the records in whatever order they come in, and then build a separate index indicating their desired order.
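
Option 2 might be sketched like this in T-SQL (index, table, and column names are made up; adjust to your actual schema):

```sql
-- Hypothetical schema: drop the existing clustered index, add a
-- surrogate identity key as the clustered primary key, then recreate
-- the original three columns as a nonclustered index.
DROP INDEX [IX_Main] ON dbo.MyTable;

ALTER TABLE dbo.MyTable
    ADD Id INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED;

CREATE NONCLUSTERED INDEX [IX_Main]
    ON dbo.MyTable (ContainerId, BinId, Sequence);
```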

Solution 4:

Have you tried using transactions?

From what you describe (the server committing to disk 100% of the time), it seems you are sending each row of data in its own atomic SQL statement, thus forcing the server to commit (write to disk) every single row.

If you used transactions instead, the server would only commit once at the end of the transaction.
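
A minimal sketch of what that looks like (connection string, table, and column names are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

// Placeholder connection string and schema; the point is one commit
// (one log flush) for many rows instead of one commit per row.
using (var conn = new SqlConnection(
    "Data Source=.;Initial Catalog=SalesDB;Integrated Security=true"))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    using (var cmd = new SqlCommand(
        "INSERT INTO dbo.Users (ContainerId, BinId, Sequence) VALUES (@c, @b, @s)",
        conn, tx))
    {
        cmd.Parameters.Add("@c", SqlDbType.Int);
        cmd.Parameters.Add("@b", SqlDbType.Int);
        cmd.Parameters.Add("@s", SqlDbType.Int);

        for (int i = 0; i < 10000; i++)
        {
            cmd.Parameters["@c"].Value = 1;
            cmd.Parameters["@b"].Value = 1;
            cmd.Parameters["@s"].Value = i;
            cmd.ExecuteNonQuery();
        }

        tx.Commit(); // single commit for all 10,000 rows
    }
}
```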

For further help: What method are you using for inserting data to the server? Updating a DataTable using a DataAdapter, or executing each sentence using a string?

Solution 5:

BCP - it's a pain to set up, but it's been around since the dawn of databases and it's very, very quick.

Unless you're inserting the data in index order, that three-part index will really slow things down. Applying the index afterwards will also be slow, but at least it happens as a separate second step.

Compound keys in SQL Server are always quite slow: the bigger the key, the slower.
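
A typical invocation looks something like this (the server, database, table, and file names are placeholders):

```shell
# Load users.dat into SalesDB.dbo.Users:
#   -T trusted (Windows) auth, -n native data format,
#   -b rows per committed batch, -h "TABLOCK" for a table lock.
bcp SalesDB.dbo.Users in users.dat -S myserver -T -n -b 10000 -h "TABLOCK"
```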