SQLite Concurrent Access

If most of those concurrent accesses are reads (e.g. SELECT), SQLite can handle them very well. But if you start writing concurrently, lock contention could become an issue. A lot would then depend on how fast your filesystem is, since the SQLite engine itself is extremely fast and has many clever optimizations to minimize contention, especially in SQLite 3.

For most desktop/laptop/tablet/phone applications, SQLite is fast enough, as there isn't enough concurrency to matter. (Firefox uses SQLite extensively for bookmarks, history, etc.)

For server applications, somebody some time ago said that anything less than 100K page views a day could be handled perfectly by a SQLite database in typical scenarios (e.g. blogs, forums), and I have yet to see any evidence to the contrary. In fact, with modern disks and processors, 95% of web sites and web services would work just fine with SQLite.

If you want really fast read/write access, use an in-memory SQLite database. RAM is several orders of magnitude faster than disk.
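As a rough illustration using Python's built-in sqlite3 module (the table and column names here are made up for the example), an in-memory database is opened with the special ":memory:" filename:

```python
import sqlite3

# An in-memory database lives entirely in RAM and disappears when the
# connection closes -- no filesystem I/O, so reads and writes are fast.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO kv VALUES (?, ?)", ("greeting", "hello"))
row = conn.execute("SELECT value FROM kv WHERE key = ?", ("greeting",)).fetchone()
print(row[0])  # hello
conn.close()
```

The trade-off is durability: everything in an in-memory database is lost when the last connection to it closes.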


Yes, it does. Let's figure out why.

SQLite is transactional

All changes within a single transaction in SQLite either occur completely or not at all.
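A small sketch of this atomicity using Python's sqlite3 module (the accounts table is invented for the example; using the connection as a context manager commits on success and rolls back on an exception):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # transaction: commit on success, rollback on exception
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("simulated crash mid-transaction")
except RuntimeError:
    pass

# The partial UPDATE was rolled back -- alice still has her full balance.
balance = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(balance)  # 100
```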

Such ACID support, as well as concurrent reads/writes, is provided in two ways: using the so-called rollback journal (let's call it the "old way") or write-ahead logging (the "new way").

Journaling (Old Way)

In this mode SQLite uses DATABASE-LEVEL locking. This is the crucial point to understand.

That means whenever it needs to read or write something, it first acquires a lock on the ENTIRE database file. Multiple readers can coexist and read in parallel.

During a write it acquires an exclusive lock, ensuring no other process is reading or writing at the same time, and hence writes are safe.

This is why the documentation says SQLite implements serializable transactions.
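You can check which mode a database is using with the journal_mode pragma. A minimal sketch in Python (using a throwaway file, since file-backed databases default to the classic rollback journal):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# New file-backed databases default to the rollback-journal "delete" mode.
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # delete
conn.close()
```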

Troubles

Because it needs to lock the entire database every time, and everybody waits for the one process that is writing, concurrency suffers: concurrent writes and reads perform fairly poorly.

Rollbacks/outages

Before writing anything to the database file, SQLite first saves the chunk about to be changed in a temporary file (the rollback journal). If something crashes in the middle of writing to the database file, SQLite picks up this temporary file and reverts the changes from it.

Write-Ahead Logging or WAL (New Way)

In this mode all writes are appended to a separate file (the write-ahead log), and this file is periodically merged back into the original database (a "checkpoint"). When SQLite searches for something, it first checks this log file and, if nothing is found there, proceeds to the main database file.

As a result, readers don’t compete with writers and performance is much better compared to the Old Way.
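Switching a database to WAL is a one-line pragma. A sketch in Python (file path is a throwaway for the example; the mode is persistent, so future connections to the same file stay in WAL):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# The pragma returns the mode actually in effect.
new_mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(new_mode)  # wal

# After the first write, the write-ahead log appears as a side file
# next to the database (demo.db-wal, plus a demo.db-shm index).
conn.execute("CREATE TABLE t (x)")
conn.commit()
print(os.path.exists(path + "-wal"))  # True
conn.close()
```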

Caveats

SQLite heavily depends on the underlying filesystem's locking functionality, so it should be used with caution; more details here.

You're also likely to run into the "database is locked" error, especially in the journaled mode, so your app needs to be designed with this error in mind.
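A minimal sketch of provoking that error with Python's sqlite3 module (two connections to a throwaway file; isolation_level=None puts the writer in autocommit mode so we can issue an explicit BEGIN EXCLUSIVE):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Writer takes an exclusive transaction and holds it open.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (x)")
writer.execute("BEGIN EXCLUSIVE")
writer.execute("INSERT INTO t VALUES (1)")

# timeout=0 makes the second connection fail immediately instead of
# retrying, so we see the error right away.
reader = sqlite3.connect(path, timeout=0)
try:
    reader.execute("INSERT INTO t VALUES (2)")
    msg = "no error"
except sqlite3.OperationalError as e:
    msg = str(e)
finally:
    writer.rollback()

print(msg)  # database is locked
```

In practice you'd set a busy timeout instead (e.g. sqlite3.connect(path, timeout=5.0)), which makes SQLite retry for up to that many seconds before surfacing the error.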


Yes, SQLite handles concurrency well, but it isn't the best from a performance standpoint; as far as I can tell, there are no exceptions to that. The details are on SQLite's site: https://www.sqlite.org/lockingv3.html

This statement is of interest: "The pager module makes sure changes happen all at once, that either all changes occur or none of them do, that two or more processes do not try to access the database in incompatible ways at the same time"