
SQLite notes

Some personal notes on SQLite.

Notes, links, snippets, thoughts, and more on sqlite.


Extensions (https://www.sqlite.org/loadext.html) are the catch-all way of customizing sqlite. There’s a guy named Ricardo who wrote about how to use Rust for extensions: https://ricardoanderegg.com/posts/extending-sqlite-with-rust/. Another person on StackOverflow outlines how you can build a custom version of sqlite that contains the extensions: https://stackoverflow.com/questions/30898113/how-to-compile-an-extension-into-sqlite. Some useful extensions that others have created are collected here: https://antonz.org/sqlean/.
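
For example, once you have a compiled extension you can load it from SQL. The path here is made up, and the SQL-level load_extension() is disabled by default, so it has to be switched on through the C API or your driver first.

SELECT load_extension('./stats');
-- in the sqlite3 shell the equivalent is: .load ./stats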

When writing custom functions that need access to more data than a simple argument, you can use the auxiliary data API: https://www.sqlite.org/c3ref/get_auxdata.html. For example, if you’re using a regex in a function, you don’t want to compile it on every invocation, so you compile it once and store it using the auxiliary data setter/getter. You could also probably use information from the argument to look up values on your own; for example, if the regex has the same hash or checksum as the last call, reuse the compiled version you stashed in a thread-local.
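
A sketch of why that matters, with a hypothetical user-defined regexp() function and a made-up logs table: the pattern only gets compiled once per statement instead of once per row.

-- regexp() and logs are not built in; the idea is that the compiled form of
-- 'ERROR [0-9]+' gets stored with sqlite3_set_auxdata() on the first row and
-- fetched with sqlite3_get_auxdata() on every later row
SELECT count(*) FROM logs WHERE regexp('ERROR [0-9]+', message);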

You can also write plain extensions, compile them to shared libraries (dylib/so files), then add them to a custom compilation plan to create your own build of sqlite. But generally this is done for custom command-line versions of sqlite, rather than for embedding.


Here are some performance things I’ve read about, observed, and experimented with. These are rough numbers. Some were just run locally on an M1 MacBook Air. I didn’t run them at the same time with the same code or schema; they’re just a collection of little benchmarks I ran between 2021 and 2023. So, you know, grain of salt, your mileage may vary, etc. If anything, they’re a good indicator of the performance you’ll get out of the box with a naive IO pattern.

If you’re reading off a single table with a single PK and a few other columns, the throughput is about 204 ops/ms, 204_000 ops/s. For comparison, an in-memory sequential read of a slice of f64 values is close to 1B records in 24_000ms, 41_666 ops/ms, 41_666_000 ops/s. Another comparison: reading a single-column CSV of f64 values off a modern SSD is somewhere around 17_000 ops/ms, 17_000_000 ops/s.

Single-threaded inserts to one table with a rowid PK and small-ish columns, batched 1000 records at a time, can be done as fast as 864_900 records in 16_200ms, 53 ops/ms, 53_000 ops/s.

Multithreaded inserts (4x, journal_mode=WAL) for the same data can be done about 20-30% faster than that.

The fastest way to do inserts seems to be tuning PRAGMAs, using batches and prepared statements, and a single writer that wraps batches in single transactions. On the sqlite forum, people report that with a single thread and a single table they can insert 100_000_000 records in 33_000ms, 3_030 ops/ms, 3_030_000 ops/s. I’ve been able to match and exceed this in my own benchmarks.
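
The rough shape of that setup looks something like this. The table and values are made up, and these are just the pragmas people usually reach for, not a definitive list.

PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;

BEGIN;
-- in real code this INSERT is a prepared statement, re-bound and re-stepped
-- in batches of ~1000 rows, all inside one transaction
INSERT INTO samples (ts, value) VALUES (1, 0.5), (2, 0.7), (3, 0.9);
COMMIT;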

Some good stuff on the internet about pushing the limits:

On the whole, my tests were not optimized and were pretty rough. Doing these things can get massive results:


The API for sqlite is really solid. If you don’t want to have to write queries and bind variables, you can access the lower-level functions listed at https://www.sqlite.org/c3ref/funclist.html.


Temporary tables are specific to a connection/database. They’re stored on disk, in a separate file, and are dropped when the connection is terminated. This blog post explains it a little: https://antonz.org/temp-tables/, and the docs have more detail: https://sqlite.org/tempfiles.html.

…they are not the same thing as :memory: dbs.
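
A quick sketch with made-up table names; the temp schema is just another database attached to the connection.

CREATE TEMP TABLE staging (id INTEGER PRIMARY KEY, payload TEXT);
-- same thing, spelled with the schema name
CREATE TABLE temp.staging2 (id INTEGER PRIMARY KEY, payload TEXT);
-- both disappear when this connection closes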


You can list the tables and indexes in a database using a query like this.

SELECT type, name
FROM sqlite_schema
WHERE type IN ('table', 'index');

Partial indexes are neat – https://www.sqlite.org/partialindex.html. They let you statically optimize your indexes by omitting records you know you’re never going to look up.
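
For example, with a hypothetical orders table where the hot query only ever looks at pending rows, the index can skip everything else.

CREATE INDEX orders_pending_idx
ON orders (customer_id)
WHERE status = 'pending';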


Performance costs for database-level statements.


An interesting post on how SQLite uses indexes: https://misfra.me/2022/sqlite-automatic-indexes/.

The tl;dr is that sqlite does nested loop joins that are O(N * N) by default, optimizing the loop order depending on the type of join, but it will build an automatic (i.e. transient) b-tree index that behaves like a hash table, effectively achieving O(N log N). So instead of scanning the inner table in full for every row of the outer table, it scans the inner table once to build the index, then probes that index inside the loop.
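
You can watch it happen with EXPLAIN QUERY PLAN on a join between two made-up, unindexed tables; when sqlite decides a transient index is cheaper, the plan reports something like SEARCH b USING AUTOMATIC COVERING INDEX (x=?).

EXPLAIN QUERY PLAN
SELECT * FROM a JOIN b ON a.x = b.x;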

Like all db schema decisions, there’s a tradeoff here. Just build the index yourself if you run joins more often than you need fast writes.


The docs about BEGIN CONCURRENT (https://www.sqlite.org/cgi/src/doc/begin-concurrent/doc/begin_concurrent.md) have some good info about how to use WAL mode and multiple writers correctly. In short, you put the database in WAL mode, use multiple writers, and as long as you’re writing to separate tables you’ll avoid conflicts, but you may need to retry because you can get an SQLITE_BUSY when committing during another writer’s commit.

Basically, there’s no way to do concurrent writes that doesn’t require a busy_handler, but you can write to different tables and get some good speed.
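
The minimal per-connection setup is something like this; busy_timeout installs the default busy handler, which retries for up to that many milliseconds before giving up with SQLITE_BUSY.

PRAGMA journal_mode = WAL;
PRAGMA busy_timeout = 5000;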

Depending on your schema, I’d imagine you could implement table-specific locks in your application code to avoid conflict errors. But this is something that sqlite already does for you, so there might not be any room for improvement here.


Really looking forward to the sqlite HC-tree backend, and not just because services or other systems need highly concurrent writes. Regular desktop/local applications can benefit from multi-writer setups too.


Some useful docs pages that I keep coming back to.


sqlite | sql | notes | code
2023-01-22