I am using SQLite with paperless-ngx (an app to manage PDFs [4]). It is quite difficult to beat SQLite unless you have a very high degree of write parallelism.

SQLite is an embedded database: there is no socket to open; you access it directly through the filesystem.

If you are not dealing with big data or a high number of concurrent writers, you will have a hard time beating SQLite on modern hardware for average use cases.

I have written a super simple search engine [1] using Python asyncio, and SQLite is not the bottleneck so far.

If you are hitting SQLite's limits, I have happy news: an upgrade to PostgreSQL will be enough for a lot of use cases [2]: you can use it as a schemaless Mongo-like database, a simple queue system [3], or a search engine with stemming. After a while you can decide whether you need a specialized component (e.g. Kafka, Elasticsearch) for one of your services.

[1]: https://github.com/daitangio/find

[2]: https://gioorgi.com/2025/postgres-all/

[3]: https://github.com/daitangio/pque

[4]: https://docs.paperless-ngx.com
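The "schemaless Mongo-like" pattern described above for Postgres can even be sketched in SQLite itself, via its built-in JSON functions (assuming a SQLite build with the JSON1 functions enabled, which is the default in recent releases). The table and field names here are invented for illustration:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
# A single TEXT column holds arbitrary JSON documents, Mongo-style.
con.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO docs (body) VALUES (?)",
            (json.dumps({"kind": "invoice", "total": 42}),))
con.execute("INSERT INTO docs (body) VALUES (?)",
            (json.dumps({"kind": "note", "text": "hello"}),))

# Query inside the documents with json_extract.
total = con.execute(
    "SELECT json_extract(body, '$.total') FROM docs "
    "WHERE json_extract(body, '$.kind') = 'invoice'"
).fetchone()[0]
print(total)  # 42
```

You can also add indexes on JSON expressions if a field is queried often, which narrows the gap with a dedicated document store for simple workloads.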

Also, SQLite can be about 35% faster than the filesystem at reading small blobs:

https://sqlite.org/fasterthanfs.html

>For a 50-entry timeline, the latency is usually less than 25 milliseconds. Profiling shows that few of those milliseconds were spent inside the database engine.

And were they instead spent blocking on the disk for all of the extra queries that were made? Or is it trying to say that concatenating a handful of strings takes 22 ms? Considering how much games can render within a 16 ms budget, I don't see where that time is going rendering HTML.

There is some risk that, if you design your website to use a local database (sqlite, or a traditional database over a unix socket on the same machine), then switching later to a networked database is harder. In other words, once you design a system to do 200 queries per page, you’d essentially have to redesign the whole thing to switch later.

It seems like it mostly comes down to how likely it is that the site will grow large enough to need a networked database. And people probably wildly overestimate this. HackerNews, for example, runs on a single computer.

The thing is, SQLite can scale further vertically than most networked databases. In some contexts, like writes and interactive transactions, it outright scales further. [1]

That's before you even get into sharding sqlite.

[1] - https://andersmurphy.com/2025/12/02/100000-tps-over-a-billio...

I don't see how anyone would design a system that executes 200 queries per page. I understand that a system in use for many, many years, accumulating a lot of legacy code, eventually ends up there, but designing it that way? Never. That's not design; that's doing a bad job of design.
ctxc · 50 minutes ago
Sounds a bit like me, reading the comments before the article!
Did you read the OP?
The same is true for regular databases though, isn't it?

Network adds latency, and while it might be fine to run 500 queries with the database on the same machine, adding 1–5 ms per query makes it feel not okay.
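A quick back-of-envelope calculation makes the point; the per-query figures below are assumptions for illustration, not measurements:

```python
queries = 500
local_ms = 0.05   # assumed ~50 µs per in-process SQLite query
net_ms = 2.0      # assumed ~2 ms round trip per networked query

print(f"local:     {queries * local_ms:.0f} ms")   # well within a page budget
print(f"networked: {queries * net_ms:.0f} ms")     # a full second of latency
```

The work per query barely changes; the fixed per-query round-trip cost is what blows the budget.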

There's also the alternative of having a cluster with one local DB in each node.
This feels like a very elaborate way of saying that doing O(N) work is not a problem, but doing O(N) network calls is.
As another example, a SQL Server optimization per https://learn.microsoft.com/en-us/sql/t-sql/statements/set-n...:

> For stored procedures that contain several statements that don't return much actual data, or for procedures that contain Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant performance boost, because network traffic is greatly reduced.

Rather, I think their point is that since O(N) is really X * N, it's not the N that gets you, it's the X.
zffr · 1 hour ago
IMO the page is concise and well written. I wouldn’t call it very elaborate.

Maybe the page could have been shorter, but not by much.

It's in line with what I perceive as the more informal tone of the SQLite documentation in general. It's slightly wordier but fun to read, and it feels like the people who wrote it had a good time doing so.
It being so obvious, why is SQLite not the de facto standard?
No network, no write concurrency, no types to speak of... Where those things aren't needed, sqlite is the de facto standard. It's everywhere.
Perfect summary. I'll add: insane defaults that'll catch you unaware if you're not careful! Like foreign keys being opt-in; sure, it'll create 'em, but it won't enforce them by default!
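The foreign-key gotcha is easy to demonstrate with Python's stdlib sqlite3 module (schema invented for illustration):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so the
# PRAGMA below takes effect immediately instead of being swallowed by
# an implicitly opened transaction.
con = sqlite3.connect(":memory:", isolation_level=None)
con.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id));
""")

# Default behavior: the constraint is declared but NOT enforced.
con.execute("INSERT INTO child (parent_id) VALUES (999)")  # silently succeeds

con.execute("PRAGMA foreign_keys = ON")  # must be re-enabled per connection
try:
    con.execute("INSERT INTO child (parent_id) VALUES (999)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Note that `PRAGMA foreign_keys` is per-connection state, so every new connection (including those opened by a pool) has to set it again.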
Is it possible to fix some of these limitations by building DBMSes on top of SQLite, which might fix the sloppiness around types and foreign keys?
I haven't investigated this so I might be behind the times, but last I checked remotely managing an SQLite database, or having some sort of dashboarding tool run management reporting queries and the likes, or make a Retool app for it, was very messy. The benefit of not being networked becomes a downside.

Maybe this has been solved though? Anybody here running a serious backend-heavy app with SQLite in production and can share? How do you remotely edit data, do analytics queries etc on production data?

It is for use cases like local application storage, but it doesn't do well in (or isn't designed for) concurrent use cases like any networked services. SQLite is not like the other databases.
meken · 27 minutes ago
Side note - is this post accessible from the site somewhere? I don’t see where you’d find it (along with the C is Best post [1] shared here recently).

[1] https://sqlite.org/whyc.html

I don't have time to test it myself now, but it would be interesting to see a proper benchmark. We all know it's not suitable for high write concurrency, but SQLite should be a good amount faster for reads because of the lack of overhead. But how much faster is it really?
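Not a proper benchmark, but a minimal sketch with Python's stdlib sqlite3 module shows the shape such a read test would take; the absolute numbers will vary wildly by machine, and a fair comparison would need an equivalent client/server setup alongside it:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO kv VALUES (?, ?)",
                ((i, f"value-{i}") for i in range(10_000)))

# Time N primary-key point reads, all in-process: no socket, no protocol.
N = 10_000
start = time.perf_counter()
for i in range(N):
    row = con.execute("SELECT v FROM kv WHERE k = ?", (i,)).fetchone()
elapsed = time.perf_counter() - start
print(f"{N / elapsed:,.0f} point reads/sec in-process")
```

Each iteration pays only the cost of a library call plus a B-tree probe, which is exactly the overhead gap the comment is asking about.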
I’ve been experimenting with LiveStoreJS, which uses a custom SQLite WASM binary for event sync, so for simplicity I’ve also used it for regular application data in the browser and found no issues (yet). It surprised me that using a full database engine in memory could perform well vs native JS objects at scale, but perhaps at scale is when it starts to shine. Just be wary of size limits beyond 16–20 MB.
Has anyone tried using distributed versions of sqlite, such as rqlite? How reliable is it?
Make sure you click this link https://sqlite.org/src/timeline

So the SQLite developers use their own version-control system (Fossil), which uses SQLite for storage. Funny.

Yes. Git is the same way: it uses the Linux kernel for storage, and the Linux kernel is managed with Git. :P
1f60c · 1 hour ago
And all made by a group of Christian friends
nchmy · 1 hour ago
The article doesn't make it at all clear what it is comparing to - MySQL running remotely, or on the same server? I'm sure SQLite still has less "latency" than MySQL on localhost or a unix socket, but surely not meaningfully so. So, is SQLite really just that much faster at any SELECT query, or are they just comparing apples and oranges?

Or am I mistaken in thinking that communicating with MySQL on localhost has latency comparable to SQLite?

Even if you're on the same local server, you're still going over a socket to a different service, whereas with sqlite you remain in the same application / address space / insert words I don't fully understand here. So while client/server SQL servers are faster locally than on a remote server, they can (theoretically) never be as fast as SQLite in the same process.

Of course, SQLite and client/server database servers have different use cases, so it is kind of an apples and oranges comparison.

I think they're trying to not shame other services, but yes the comparison is vs networked whether that's local on loopback or not. For a small query, which is what they're talking about, it's not inconceivable that formatting into a network packet, passing through the userspace networking functions, into and through kernel, all back out the other side, then again for the response, is indeed meaningfully slower than a simple function call within the program.
Connecting to localhost still involves the network stack and a fair bit of overhead.

SQLite is embedded in your program's address space. You call its functions directly, like any other function. Depending on your language, there is probably some FFI overhead, but it's a lot less than an external localhost connection.

zffr · 1 hour ago
I think the most common set up is to have your application server and DB on different hosts. That way you can scale each independently.
Definitely something surprising that I discovered when building with SQLite recently. We're taught to avoid N+1 queries at almost any cost in client/server RDBMSs, but in SQLite the N+1 can actually be the best option in many cases.

I had to build some back-office tools and used Ruby on Rails with SQLite and didn't bother with "efficient" joins or anything. Just index the foreign keys, do N+1s everywhere - you'll be fine. The app is incredibly easy to maintain and add features to because of this, and the db is super easy to back up - literally just scp the SQLite db file somewhere else. Couldn't be happier with this setup.
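A minimal sketch of the pattern with Python's stdlib sqlite3 module (the posts/comments schema is invented for illustration): the per-row lookups are cheap index probes in the same address space, not network round trips.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE posts    (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comments (id INTEGER PRIMARY KEY,
                           post_id INTEGER, body TEXT);
    -- index the foreign key so each child lookup is a B-tree probe
    CREATE INDEX idx_comments_post ON comments(post_id);
""")
con.execute("INSERT INTO posts VALUES (1, 'hello')")
con.executemany("INSERT INTO comments (post_id, body) VALUES (?, ?)",
                [(1, "first"), (1, "second")])

# Classic N+1 shape: one query for the posts, then one query per post.
for post_id, title in con.execute("SELECT id, title FROM posts"):
    bodies = [b for (b,) in con.execute(
        "SELECT body FROM comments WHERE post_id = ?", (post_id,))]
    print(title, bodies)
```

Each inner query costs roughly a function call plus an index probe, which is why the usual "batch everything into one JOIN" advice matters much less here.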

Quite interesting. So SQL patterns can be optimised differently in SQLite.
One index scan beats 200 index lookups though surely?

I.e. sometimes one query is cheaper. It's just not about the network anymore.

Also you can run your "big" DB like postgres on the same machine too. No law against that.

wenc · 27 minutes ago
For analytic queries, yes, a single SQL query often beats many small ones. The query optimizer is allowed to see more opportunities to optimize and avoid unnecessary work.

Most SQLite queries, however, are not analytic queries. They're more like record retrievals.

So hitting a SQLite table with 200 "queries" is similar to hitting a web server with 200 GET requests.

In terms of ergonomics, SQLite feels more like an application file format with a SQL interface (though it is an embedded relational database).

https://www.sqlite.org/appfileformat.html

One query isn't cheaper than two queries that do the same amount of I/O and processing and operate in the same memory space.