SQLite is an embedded database: there is no socket to open; you access it directly through the file system.
Unless you plan on big data with a high number of writers, you will have a hard time beating SQLite on modern hardware for the average use case.
I have written a super simple search engine [1] using Python asyncio, and SQLite has not been the bottleneck so far.
If you are hitting SQLite's limits, I have happy news: upgrading to PostgreSQL will be enough for a lot of use cases [2]. You can use it as a schemaless Mongo-like database, a simple queue system [3], or a search engine with stemming (see the sketch after the links). After a while you can decide whether you need a specialized component (e.g. Kafka, Elasticsearch) for one of your services.
[1]: https://github.com/daitangio/find
[2]: https://gioorgi.com/2025/postgres-all/
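A minimal sketch of those three patterns on stock PostgreSQL, using the psycopg driver; the table and column names are illustrative, not taken from the linked post:

```python
import psycopg  # pip install psycopg
from psycopg.types.json import Jsonb

with psycopg.connect("dbname=app") as conn, conn.cursor() as cur:
    # 1. Schemaless, Mongo-like storage: one JSONB column holds arbitrary documents.
    cur.execute("CREATE TABLE IF NOT EXISTS docs (id serial PRIMARY KEY, body jsonb)")
    cur.execute("INSERT INTO docs (body) VALUES (%s)",
                [Jsonb({"user": "anna", "tags": ["db"]})])

    # 2. Simple queue: SKIP LOCKED lets concurrent workers dequeue distinct rows.
    cur.execute("CREATE TABLE IF NOT EXISTS jobs (id serial PRIMARY KEY, payload text)")
    cur.execute(
        "DELETE FROM jobs WHERE id = ("
        " SELECT id FROM jobs ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED"
        ") RETURNING payload"
    )

    # 3. Search with stemming: 'running' and 'runs' both reduce to the lexeme 'run'.
    cur.execute("SELECT to_tsvector('english', 'running runs')"
                " @@ to_tsquery('english', 'run')")
    print(cur.fetchone())  # (True,)
```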
And instead were spent blocking on the disk for all of the extra queries that were made? Or is it trying to say that concatenating a handful of strings takes 22 ms? Considering how much games can render within a 16 ms budget, I don't see where that time is going rendering HTML.
It seems like it mostly comes down to how likely it is that the site will grow large enough to need a networked database, and people probably wildly overestimate this. Hacker News, for example, runs on a single computer.
That's before you even get into sharding SQLite [1].
[1] - https://andersmurphy.com/2025/12/02/100000-tps-over-a-billio...
The network adds latency: while it might be fine to run 500 queries when the database is on the same machine, adding 1-5 ms per query puts an extra 0.5-2.5 seconds on the request, which does not feel okay.
> For stored procedures that contain several statements that don't return much actual data, or for procedures that contain Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant performance boost, because network traffic is greatly reduced.
Maybe the page could have been shorter, but not by much.
Maybe this has been solved, though? Anybody here running a serious backend-heavy app with SQLite in production who can share? How do you remotely edit data, run analytics queries, etc., on production data?
So the SQLite developers use their own version control system, which uses SQLite for storage. Funny.
Or am I mistaken in thinking that talking to MySQL on localhost has latency comparable to SQLite?
Of course, SQLite and client/server databases have different use cases, so it is a bit of an apples-and-oranges comparison.
SQLite is embedded in your program's address space; you call its functions directly, like any other function. Depending on your language there is probably some FFI overhead, but it's a lot less than an external localhost connection.
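To put a rough number on that, here is a quick microbenchmark sketch using Python's built-in sqlite3 module; on typical hardware an indexed point lookup lands in the single-digit microseconds, well below the roughly 1 ms floor of even a localhost TCP round trip (exact figures will vary):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # the same idea holds for an on-disk file
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 ((i, f"row{i}") for i in range(100_000)))

n = 10_000
start = time.perf_counter()
for i in range(n):
    # An in-process call: no socket, no protocol framing, just a function call.
    conn.execute("SELECT val FROM t WHERE id = ?", (i,)).fetchone()
elapsed = time.perf_counter() - start
print(f"{elapsed / n * 1e6:.1f} µs per point lookup")
```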
I had to build some back-office tools and used Ruby on Rails with SQLite, and didn't bother with "efficient" joins or anything. Just index the foreign keys and do N+1s everywhere; you'll be fine. The app is incredibly easy to maintain and extend because of this, and the db is super easy to back up: literally just scp the SQLite db file somewhere else. Couldn't be happier with this setup.
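That setup translates to any stack. Here is a rough Python equivalent of the two points, the N+1 access pattern and the backup, with a made-up schema for illustration; note that sqlite3's online backup API is a safer alternative to scp-ing a file that might be mid-write:

```python
import sqlite3

conn = sqlite3.connect("app.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE IF NOT EXISTS comments (
        id INTEGER PRIMARY KEY, post_id INTEGER, body TEXT);
    CREATE INDEX IF NOT EXISTS idx_comments_post ON comments (post_id);
""")

# The "N+1" pattern: one query per post. With an embedded database and an
# indexed foreign key, each extra query is just a cheap in-process call.
for post_id, title in conn.execute("SELECT id, title FROM posts"):
    comments = conn.execute(
        "SELECT body FROM comments WHERE post_id = ?", (post_id,)
    ).fetchall()

# Backup: conn.backup() produces a consistent copy even while the db is in
# use, unlike copying the raw file out from under active writers.
with sqlite3.connect("backup.db") as dest:
    conn.backup(dest)
```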
In other words, sometimes a single bigger query is still cheaper; it's just that the cost is no longer the network.
Also, you can run your "big" DB like Postgres on the same machine too. No law against that.
Most SQLite queries, however, are not analytic queries. They're more like record retrievals.
So hitting a SQLite table with 200 "queries" is similar to hitting a webserver with 200 "GET" commands.
In terms of ergonomics, SQLite feels more like an application file format with a SQL interface (though it is an embedded relational database).
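A toy sketch of that file-format framing: a hypothetical drawing app could save each document as a self-contained .sqlite file and get querying, transactions, and crash safety for free (the schema here is invented for illustration):

```python
import sqlite3

def save_document(path, shapes):
    """Persist a document as a single SQLite file."""
    with sqlite3.connect(path) as doc:
        doc.execute("CREATE TABLE IF NOT EXISTS shapes (kind TEXT, x REAL, y REAL)")
        doc.execute("DELETE FROM shapes")  # overwrite the previous contents
        doc.executemany("INSERT INTO shapes VALUES (?, ?, ?)", shapes)

save_document("drawing.sqlite", [("circle", 1.0, 2.0), ("rect", 3.0, 4.0)])
```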