Building SQLite with a small swarm
I hope some of you find this post on my experience with parallel coding agents interesting.
  • comex
  • ·
  • 1 hour ago
  • ·
  • [ - ]
If it works, then it’s impressive. Does it work? Looking at test.sh, the oracle tests (the ones compared against SQLite) seem to consist, in their entirety, of three trivial SELECT statements. SQLite has tens of thousands of tests; it should be possible to port some of those over to get a better idea of how functional this codebase is.
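
A differential harness would make that easy to extend. Here is a minimal sketch of what one oracle test could look like, using the rusqlite crate as the reference implementation and a hypothetical `swarm_db` API standing in for the generated engine (that name and its methods are assumptions, not anything from the project):

```rust
// Differential "oracle" test: run the same SQL against real SQLite
// (via the rusqlite crate) and against the generated engine, then
// compare the result rows. `swarm_db` and its methods are hypothetical
// stand-ins for the project's actual API.
use rusqlite::Connection;

fn oracle_check(setup: &[&str], query: &str) {
    // Reference answer from real SQLite, in memory.
    let oracle = Connection::open_in_memory().unwrap();
    for &stmt in setup {
        oracle.execute(stmt, ()).unwrap();
    }
    let mut prepared = oracle.prepare(query).unwrap();
    let expected: Vec<String> = prepared
        .query_map([], |row| row.get::<_, String>(0))
        .unwrap()
        .map(Result::unwrap)
        .collect();

    // Answer from the engine under test (hypothetical API).
    let db = swarm_db::Database::open_in_memory().unwrap();
    for &stmt in setup {
        db.execute(stmt).unwrap();
    }
    let actual: Vec<String> = db.query_first_column(query).unwrap();

    assert_eq!(expected, actual, "divergence on query: {query}");
}
```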

Edit: I looked over some of the code.

It's not good. It's certainly not anywhere near SQLite's quality, performance, or codebase size. Many elements are the most basic thing that could possibly work, or else missing entirely. To name some examples:

- Absolutely no concurrency.

- The B-tree implementation has a line "// TODO: Free old overflow pages if any."

- When the pager adds a page to the free list, it does a linear search through the entire free list (which can get arbitrarily large) just to make sure the page isn't in the list already (see the first sketch after this list).

- "//! The current planner scope is intentionally small: - recognize single-table `WHERE` predicates that can use an index - choose between full table scan and index-driven lookup."

- The pager calls clone() on large buffers, which is needlessly inefficient, kind of a newbie Rust mistake (see the second sketch after this list).
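
For the free-list point, the linear scan can be replaced by a set-based membership check. A minimal sketch, with type and field names that are illustrative rather than taken from the actual codebase:

```rust
use std::collections::HashSet;

/// Illustrative pager state (names are assumptions, not the project's):
/// free page numbers kept in insertion order for allocation policy,
/// plus a HashSet for O(1) duplicate checks.
struct Pager {
    free_list: Vec<u32>,
    free_set: HashSet<u32>,
}

impl Pager {
    /// Add a page to the free list, rejecting duplicates in O(1)
    /// instead of scanning the whole list.
    fn free_page(&mut self, page_no: u32) {
        // `insert` returns false if the page was already present.
        if self.free_set.insert(page_no) {
            self.free_list.push(page_no);
        }
    }
}
```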
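
And for the clone() point, sharing page buffers behind an Arc, with copy-on-write only when a page is actually modified, avoids copying whole pages on every access. Again a rough sketch with illustrative names:

```rust
use std::sync::Arc;

const PAGE_SIZE: usize = 4096;

/// A cached page whose contents are shared instead of cloned on read
/// (illustrative type, not the project's actual pager).
struct Page {
    data: Arc<[u8; PAGE_SIZE]>,
}

impl Page {
    /// Handing out a read-only view bumps a refcount instead of copying 4 KiB.
    fn snapshot(&self) -> Arc<[u8; PAGE_SIZE]> {
        Arc::clone(&self.data)
    }

    /// Copy-on-write: a private buffer is materialized only when the page
    /// is modified while other readers still hold the old contents.
    fn data_mut(&mut self) -> &mut [u8; PAGE_SIZE] {
        Arc::make_mut(&mut self.data)
    }
}
```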

However…

It does seem like a codebase that would basically work. At a high level, it has the necessary components and the architecture isn't insane. I'm sure there are bugs, but I think the AI could iron them out, given some more time spent on testing. And at that point, I think it could be perfectly suitable as an embedded database for some application, as long as you don't have complex needs.

In practice, there is little reason not to just reach for actual SQLite, which is much more sophisticated. But I can think of one possible reason: SQLite has been known to have memory safety vulnerabilities, whereas this codebase is written in Rust with no unsafe code. It might eat your data, but it won't corrupt memory.
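
If the project enforces that guarantee at the compiler level, it is a single crate-root attribute (whether the repo actually sets it, I haven't checked):

```rust
// At the crate root (lib.rs): any `unsafe` block or fn anywhere in the
// crate becomes a hard compile error, so memory-safety risk is confined
// to the standard library and dependencies.
#![forbid(unsafe_code)]
```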

That is impressive enough for now, I think.

SQLite is tested against failure to allocate at every step of its operation: running out of memory never causes it to fail in a serious way, e.g. with data loss. It's far more robust than almost every other library.
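
On that allocation-failure point: Rust's default collections abort the process on OOM, so a port chasing that level of robustness would have to reach for the fallible allocation APIs explicitly. A minimal sketch, with an illustrative helper that is not from the project:

```rust
use std::collections::TryReserveError;

/// Grow a page cache without aborting the process on allocation failure,
/// so the caller can surface an SQLITE_NOMEM-style error instead.
/// (Illustrative helper, not from the project.)
fn grow_page_cache(
    cache: &mut Vec<[u8; 4096]>,
    extra_pages: usize,
) -> Result<(), TryReserveError> {
    cache.try_reserve(extra_pages)
}
```
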
Unfortunately, it is not so easy. If rigorous tests at every step were able to guarantee that your program can't be exploited, we wouldn't need languages like Rust at all. But once you have a sufficiently complex program in an unsafe language, you will have memory corruption bugs. And once you have memory corruption bugs, you will eventually have code execution exploits. You might have to chain them more than in the good old days, but they will be there. SQLite even had single-memory-write bugs that allowed code execution and that lay in the code for 20 years without anyone spotting them. Who knows how many hackers and three-letter agencies had exploited them by the time they were finally found by benevolent security researchers.
  • gmerc
  • ·
  • 9 minutes ago
  • ·
  • [ - ]
Why do people fall for this? We're compressing knowledge, including the source code of SQLite, into storage, then retrieving and shifting it along latents at tremendous cost.
> 84 / 154 commits (54.5%) were lock/claim/stale-lock/release coordination.

Parallelism over one codebase is clearly not very useful.

I don't understand why going as fast as possible is the goal. We should be trying to be as correct as possible. The whole point is that these agents can run while we sleep. Convergence is non-linear. You want every step to be in the right direction. Think of it more as a series of crystalline database transactions that must unroll in perfect order than as a big pile of rocks that needs to be moved from A to B.

I can't quite tell if the tests that passed were SQLite's own famously thorough test suite, or your own.

If it's SQLite's suite, then it's great the models managed to get there, but one issue (without trying to be too pessimistic) is that the models had the test suite there to validate against. SQLite's devs famously spend more of their time making the tests than building the functionality. If we can get AI that reliably defines the functionality of such programs by building the test suite over years of trial and error, then we'll have what people are saying we have.

This blog post doesn't say anything about your experience.

How well does the resulting code perform? What are the trade-offs/limitations/benefits compared to SQLite? What problems does it solve?

Why did you use this process? This mixture of models? Why is this a good setup?

Take a look at SQLite’s test coverage. It’s impressive: https://sqlite.org/testing.html

The tests amount to 590x the application code.

The fact that AI agents can even build something that purports to be a working database is also impressive.

A small, highly experienced team steering Claude might be able to replicate the architecture and test suite reasonably quickly.

1-shotting something that looks this good means that with a few helping hands, small teams can likely accomplish decades of work in mere months.

Small teams of senior engineers can probably begin to replicate entire companies' worth of product surface area.

What's the point of building something that already exists in open source? It's just going to use code that already exists. There are probably dozens of examples written by humans that it can pull from.
Did they pass all unit tests in the end?
It doesn’t matter, just jump on the hype train!
  • shoo
  • ·
  • 41 minutes ago
  • ·
  • [ - ]
or jump off, and instead grab onto the (well-deserved) sqlite-test-suite hype train.
(I'm being sarcastic.)
  • k33n
  • ·
  • 12 minutes ago
  • ·
  • [ - ]
> There isn’t a great way to record token usage since each platform uses a different format, so I don’t have a grasp on which agent pulled the most weight

lol
