The idea: for single-server deployments, SQLite can handle 100k+ ops/sec with WAL mode, so why add infrastructure?
Two modes:
- Embedded: everything in-process, just `import` and go
- Server: run `bunqueue start`, connect multiple workers via TCP
Features: priorities, delays, retries, cron jobs, DLQ, job dependencies, and a BullMQ-compatible API (sketch after the list below).
Trade-offs vs Redis:
- Not for multi-region distributed systems
- Best for single server or small clusters
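For a feel of the embedded mode, here's a minimal sketch. The import path and exact option names are assumptions based on the BullMQ-compatibility claim, not confirmed bunqueue API:

```ts
// Hypothetical embedded-mode usage; the "bunqueue" import path and option
// names are assumed from the BullMQ-compatibility claim.
import { Queue, Worker } from "bunqueue";

const queue = new Queue("emails"); // embedded: no server, no Redis

// BullMQ-style job options: priority, delay, retry attempts
await queue.add(
  "welcome",
  { to: "user@example.com" },
  { priority: 1, delay: 5_000, attempts: 3 },
);

new Worker("emails", async (job) => {
  // throwing here triggers a retry; exhausted retries go to the DLQ
  console.log("sending to", job.data.to);
});
```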
Happy to answer any questions about the architecture!

For us this resulted in a big weak point in our architecture: when the service reboots, both job pushing and job pulling stop, and since pushing happens on the API side, a reboot takes the API down with it. With containers we could run multiple instances at the same time, but then the shared reading/writing needs exactly the kind of locking Redis abstracts away.
We are considering BullMQ, because the architecture is sane:
- job push: the API writes to Redis
- job pull: a worker reads from Redis, then writes the completion back
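For reference, that split is exactly what BullMQ's documented Queue/Worker API looks like (the Redis connection details here are placeholders):

```ts
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 }; // placeholder Redis

// job push: the API process only writes to Redis
const queue = new Queue("tasks", { connection });
await queue.add("resize", { imageId: 42 });

// job pull: a worker reads from Redis and writes the completion back
new Worker(
  "tasks",
  async (job) => {
    // the returned value is stored in Redis as the job's result
    return { ok: true, imageId: job.data.imageId };
  },
  { connection },
);
```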
How do you see this issue for bunqueue? What happens when it goes down for 5 minutes: can jobs still be enqueued? Can you run multiple instances of it for failover?
Our throughput (jobs/sec) is small, but we do have 100k+ scheduled jobs due anywhere from minutes to months from now.
Current state: bunqueue is single-server with SQLite persistence.
If the server goes down for 5 minutes, clients cannot push/pull during that window. However: the client SDK has automatic reconnection with exponential backoff + jitter, all data is safe on disk (SQLite WAL mode), and on restart active jobs are detected as stalled and re-queued automatically. Delayed/scheduled jobs resume from their run_at timestamps.
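For the curious, the reconnect logic is just exponential backoff with jitter; a generic sketch of the technique (not the SDK's actual code):

```ts
// Generic exponential backoff with "full jitter"; illustrative only,
// not bunqueue's actual SDK internals.
async function connectWithRetry(connect: () => Promise<void>): Promise<void> {
  const baseMs = 250;
  const capMs = 30_000;
  for (let attempt = 0; ; attempt++) {
    try {
      return await connect();
    } catch {
      // double the ceiling each attempt, capped, then randomize fully
      const backoff = Math.min(capMs, baseMs * 2 ** attempt);
      const jittered = Math.random() * backoff; // full jitter
      await new Promise((resolve) => setTimeout(resolve, jittered));
    }
  }
}
```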
For your use case (100k+ scheduled jobs, low throughput), bunqueue is well optimized. We use a MinHeap plus SQLite indexes for an O(k) refresh, where k is the number of jobs becoming ready, rather than an O(n) scan.
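Roughly, the refresh looks like this (illustrative schema, simplified, not the actual source); the (state, run_at) index is what makes the update touch only the k ready rows:

```ts
import { Database } from "bun:sqlite";

const db = new Database("queue.db");
db.run("PRAGMA journal_mode = WAL;");
db.run(`CREATE TABLE IF NOT EXISTS jobs (
  id INTEGER PRIMARY KEY,
  state TEXT NOT NULL,
  run_at INTEGER NOT NULL,
  payload TEXT
);`);
// The index lets SQLite seek directly to ('delayed', run_at <= now)
// instead of scanning all n rows.
db.run("CREATE INDEX IF NOT EXISTS idx_jobs_ready ON jobs (state, run_at);");

// An in-memory min-heap of run_at values tells the scheduler when to wake
// up next; each wakeup promotes only the k jobs that just became ready.
function promoteReadyJobs(now: number = Date.now()): void {
  db.prepare(
    "UPDATE jobs SET state = 'waiting' WHERE state = 'delayed' AND run_at <= ?",
  ).run(now);
}
```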
What bunqueue does NOT have today: no clustering, no multi-instance with shared state, no automatic failover, no replication.
What it does have: automated S3 backups (compressed, checksummed) for disaster recovery, a "durable: true" option for zero data loss on critical jobs, and zero external dependencies.
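To give a sense of what a flag like `durable: true` means in SQLite terms (a sketch of the trade-off only, not the exact implementation): WAL with `synchronous=NORMAL` is fast but can lose the newest commits on power failure, while `synchronous=FULL` fsyncs on every commit.

```ts
// Sketch of one way a durable flag can map onto SQLite's synchronous
// pragma; the real mechanism may differ. Assumes the jobs table from
// the earlier sketch.
import { Database } from "bun:sqlite";

const fast = new Database("queue.db");
fast.run("PRAGMA journal_mode = WAL;");
fast.run("PRAGMA synchronous = NORMAL;"); // fast path: fsync deferred

const safe = new Database("queue.db");
safe.run("PRAGMA synchronous = FULL;"); // durable path: fsync per commit

function insertJob(payload: string, opts: { durable?: boolean } = {}): void {
  const db = opts.durable ? safe : fast;
  db.prepare(
    "INSERT INTO jobs (state, run_at, payload) VALUES ('waiting', 0, ?)",
  ).run(payload);
}
```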
Roadmap: HA is something we're actively working toward: native HA with leader election and replication, plus a managed cloud offering with automatic failover and geographic distribution.
Bottom line: if you need true HA today, BullMQ + Redis Sentinel/Cluster is the safer choice. bunqueue is for when you want simplicity, high performance (~100k jobs/sec), and can tolerate brief downtime with automatic recovery.
Anyway, it made me realize that there's really no reason you can't use a SQL database as a backing store for queue stuff. I should try building my own at some point.
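The core of that pattern is one atomic claim statement, so concurrent workers never grab the same row. A minimal SQLite version (3.35+ for RETURNING; on Postgres you'd reach for SELECT ... FOR UPDATE SKIP LOCKED instead), with the schema from the sketches above:

```ts
import { Database } from "bun:sqlite";

const db = new Database("queue.db");

// Atomically flip one waiting job to active and hand it to the caller.
function claimNextJob(): { id: number; payload: string } | null {
  return db
    .prepare(
      `UPDATE jobs SET state = 'active'
       WHERE id = (
         SELECT id FROM jobs
         WHERE state = 'waiting'
         ORDER BY run_at
         LIMIT 1
       )
       RETURNING id, payload`,
    )
    .get() as { id: number; payload: string } | null; // null when empty
}
```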
The SQL-as-queue pattern is definitely underrated. Great to hear it worked well at that scale.
If you're too lazy to even write your own comments, I suspect you're too lazy to have written your own software.
At least preface your comment with "The LLM says" or preface your submission with "The LLM wrote this software".
When you repeat the same thing over and over you naturally end up with a tight version of it. That's not an LLM, that's just how it works when you talk about something a lot.
And honestly even if I did use an LLM to write a comment on HN, so what? The code is what matters.
Go run the benchmarks, read the source, open an issue if something breaks.
That's the part that actually counts.
I didn't say it sounded "polished", I said exactly the opposite.
> And honestly even if I did use an LLM to write a comment on HN, so what?
If we wanted to chat with bots, we know where to find them.
Part of what makes these forums fun is human responses. LLMs write "good enough" text but they come off as robotic and inhuman. The only reason to go onto one of these forums is to communicate with people. If I wanted to talk to a robot, I would talk to ChatGPT, which I can do as often as I want.
Using an LLM to polish grammar vs. having it generate opinions wholesale are different things.
Again, not necessarily saying that's what you did, just that that's the red flag.