• apgwoz · 21 minutes ago
Any good and honest Tansu experience reports out there? It would be nice to understand how “bleeding edge” this actually is in practice. The idea of a Kafka-compatible but trivial-to-run system like this is very intriguing!
• nchmy · 11 minutes ago
I wonder how it compares to Redpanda
Great link. I've always been drawn to sqlite3 just from a simplicity and operational point of view. And with tools like Litestream ("make it easy to replicate") and sqlite-utils ("make it easy to use"), it just becomes easier.

And one of the first patterns I wanted to use was this: just a read-only event log that's replicated, which is very easy to understand and operate. Kafka is a beast to manage and run. We picked it at my last company, and it was a mistake; a simple DB would have sufficed.

https://github.com/simonw/sqlite-utils https://litestream.io/
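A minimal sketch of what that pattern can look like, using Python's built-in sqlite3 module (the "events" table and its columns are purely illustrative here, not anything Litestream or sqlite-utils requires):

  import sqlite3

  conn = sqlite3.connect("events.db")
  conn.execute("PRAGMA journal_mode = WAL")  # readers keep working while the writer appends

  conn.execute("""
      CREATE TABLE IF NOT EXISTS events (
          id      INTEGER PRIMARY KEY AUTOINCREMENT,  -- monotonically increasing "offset"
          created TEXT DEFAULT CURRENT_TIMESTAMP,
          topic   TEXT NOT NULL,
          payload TEXT NOT NULL                       -- e.g. a JSON-encoded event body
      )
  """)

  # Producers only ever INSERT; consumers read forward from the last id they have seen.
  conn.execute("INSERT INTO events (topic, payload) VALUES (?, ?)", ("orders", '{"order_id": 1}'))
  conn.commit()

  for row in conn.execute("SELECT id, topic, payload FROM events WHERE id > ? ORDER BY id", (0,)):
      print(row)

Litestream then just replicates the single database file to object storage, and replicas restored from it are naturally read-only.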

I love the idea of SQLite, but I actually really dislike using it.

I think part of my issue is that a lot of uses of it end up having a big global lock on the database file (see: older versions of Emby/Jellyfin), so you can't use it with multiple threads or processes. But I also haven't really ever found a case to use it over other options. I've never really felt the need to do anything like a JOIN or a UNION when handling local configuration, and for anything more complicated than local configuration, I likely have access to Postgres or something. I mean, the Postgres executable is only ten or twenty megs on Linux, so it's not even that much bigger than SQLite for modern computers.

• mjmas · 52 minutes ago

  PRAGMA journal_mode = WAL;
And set the busy timeout as well:

https://www.sqlite.org/c3ref/busy_timeout.html
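A minimal sketch of applying both settings from Python's sqlite3 (the 5000 ms value is just an arbitrary example):

  import sqlite3

  conn = sqlite3.connect("app.db")
  conn.execute("PRAGMA journal_mode = WAL")   # readers no longer block the single writer
  conn.execute("PRAGMA busy_timeout = 5000")  # wait up to 5 s for a lock instead of failing immediately

  # ...normal reads and writes from multiple threads/processes...

Python's sqlite3.connect() also takes a timeout argument that serves the same purpose as the busy timeout.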

• ktzar · 6 hours ago
I didn't know about Tansu and probably would not use it for anything too serious (yet!). But as a firm believer in event sourcing and the change of paradigm that Kafka brings, this is certainly interesting for small projects.
Quite cool. 7000 records per second is usable for a lot of projects.

One note on the backup/migrate step: I think you need a shared lock on the database before you copy the file. If you don't, the copy can end up corrupted. The SQLite docs have other recommendations too:

https://sqlite.org/backup.html
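For what it's worth, a minimal sketch of the safer route using SQLite's online backup API through Python's sqlite3 module (the file names are placeholders):

  import sqlite3

  src = sqlite3.connect("live.db")
  dst = sqlite3.connect("backup.db")

  src.backup(dst)  # online backup API: takes a consistent snapshot even while live.db is being written
  dst.close()
  src.close()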

How does it compare to Redis streams with persistent storage?
Everything is dead; what lives on is their protocol.

Same for Redis, Kafka, ...