Tell that to Synapse customers. Many millions of dollars are missing.

Banks have to follow strict rules to account for where all the money goes. But the way fintechs work, they usually just have one or a couple of underlying "FBO" accounts where all the pooled money is held, and then the fintech builds a ledger on top of this (and, as the article points out, to varying levels of engineering competence) to track each individual customer's balance within this big pool of money. In Synapse's case, their ledger said the total of all of their individual customer balances ended up being much more than the actual funds held in the underlying FBO accounts. Lots of folks are assuming fraud, but I'm willing to bet it was just a shitty, buggy ledger.
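The failure mode is easy to see in a toy model. This is a hypothetical sketch (names and structure invented for illustration) of the pooled-FBO-plus-ledger arrangement, and the invariant that reconciliation has to check:

```python
from collections import defaultdict

class FboLedger:
    """Toy model: one pooled FBO balance at the bank, per-customer
    virtual balances kept on the fintech's own ledger.
    The invariant: sum of customer balances == actual FBO balance."""
    def __init__(self):
        self.fbo_cents = 0                 # actual pooled funds at the bank
        self.customers = defaultdict(int)  # fintech's own bookkeeping

    def deposit(self, customer, cents):
        self.fbo_cents += cents            # money really arrives in the pool
        self.customers[customer] += cents  # ...and is credited on the ledger

    def withdraw(self, customer, cents):
        if self.customers[customer] < cents:
            raise ValueError("insufficient balance")
        self.customers[customer] -= cents
        self.fbo_cents -= cents

    def reconciles(self):
        return sum(self.customers.values()) == self.fbo_cents
```

Any buggy code path that credits the ledger without real money moving (a double-credit on a retry, say) silently breaks `reconciles()`, and nothing in the pool itself will ever complain.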

FWIW, after seeing "how the sausage is made", I would never put money into a fintech depository account. Use a real bank. Fintechs also often put out the misleading promise that deposits are FDIC-insured, but this only protects you if the underlying bank goes belly up, not if the fintech loses track of your money.

See https://www.forbes.com/sites/zennonkapron/2024/11/08/what-th...

90 million dollars missing and 250 million dollars frozen. That $250M is probably needed by some people to pay rent.

Backed by Andreessen Horowitz, who are conducting a scorched-earth jihad against all government regulation.

https://finance.yahoo.com/personal-finance/synapse-bankruptc...

The sad thing is that most people don't learn lessons from history. It took me far too long to start learning lessons from history.

After an asset bubble and collapse people will understand why we have a lot of the regulations from the 1930s.

Sadly I don’t think it will happen.

In previous crises, people could depend on being educated by mostly responsible media. Today both mainstream and social media are entertainment-first and don't care for truth or their role in educating society.

It is more likely they will be taught to blame some boogeymen who have nothing to do with the problem rather than address the real one.

Oh no, the media actually got way more responsible after the World Wars. Before the wars, mainstream media were more like super-biased propaganda machines. Only after the World Wars did society realize it needed more toned-down, truth-first, analytical media.
I agree, I elaborated on this in a child comment.

Just a minor point: I would say it is ~120 years rather than 70. There was influential journalism through the early 20th century, when professional journalism took shape and influenced policy; of note are the antitrust actions taken under the Sherman Act in the early 1900s.

Media was never all that responsible.
Media biases and ethics in journalism are master's-degree-sized topics of their own.

Very briefly: you are not wrong, yellow journalism has a long history and is not new. In America, from the times of the Federalist Papers through slavery, Jim Crow, and the antisemitism of the '30s, to civil rights and into modern times, it has been a powerful tool to shape public opinion.

There is nuance to this, however: the era of professional journalism has been brief, only 100 or so years. In that time, media had the most impact in shining a light on the truth, notably with reporting on Watergate, the Pentagon Papers, or Hersey on Hiroshima, and so on. That era is coming to an end.

As the cost of publishing drops by orders of magnitude with every generation of technology, as happened with cheap and fast printing presses, radio, broadcast and then cable TV, and finally the internet and mobile, the problem becomes more massive and much harder to regulate, and the quality of public discourse and the nature of truth decline along with it.

Basically it boils down to this: we had a good (relatively speaking) 100-year run with media and a corresponding improvement in civil liberties and good governance. We can no longer depend on an educated public taking decisions, sooner or later, in the right direction like we have for the last century or so.

oblio · 3 weeks ago
Conceptually regulation is "not very complicated".

1. Bring back those laws requiring fairness of media representation.

2. Force standardized disclosure of sponsored content of any type (total, segments, placement). Many countries already do this. Standardized = big, high contrast "Ad" sign in the corner with mandatory size proportional to content size.

3. Mandate providing sources.

4. Treat all influencers with an audience above NNN followers (10000?) as mass media.

5. Require that widely shared content is fact-checked, that the fact check is automatically included when the content is shared, and provide recourse against fact checks up to the legal system.

6. For sensitive topics (politics, primarily) require AML and KYC disclosures of funding, primarily to find foreign funding sources and propaganda.

However, you know, vested interests, the bane of humanity.

> Bring back those laws requiring fairness of media representation.

There is no way for this not to become censorship and not to be used to suppress less powerful opposition, which is exactly how it was used in the past. Plus, just look at what both-sidesism currently does: it motivates journalists to write as if both sides were equal in situations where they clearly are not.

> Require that widely shared content is fact checked and that fact checking is automatically included while sharing and provide recourse for fact checking up to the legal system.

Fact checking is irrelevant to public opinion. And again, it is not that difficult to bias it.

Confidential sources are necessary for lots of whistleblower based reporting.

Fairness of media representation seems hard to define and prone to abuse.

But mandating that financial conflicts be disclosed and ads labeled seems reasonable.

As much as I agree, you might run into a challenge with case law in the US. IANAL, but I reckon Near v. Minnesota (https://en.m.wikipedia.org/wiki/Near_v._Minnesota) is relevant here.
Problem is: some people will claim their free speech rights are being violated. The legal system is a weak guarantee: just check how the legal system works in a dictatorship, or if a political faction decides to throw a lot of money into fake news and opinion laundering.
It is baffling to me how "free speech" has come to mean "freedom to use mass broadcasting systems".

Of course anyone should be free to publicly say anything, however untrue it might be.

Should they be free to broadcast their nonsense to millions of people?

I don't know, but I do feel these are two different things.

They are different: in the U.S. that's why "freedom of the press" is also written down in the First Amendment, and historically, that's exactly how the U.S. courts have interpreted the phrase "freedom of the press" - as a (pretty) general right for anyone to use any media technology they can access to spread any ideas they want. There are always some limits, but from the start "the press" meant "the printing press", not "institutionalized news organizations". It's a general technology-usage right, not a specialized right for a certain group. Everyone is allowed to do more than just talk, or even shout. People can have different opinions on how wise that right is, but in the US at least, you are indeed free to broadcast your nonsense to millions of people, if you have the resources.
> broadcast your nonsense to millions of people, if you have the resources

Spot on. Today you can do that for as close to free as possible. In eras past that was not possible; it was expensive, so only a few could do it, and that served as a moderating influence: it was not easy for fringe beliefs to become mainstream. The gatekeeping had the downside of suppressing voices, particularly minority and oppressed voices, so it was not all rosy.

The only thing we know is that we can no longer use the past as a reference to model the future of politics, governance, or media, or which institutions will survive, and in what distorted versions, in even 10-30 years.

>> broadcast your nonsense to millions of people, if you have the resources

> Spot on, today you can do that as close to free as possible.

Are you sure?

You can author a tweet for free, yeah. Then you let Musk do the broadcasting if he so pleases. Users have no control over the broadcasting; platforms do.

You will. Your kids won’t.
That's not really a lot compared to what Wall Street is stealing daily.
ozim · 3 weeks ago
Ugh that’s not a quality on its own.

Someone who "steals less" is not better.

Still fucking thieves.

ozim · 3 weeks ago
You do understand that what you described is basically the Bolshevik/French revolution, only in different times.

Some men with some power using starry-eyed young people to grab much more power from incumbents.

At a big co I worked at, the lack of consistency between trading systems caused money to (dis)appear into (or out of) thin air.

Prior to one of these hiccups, I hypothesized, given how shitty the codebase was, that they must be tracking this stuff poorly.

This led to an argument with my boss, who assumed things magically worked.

Days later, we received an email announcing an audit on one of these accounting discrepancies.

JPMC proposed using crypto, internally, to consistently manage cash flow.

Not sure if it went anywhere.

At all of the exchanges and trading firms I've worked with (granted, none in crypto), one of the "must haves" has been a reconciliation system out of band of the trading platforms. In practice, one of these almost always belongs to the risk group (this is usually dependent on drop copy), but the other is entirely based on pcaps at the point of contact with every counterparty, with positions/trades reconstructed from there.

If any discrepancies are found that persist over some time horizon it can be cause to stop all activity.
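A sketch of what such an out-of-band check might look like. This is a simplified, hypothetical model (real systems reconcile per account, per instrument, and per counterparty), just to show the "halt if a discrepancy persists over some horizon" logic:

```python
def reconcile(platform_positions, pcap_positions):
    """Compare per-symbol positions from the trading platform against
    positions independently reconstructed from packet captures.
    Returns the set of symbols that disagree."""
    symbols = set(platform_positions) | set(pcap_positions)
    return {s for s in symbols
            if platform_positions.get(s, 0) != pcap_positions.get(s, 0)}

class KillSwitch:
    """Signal a halt if any discrepancy persists for `horizon`
    consecutive reconciliation checks."""
    def __init__(self, horizon=3):
        self.horizon = horizon
        self.streaks = {}  # symbol -> consecutive mismatch count

    def check(self, platform_positions, pcap_positions):
        bad = reconcile(platform_positions, pcap_positions)
        # Streaks reset automatically for symbols that come back in line.
        self.streaks = {s: self.streaks.get(s, 0) + 1 for s in bad}
        return any(n >= self.horizon for n in self.streaks.values())
```

The point of the design is that the pcap-derived side shares no code with the trading platform, so a single bug can't corrupt both views at once.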

ajb · 3 weeks ago
Wait, pcap as in wireshark packet capture?
I'm not the commenter, but yes, trading firms often record all order gateway traffic to/from brokers or exchanges at the TCP/IP packet level, in what are referred to as "pcap files". Awkwardly low-level to work with, but it means you know for sure what you sent, not what your software thought it was sending!
The ultimate source of truth about what orders you sent to the exchange is the exact set of bits sent to the exchange. This is very important because your software can have bugs (and so can theirs), so using the packet captures from that wire directly is the only real way to know what really happened.
But then the software capturing, storing and displaying the packets can also have bugs.
Among all the software installed in a reputable Linux system, tcpdump and libpcap are some of the most battle tested pieces one can find.

Wireshark has bugs, yes. Mostly in the dissectors and in the UI. But the packet capture itself is through libpcap. Also, to point out the obvious: pcap viewers in turn are auditable if and when necessary.

Cisco switches can mirror ports with a feature called Switch Port Analyzer (SPAN). For a monitored port, one can specify the direction (frames in, out, or both), and the destination port or VLAN.

SPAN ports are great for network troubleshooting. They're also nice for security monitors, such as an intrusion detection system. The IDS logically sees traffic "on-line," but completely transparent to users. If the IDS fails, traffic fails open (which wouldn't be acceptable in some circumstances, but it all depends on your priorities).

When I think "Cisco" I think error-free. /s

No, really, I get where you and your parent are coming from. It is a low probability. But occasionally there is also thoroughly verified application code out there. That is when you are asking yourself where the error really is. It could be any layer.

They can, but it’s far less likely to be incorrect on the capture side. They are just moving bytes, not really doing anything with structured data.

Parsing the pcaps is much more prone to bugs than capturing and storing, but that’s easier to verify with deserialize/serialize equality checks.

The result of bitter lessons learnt I'm sure. Lessons the fintechs have not learned.
baq · 3 weeks ago
That makes sense - but it's still somewhat surprising that there's nothing better. I guess that's the equivalent of the modern paper trail.
It’s the closest to truth you can find (the network capture, not the drop copy). If it wasn’t on the network outbound, you didn’t send it, and it’s pretty damn close to an immutable record.
ajb · 3 weeks ago
It makes sense. I'm a little surprised that they'd do the day-to-day reconciliation from it, but I suppose if you had to write the code to decode them anyway for some exceptional purpose, you might as well use it day to day as well.
The storage requirements of this must be impressive.
Storage is cheap, and the overall figures are not that outlandish. If we look at a suitable first-page search result[0] and round figures up, we get to about 7.5 TB per day.

And how did I get that figure?

I'm going to fold pcap overhead into the per-message size estimate. Let's assume a trading day at an exchange, including after hours activity, is 14 hours. (~50k seconds) If we estimate that during the highest peaks of trading activity the exchange receives about 2M messages per second, then during more serene hours the average could be about 500k messages per second. Let's guess that the average rate applies 95% of the time and the peak rate the remaining 5% of the time. That gives us an average rate of about 575k messages per second. Round that up to 600k.

If we assume that an average FIX message is about 200 bytes of data, and add 50 bytes of IP + pcap framing overhead, we get to ~250 bytes of transmitted data per message. At 600k messages per second, 14 hours a day, the total amount of trading data received by an exchange would then come to about 7.5 TB per day.

Before compression for longer-term storage. Whether you consider the aggregate storage requirements impressive or merely slightly inconvenient is a more personal matter.

0: https://robertwray.co.uk/blog/the-anatomy-of-a-fix-message
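The estimate is easy to redo in a few lines under the stated assumptions (peak/quiet rates, 95/5 time split, 250 bytes per message, 14-hour day):

```python
peak, quiet = 2_000_000, 500_000   # assumed messages per second
avg = 0.95 * quiet + 0.05 * peak   # time-weighted average message rate
rate = 600_000                     # rounded up from 575k, as in the text
bytes_per_msg = 200 + 50           # FIX payload + IP/pcap framing overhead
seconds = 14 * 3600                # a 14-hour trading day

total_bytes = rate * bytes_per_msg * seconds
print(avg)                # 575000.0
print(total_bytes / 1e12) # 7.56  (TB per day, before compression)
```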

tetha · 3 weeks ago
And compression and deduplication should be very happy with this. A lot of the message contents and the IP/pcap framing overheads should be pretty low-entropy, with enough patterns to deduplicate.

It could be funny, though, because you could bump up your archive storage requirements by changing an IP address, or have someone else do that. But that's life.
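As a rough illustration of how compressible this kind of traffic is, here is a toy experiment with zlib on a synthetic, highly repetitive "capture" (the message contents are invented; real FIX traffic varies more, but the framing and most tags still repeat heavily):

```python
import struct
import zlib

# A synthetic capture: the same 250-byte "message" with only a sequence
# number and timestamp varying, repeated 10,000 times -- low entropy.
msg = b"8=FIX.4.2|35=D|55=ACME|54=1|38=100|44=10.25|".ljust(234, b"0")
capture = b"".join(struct.pack("<QQ", i, 1_700_000_000 + i) + msg
                   for i in range(10_000))

compressed = zlib.compress(capture, level=6)
ratio = len(capture) / len(compressed)
```

Even a generic dictionary compressor gets a large ratio here, which is why the "before compression" caveat in the storage estimate matters.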

oblio · 3 weeks ago
Why? They're not streaming 4k video, it's either text protocol or efficient binary protocols.
w23j · 3 weeks ago
I would also really like to know that!

It generally seems to be a thing in trading: https://databento.com/pcaps

There is also this (though this page does not specify what pcap means): https://www.lseg.com/en/data-analytics/market-data/data-feed...

Look up Corvil devices by Pico.

Commonly used in finance.

https://www.pico.net/corvil-analytics/

Typically not a literal pcap. Not just Wireshark running persistently everywhere.

There are systems you can buy (eg by Pico) that you mirror all traffic to and they store it, index it, and have pre-configured parsers for a lot of protocols to make querying easier.

Think Splunk/ELK for network traffic by packet.

Except it is literally "pcap", as they capture all packets at layer 3. I don't know the exact specifications of Pico appliances, but it would not surprise me if they're running Linux + libpcap + some sort of timeseries DB.
Well, probably, but I meant it's not typically someone running tcpdump everywhere and analyzing with Wireshark by hand; rather, it's systems configured to do this at scale.
I don't think that's what anyone was assuming. A "pcap" is a file format for serialized network packets, not a particular application that generates them.
The Corvil devices used by Pico have IME largely been replaced by Arista 7130 Metamux platforms at the capture “edge”
Which is great for the companies that have made the switch because those corvils were truly terrible.
Looks like tnlnbn already answered, but the other benefit of having a raw network capture is that it is often performed on devices (Pico and Exablaze, just to name two) that provide very precise timestamping on a packet-by-packet basis, typically as some additional bytes prepended to the header.

Most modern trading systems performing competitive high frequency or event trades have performance thresholds in the tens of nanos, and the only place to land at that sort of precision is running analysis on a stable hardware clock.

Loic · 3 weeks ago
I suppose Pre-Calculated Aggregated Positions, but I am not an expert in the field.
Looking at the order messages sent to and received from another trading system was not uncommon when I worked in that neck of the woods.
The crypto firms are moving fast and breaking things. No need for that kind of safety shit, right? Would slow things down. Reminds me of Boeing.
So is this capture used to reconstruct FIX messages?
Yeah, FIX or whatever proprietary binary fixed-length protocols (OUCH or BOE for example) the venue uses for order instructions.

Some firms will also capture market data (ITCH, PITCH, Pillar Integrated) at the edge of the network at a few different cross connects to help evaluate performance of the exchange’s edge switches or core network.

phire · 3 weeks ago
Fun fact, centralized crypto exchanges don't use crypto internally, it's simply too slow.

As a contractor, I helped do some auditing on one crypto exchange. At least they used a proper double-entry ledger for tracking internal transactions (built on top of an SQL database), so it stayed consistent with itself (though accounts would sometimes go negative, which was a problem).

The main problem is that the internal ledger simply wasn't reconciled with the dozens of external blockchains, and problems crept in all the time.

oblio · 3 weeks ago
> Fun fact, centralized crypto exchanges don't use crypto internally, it's simply too slow.

I know you're not arguing in their favor, just describing a reality, but the irony of that phrase is through the roof :-)))

Especially the "centralized crypto".

phire · 3 weeks ago
Yeah, that fact alone goes a long way to proving there is no technical merit to cryptocurrencies.

The reason they are now called "centralised crypto exchanges" is that "decentralised crypto exchanges" now exist, where trades do actually happen on a public blockchain. Though a large chunk of those are "fake": they look like a decentralised exchange, but there is a central entity holding all the coins in central wallets, which can misplace them or even reverse trades.

You kind of get the worst of both worlds, as you are now vulnerable to front-running, they are slow, and the exchange can still rug pull you.

The legit decentralised exchanges are limited to trading only the tokens on a given blockchain (usually Ethereum), are even slower, and are still vulnerable to front-running. Plus, they spam those blockchains with loads of transactions, driving up transaction fees.

> JPMC proposed using crypto, internally, to consistently manage cash flow.

Yikes, how hard is it to just capture an immutable event log? Way cheaper than running crypto, even if only internally.

Harder than you'd think, given a couple of requirements, but there are off-the-shelf products like AWS's QLDB (and self-hosted alternatives). They Merkle-hash every entry with its predecessors; normalize entries so they can be consistently hashed and searched; store everything in an append-only log; then keep a searchable index on the log. So you can do bit-accurate audits going back to the first ledger entry if you want. No crypto, just common sense.
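A minimal sketch of the hash-chaining idea (this is an illustration of the technique, not the actual QLDB implementation or API):

```python
import hashlib
import json

class HashChainedLedger:
    """Append-only ledger where each entry commits to its predecessor's
    digest, so editing any past entry invalidates every later digest."""
    def __init__(self):
        self.entries = []
        self.tip = b"\x00" * 32  # genesis "previous hash"

    def append(self, record: dict) -> str:
        body = json.dumps(record, sort_keys=True)  # normalize for stable hashing
        digest = hashlib.sha256(self.tip + body.encode()).digest()
        self.entries.append((body, digest))
        self.tip = digest
        return digest.hex()

    def verify(self) -> bool:
        """Recompute the whole chain; any in-place tampering is detected."""
        prev = b"\x00" * 32
        for body, digest in self.entries:
            if hashlib.sha256(prev + body.encode()).digest() != digest:
                return False
            prev = digest
        return True
```

Note there is no mining or consensus anywhere: a plain hash chain plus access controls already gives you tamper-evidence for an internal ledger.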

Oddly enough, I worked at a well known fintech where I advocated for this product. We were already all-in on AWS so another service was no biggie. The entrenched opinion was "just keep using Postgres" and that audits and immutability were not requirements. In fact, editing ledger entries (!?!?!?) to fix mistakes was desirable.

> The entrenched opinion was "just keep using Postgres" and that audits and immutability were not requirements.

If you're just using PG as a convenient abstraction for a write-only event log, I'm not completely opposed; you'd want some strong controls in place ensuring the tables involved are indeed "insert only", and strong auditing around both any changes to that state and any attempts to change other state.

> In fact, editing ledger entries (!?!?!?) to fix mistakes was desirable.

But it -must- be write-only. If you really did have a bug fuck-up somewhere, you need a compensating event in the log to handle the fuck-up, and it had better have some sort of explanation to go with it.

If it's a serialization issue, the team had better be figuring out how they failed to follow whatever schema evolution pattern you've adopted and have full coverage on. But if that got to PROD without being caught on something like a write-only ledger, you probably have bigger issues with your testing process.
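The "insert only with strong controls" idea can be pushed into the database itself. A sketch using SQLite for portability (the table and trigger names are invented; the same pattern works in Postgres with triggers or revoked UPDATE/DELETE privileges):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ledger (
    id      INTEGER PRIMARY KEY,
    account TEXT,
    cents   INTEGER,
    memo    TEXT
);
-- Enforce 'insert only' inside the database, not just by convention:
CREATE TRIGGER no_update BEFORE UPDATE ON ledger
  BEGIN SELECT RAISE(ABORT, 'ledger is append-only'); END;
CREATE TRIGGER no_delete BEFORE DELETE ON ledger
  BEGIN SELECT RAISE(ABORT, 'ledger is append-only'); END;
""")

conn.execute("INSERT INTO ledger(account, cents, memo) "
             "VALUES ('alice', 500, 'deposit')")
# A mistake is corrected by a compensating entry, never an edit:
conn.execute("INSERT INTO ledger(account, cents, memo) "
             "VALUES ('alice', -500, 'reversal of entry 1')")
```

Any `UPDATE` or `DELETE` now fails at the database layer, which is a much stronger guarantee than "the application promises not to".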

Footnote to QLDB: AWS has deprecated QLDB[1]. They actually recommend using Postgres with pgAudit and a bunch of complexity around it[2]. I'm not sure how I feel about a misunderstanding of one's own offerings at this level.

[1] https://docs.aws.amazon.com/qldb/latest/developerguide/what-...

[2] https://aws.amazon.com/blogs/database/replace-amazon-qldb-wi...

Yeah. I'm surprised it didn't get enough uptake to succeed, especially among the regulated/auditable crowds, considering all the purpose built tech put into it.
I think you're forgetting how many businesses are powered by Excel spreadsheets. This solution seems too advanced and too auditable.
baq · 3 weeks ago
I'll just leave that here for no particular reason at all:

https://www.sec.gov/enforcement-litigation/whistleblower-pro...

Better hurry, Elon is gonna dismantle the SEC in about 45 days
Importantly, the SEC is empowered to give 10-30% of the money seized via whistleblowing to the whistleblower.
> Merkle hash every entry with its predecessors; normalize entries so they can be consistently hashed and searched; store everything in an append-only log;

Isn’t this how crypto coins work under the hood? There’s no actual encryption in crypto, just secure hashing.

Theoretically they even have a better security environment (since it is internal and they control users, code base and network) so the consensus mechanism may not even require BFT.
It's all merkle trees under the hood. I feel like the crypto coin stuff has overshadowed the useful bits.
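For reference, a Merkle root fits in a few lines; the useful bit is that changing any leaf changes the root, so one 32-byte value commits to the whole data set:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash each leaf, then pair-and-hash upward until one root remains.
    (Odd levels duplicate the last node, one common convention.)"""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

This is the "useful bit" independent of any coin: it enables O(log n) membership proofs and cheap synchronization checks between replicas.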
trog · 3 weeks ago
Is a Merkle tree needed or is good old basic double ledger accounting in a central database sufficient? If a key requirement is not a distributed ledger then it seems like a waste of time.
Onavo · 3 weeks ago
Merkle tree is to prevent tampering, not bad accounting practices
nly · 3 weeks ago
It only prevents tampering if the cost of generating hashes is extremely high.

Internally in your company you're not going to spend millions of $'s a year in GPU compute just to replace a database.

"Prevents tampering" lacks specificity. git is a blockchain that prevents tampering in some aspects, but you can still force push if you have that privilege. What is important is to understand what the guarantees are.
? If I use something like BLAKE3 (which is super fast and emits gobs of good bits) and encode a node with, say, 512 bits of the hash, you are claiming that somehow I am vulnerable to tampering because the hash function is fast? What is the probable number of attempts to forge a document D' that hashes to the very same hash? And if the document is structured per a standard format, you have even fewer degrees of freedom in forging a fake. So yes, a Merkle tree definitely can provide very strong guarantees against tampering.
Fwiw, increasing the BLAKE3 output size beyond 256 bits doesn't add security, because the internal "chaining values" are still 256 bits regardless of the final output length. But 256 bits of security should be enough for any practical purpose.
Good to know. But does that also mean that, e.g., splitting the full output into n 256-bit chunks would mean there is correlation between the chunks? (I always assumed one could grab any number of bits from anywhere in a cryptographic hash.)
You can take as many bytes from the output stream as you want, and they should all be indistinguishable from random to someone who can't guess the input. (Similar to how each of the bytes of a SHA-256 hash should appear independently random. I don't think that's a formal design goal in the SHA-2 spec, but in practice we'd be very surprised and worried if that property didn't hold.) But for example in the catastrophic case where someone found a collision in the default 256-bit BLAKE3 output, they would probably be able to construct colliding outputs of unlimited length with little additional effort.
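The extendable-output behavior is easy to see directly. BLAKE3 isn't in the Python standard library, so this sketch uses SHAKE-256 (a standard XOF with the same "read as many bytes as you like" property) as a stand-in:

```python
import hashlib

# SHAKE-256 produces an output stream of arbitrary length; longer requests
# are strict prefix-extensions of shorter ones, not independent hashes.
xof = hashlib.shake_256(b"some input")
first_32 = xof.hexdigest(32)  # first 256 bits of the output stream
first_64 = xof.hexdigest(64)  # first 512 bits

assert first_64.startswith(first_32)
```

This matches the point above: asking for more output bytes doesn't add security beyond the function's internal capacity; it just reads further along the same stream.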
Certificate transparency logs achieve tamper-resistance without expensive hashes.
Write-Once, Read Many drives also prevent tampering. Not everything needs crypto.
In a distributed setting where a peer may wish to join the party late and receive a non-forged copy, it's important. The crypto is there to stand in for an authority.
trog · 3 weeks ago
> In a distributed setting where a peer may wish to join the party late and receive a non-forged copy, it's important. The crypto is there to stand in for an authority.

Yeh, but that's kinda my point: if your primary use case is not "needs to be distributed" then there's almost never a benefit, because there is always a trusted authority, and the benefits of centralisation massively (IMO) outweigh any benefit you get from a blockchain approach.

100% agreed there. A central authority can just sign stuff. Merkle trees can still be very valuable for integrity and synchronization management, but burning a bunch of energy to bogo-search nonces is silly if the writer (or federated writers) can be cryptographic authorities.
We launched Fireproof earlier this month on HN. It’s a tamperproof Merkle CRDT in TypeScript, with an object storage backend for portability.

See our Show HN: https://news.ycombinator.com/item?id=42184362

We’ve seen interest from trading groups for edge collaboration, so multi-user apps can run on-site without cloud latency.

What disrespectful marketing. We don’t care that you use Merkle trees because that’s irrelevant. I guess I can add Fireproof to my big list of sketchy products to avoid. It’s embarrassing.
I figured the responses would be more interesting. Questions about CRDT guarantees etc.

Perhaps worth seeding the convo with a remark about finality.

While your intentions may have been around discussion, I don’t want to be marketed to when I’m trying to understand something unrelated. I have a business degree so I intimately understand that HN is technically free and it’s nice to get free eyeballs, but we are people too. I’m so much more than a credit card number, yet you’ve reduced me to a user acquisition in the most insulting way possible.

Perhaps instead of your ideas, it's worth seeding your own personal makeup with a firm statement of ethics?

Are you the kind of person who will hijack conversations to promote your product? Or do you have integrity?

Just purely out of concern for your business, do you have a cofounder who could handle marketing for you? If so, consider letting her have complete control over that function. It’s genuinely sad to see a founder squander goodwill on shitty marketing.

In founder mode, I pretty much only think about these data structures. So I am (admittedly) not that sensitive to how it comes across.

Spam would be raising the topic on unrelated posts. This is a context where I can find people who get it. The biggest single thing we need now is critical feedback on the tech from folks who understand the area. You’re right I probably should have raised the questions about mergability and finality without referencing other discussions.

Because I don’t want to spam, I didn’t link externally, just to conversation on HN. As a reader I often follow links like this because I’m here to learn about new projects and where the people who make them think they’ll be useful.

ps I emailed the address in your profile, I have a feeling you are right about something here and I want to explore.

> Spam would be raising the topic on unrelated posts.

I think you need to reread the conversation, because you did post your marketing comment while ignoring the context, making your comment unrelated.

If you want it distilled down from my perspective, it went something like this:

> Trog: Doubts about the necessity of Merkle trees. Looking for a conversation about the pros and cons of Merkle trees and double ledger accounting.

> You: Look at our product. Incidentally it uses Merkle trees, but I am not going to mention anything about their use. No mention of pros and cons of Merkle trees. No mention of double ledger accounting.

This doesn't address the question in any way except to note that you also use Merkle Trees. Do you reply to any comment mentioning TypeScript with a link to your Show HN post as well?
Sorry, but your post came off as blatant advertising. There is no need to link to your company announcement just because it benefits you.
Thanks y'all -- feedback taken. If I were saying it again I'd say something like:

Merkle proofs are rad b/c they build causal consistency into the protocol. But there are lots of ways to find agreement about the latest operation in distributed systems. I've built an engine using deterministic merge -- if anyone wants to help with lowest common ancestor algorithms it's all Apache/MIT.

While deterministic merge with an immutable storage medium is compelling, it doesn't solve the finality problem -- when is an offline peer too out-of-date to reconcile? This mirrors the transaction problem -- we all need to agree. This brings the question I'm curious about to the forefront: can a Merkle CRDT use a Calvin/Raft-like agreement protocol to provide strong finality guarantees and the ability to commit snapshots globally?

Apologies for the noise.

Crypto/blockchain makes it harder to have an incorrect state. If you fk up, you need to take down the whole operation and reverse everything back to the block in question. This ensures that everything is accounted for. On the other hand, if you fk up in a traditional ledger system, you might be tempted to keep things running and resolve "only" the affected accounts.
It's a question of business case. While ensuring everything is always accounted for correctly seems like a plus, if errors happen too often, potentially due to volume, it sometimes makes more business sense to handle them while running rather than costing the business millions per minute with a pause.
It's mostly a different approach to "editing" a transaction.

With a blockchain, you simply go back, "fork", apply a fixed transaction, and replay all the rest. The difference is that you've got a ledger that's clearly a fork because of cryptographic signing.

With a traditional ledger, you fix the wrong transaction in place. You could also cryptographically sign them, and you could make those signatures depend on previous state, where you basically get two "blockchains".

Distributed trust mechanisms, usually used with crypto and blockchain, only matter when you want to keep the entire ledger public and decentralized (as in, allow untrusted parties to modify it).

> With a traditional ledger, you fix the wrong transaction in place.

No you don’t. You reverse out the old transaction by posting journal lines for the negation. And in the same transactions you include the proper booking of the balance movements.

You never edit old transactions. It’s always the addition of new transactions so you can go back and see what was corrected.
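The reversal pattern described above can be sketched as an append-only journal where a correction is a new, balanced transaction containing both the negation of the old lines and the proper booking. The account names are hypothetical:

```python
# Append-only journal: corrections are new entries, never edits.
journal = []

def post(lines, memo):
    """Post a balanced set of journal lines (account, amount)."""
    assert sum(amount for _, amount in lines) == 0, "lines must balance"
    journal.append({"memo": memo, "lines": lines})

# Original (wrong) booking: $100 moved to the wrong account.
post([("cash", -100), ("acct:wrong", +100)], "payment, misbooked")

# Correction: negate the old lines AND book the proper movement in the
# same transaction, so the audit trail shows exactly what was corrected.
post([("acct:wrong", -100), ("cash", +100),   # reversal
      ("cash", -100), ("acct:right", +100)],  # proper booking
     "correction of misbooked payment")

def balance(account):
    return sum(a for e in journal for acct, a in e["lines"] if acct == account)

assert balance("acct:wrong") == 0
assert balance("acct:right") == 100
```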

Right, thanks for the correction: I wanted to highlight the need for "replaying" all the other transactions with a blockchain.
In git terms, it's like `revert` Vs `rebase`.
> With a traditional ledger, you fix the wrong transaction in place.

That's not how accounting works. You post a debit/credit note.

> With a blockchain, you simply go back, "fork", apply a fixed transaction, and replay all the rest.

You're handwaving away a LOT of complexity there. How are users supposed to trust that you only fixed the transaction at the point of fork, and didn't alter the other transactions in the replay?

My comment was made in a particular context. If you can go back, it's likely a centralized blockchain, and users are pretty much dependent on trusting you to run it fairly anyway.

With a proper distributed blockchain, forks survive only when there is enough trust between participating parties. And you avoid "editing" past transactions, but instead add "corrective" transactions on top.

If it's for internal use, why not just use a normal append-only log? X amount transferred from account y to account z. A three-column CSV oughta do it.
sneak:
Any time your proposal entails a “why not just”, it is almost certainly underestimating the mental abilities of the people and teams who implemented it.

A good option is “what would happen if we” instead of anything involving the word “just”.

“Just” usually implies a lack of understanding of the problem space in question. If someone says “solution X” was considered because of these factors which lead to these tradeoffs however since then fundamental assumption Y has changed which allows this new solution then it’s very interesting.
Sure. When I ask "why don't we just" I'm suggesting that the engineering solutions on the table sound over-engineered to the task, and I'm asking why we aren't opting for a straightforward, obvious, simple solution. Sometimes the answer is legitimate complexity. Equally as often, especially with less experienced engineers, the answer is that they started running with a shiny and didn't back up and say "why don't we just..." themselves.
Counterfactuals strike me as even less useful than underestimating competency would be. Surely basic double-entry accounting (necessarily implying the use of ledgers) should be considered table stakes for fintech competency.
Lots of threads on this here, most recently https://news.ycombinator.com/item?id=42038139#42038572 . I think this example is perfect, with the "oughta do it"
That's literally a ledger; it only shows where the money went, not "why" it moved.

Double entry with strong checks that ensure it always balances fixes this.
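A sketch of the difference: the same three-column log becomes self-checking once each transfer is posted as a debit and a credit that must net to zero across all accounts (toy data):

```python
import csv
import io

# The three-column CSV log: amount, from, to. It records where money went...
log = "100,alice,bob\n40,bob,carol\n"
rows = list(csv.reader(io.StringIO(log)))

# ...but double entry makes the same data self-checking: every transfer
# becomes a debit and a credit across a dictionary of account balances.
balances = {}
for amount, src, dst in rows:
    balances[src] = balances.get(src, 0) - int(amount)
    balances[dst] = balances.get(dst, 0) + int(amount)

# Invariant: the system as a whole nets to zero, so a dropped or
# double-applied leg shows up immediately.
assert sum(balances.values()) == 0
assert balances == {"alice": -100, "bob": 60, "carol": 40}
```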

> I hypothesized, given how shitty the codebase was, that they must be tracking this stuff poorly.

That is like half of the plot of Office Space

This sounds like a situation that I know about at the place identified by name in your comment. It took months to track down the issue.
Synapse says that it was actually the bank (Evolve) that made the accounting mistakes, including missing transactions, debits that weren't reported, sending in-flight transactions to Mercury while debiting Synapse incorrectly, etc.

https://lex.substack.com/p/podcast-what-really-happened-at-s...

Thanks for posting this, I will definitely listen to it.

While I haven't listened yet, one thing I don't really buy when it comes to blaming Evolve is that it should fundamentally be Synapse's responsibility to do reconciliation. This is what completely baffled me when I first worked with another BaaS company - they weren't doing any reconciliation of their ledgered accounts against the underlying FBO balances at the partner bank! This was insane to me, and it sounds like Synapse didn't do it either.

So even if Evolve did make accounting mistakes and have missing transactions, Synapse should have caught this much earlier by having regular reconciliations and audits.
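A minimal sketch of what such a reconciliation might look like, assuming the fintech can query both its own ledger and the bank's reported FBO balance. All names and amounts are illustrative:

```python
# Hypothetical daily reconciliation: the sum of every customer balance in
# the fintech's ledger must equal the cash actually sitting in the FBO
# account(s) at the partner bank. Amounts are in cents; names are invented.
customer_balances = {"cust_001": 500_000, "cust_002": 1_250, "cust_003": 98_025}
fbo_balance_cents = 500_000 + 1_250 + 98_025  # as reported by the bank

ledger_total = sum(customer_balances.values())
discrepancy = ledger_total - fbo_balance_cents

if discrepancy != 0:
    # In a real system this would page someone and freeze money movement.
    raise RuntimeError(f"ledger and FBO differ by {discrepancy} cents")
```

The whole point is that this check runs on a schedule against the bank's numbers, not the fintech's own, so a drifting ledger is caught in days rather than at bankruptcy.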

They claim they did; Evolve kept putting them off until they ran out of money.

There's a full transcript (with some images) below the player btw.

Rambling interview. As best as I can tell Synapse said there were technical issues with Evolve the bank.

Meanwhile, this article says Synapse withdrew the end-user funds from Evolve. Mr. Holmes of Evolve said the bank "transferred all end user funds" to other banks at the request of Synapse, but declined to identify them.

https://www.nytimes.com/2024/07/09/business/synapse-bankrupt...

I'm sure the spokesperson for Evolve who then says "It's complicated," declining to elaborate further, is fully trustworthy and not eliding any important details.
Wise also recently switched their US bank provider from Evolve to Community Federal Savings Bank. Maybe they had similar issues?
Aspos:
I see no reason why CFSB would be in any way different from Evolve; they are just not caught up in the mess yet.

Synapse's problem was fundamental, and it stems from the same mistake the OP is making: never, ever build your own homegrown ledger if you can avoid it.

You can't avoid this. They are either your clients or the bank's clients. And no bank will take on the burden of accounting for every $0.20 transaction for you, spending its own computing power. It's just quite an expensive thing to do. That is why banks often separate the main ledger and retail ledger[s], each system tuned for a different performance profile.
Aspos:
One should not build their own cryptography, one should not build their own ledger, that's what I am saying.
I've seen a lot of ledgers. Crypto is much more complicated.

At the end of the day you either provide the full stack or just do UI/marketing. This is the good old vertical integration dilemma.

From a cursory look at how it describes itself (BaaS, etc), Evolve is hardly a "bank" in the traditional sense of the word.
Sankaet is full of shit.
I had money disappear from my HSBC account. As in, the double entries didn't match by a small amount (it was a payment between two of my accounts that didn't match up, which I couldn't trivially reconcile in the books). I pursued this for a while but they never even properly acknowledged it let alone fix it.

I had my unfounded suspicion it was some internal subtle theft going on, but incompetence is probably a better explanation.

If you live in a developed country it should be sufficient to ask them to account for it, with a note that a formal complaint will be sent to the relevant authorities if not dealt with in a timely manner.

That stuff like this is kept in order is the foundation of capitalist societies, and it is taken quite seriously.

You'd think so wouldn't you? But alas, the effort required to solve this was at least more than I was willing to make.
If in the US, the CFPB would handle this for you.
Funnily enough, that's an agency Musk wants to gut: https://www.politico.com/live-updates/2024/11/27/congress/de...
pera:
I had a similar situation with Santander many years ago: it was a small amount and happened when I was closing my account. The bank manager couldn't explain it, and escalating the problem was a pain - especially because I was about to move to another country and had more urgent things to do.

I wonder how common issues like these are...

I think it's quite common, it's just that people do not notice these things.

I also had it happen one time, the bank eventually figured it out and fixed some error on their part.

lores:
You're likely correct it was theft. I was told by a CTO there that topping up accounts with bank money where it has been hacked away was daily routine and cheaper than fixing the systems. Incompetence would not manifest on routine operations like this.
> I had my unfounded suspicion it was some internal subtle theft going on

Had you watched Office Space recently?

I’m skittish about real banking institutions as well. Vanguard for example outsourced a bunch of their dev work to India a few years ago. Had a friend that worked as a sysadmin for BoA. They were required to keep certain logging for 7 years but he would just delete it anyway when disks were starting to get full.
But the fundamental difference is that the regulatory structures are in place to recover your money if a bank loses it. That's not the case with fintech middlemen. Take the Synapse case:

* End customers are really a customer of Yotta, a (silly IMO) fintech where interest was essentially pooled into a sweepstakes prize.

* Yotta was a customer of Synapse - they used Synapse BaaS APIs to open customer accounts (again, these accounts were really just entries in Synapse's ledger, and the underlying funds were supposed to be stored in an FBO account at Evolve).

* Synapse partnered with Evolve, who is the FDIC insured bank.

Synapse went bankrupt, and Yotta customers are finding out they can't access their money. But the bankruptcy court is at a loss as to really what to do. FDIC isn't getting involved, because as far as they can tell, Evolve hasn't failed. Synapse is basically out of the picture at this point as they are bankrupt and there isn't even enough money left to do a real audit, and Yotta is suing Evolve alleging they lost customer funds. But, in the meantime, Yotta customers are SOL.

If you had a direct relationship with an FDIC-insured bank, and they lost your money, there would be a much clearer path for the FDIC to get involved and make you whole (up to $250k).

There are regulatory structures in place for if your bank goes insolvent. AFAIK there is no regulatory mechanism for "I think the bank owes me $1MM; they think I am not a customer". That's just a lawsuit.
itake:
FDIC would only get involved if the bank was insolvent?

If your bank and you have a disagreement over how much money should be in your account, then FDIC wouldn't be involved?

Banking regulators definitely would get involved, but you'd have to do some research on who you'd complain to first. For example, with national banks, you would first file a complaint with the Office of the Comptroller of the Currency: https://www.consumer-action.org/links/articles/office_of_com...

But, in all cases, there is a clear process to ensure no money goes missing, either through fraud, mistakes or insolvency. Banks require the blessings of their regulators to operate, so they are highly incentivized to ensure all of their accounting is accurate. With fintechs no such regulatory framework exists (at least not yet).

Yotta and its customers don't have a relationship with the bank, though.

This case would be like the FDIC getting involved because, say, Robinhood or Stripe or Shopify or any other SaaS app went bankrupt and their customers are mad they lost money.

How about Wealthfront Cash accounts? Wealthfront provides me a statement that shows how my deposited money is distributed among its FDIC-insured partner banks, and each transfer they do to and from one of those partner banks. Wealthfront does use a middleman, somewhat similar to how Yotta used Synapse as a middleman. But Wealthfront's middleman is FDIC insured: Green Dot Bank.
Wealthfront is a broker iirc, so you have some private insurance to protect you in the event of Wealthfront becoming insolvent.

The difference between them and some bullshit thing like Yotta is you are the customer of record for the account. The bullshit aspect of Wealthfront is they front real services with automated investment services. Yotta was pooling customer funds at some other bullshit fintech who was then putting those funds (or not) into one big account.

Personally, handling cash is an old business and I’m really conservative about who handles mine. Innovation is risk, especially when the money behind it is focused on eliminating accountability. Yotta should have been illegal. Keep accounting boring.

Wealthfront has multiple offerings. Wealthfront investment accounts are for stocks. Wealthfront Cash accounts are for cash. I was talking about Wealthfront Cash accounts, which don't have automated investment services, and I don't think involve a broker.
Some are better than others at bookkeeping; however, the FDIC only insures against risks at the bank it regulates. It doesn't regulate the risks at the fintech co, and it doesn't insure it.

There is always residual risk between the bank and you with the fintech company. That's what got Yotta in trouble: they basically outsourced the heavy lifting of managing ledgers to Synapse, which you as a customer have no control over.

For most people that risk is not worth losing their already modest savings over; that is why banks are regulated and the FDIC exists, after all.

tw04:
> They were required to keep certain logging for 7 years but he would just delete it anyway when disks were starting to get full.

I’m highly skeptical of this claim. Every bank I’ve worked with adheres to their records requirements like it’s life or death (because it kind of is for the bank).

Tell your friend he’s exposing himself to hard prison time if he’s not just making up a story. If his boss tells him that they don’t have budget to retain the logs he should be whistle blowing, not violating federal banking laws to save what is a rounding error in their IT budget.

Coincidentally, I wrote something about this yesterday [1], but the gist of my take summed up is that the nature of accounting-oriented data models doesn't help when dealing with multiple FBO accounts.

The main problem is that accounting defaults to full fungibility of monetary amounts represented in a ledger, which has the effect of losing track of the precise mapping between assets and liabilities, so you end up in a position where you simply cannot tell precisely to a bank _who_ are the actual customers they owe money to.

[1] https://www.formance.com/blog/engineering/warehousing-promis...

I like Cory Doctorow's saying: "When you hear the term 'fintech,' think 'unlicensed bank.'"
"Fintechs also often put out the fake promise that deposits are FDIC insured, but this only protects you if the underlying bank goes belly up, not if the fintech loses track of your money".

Would you count Wealthfront as a fintech? I was finding their marketing compelling, but this thread makes me think twice.

There is a pretty fundamental difference, and it’s that Wealthfront (and M1, Robinhood, Fidelity, etc) are registered broker-dealers. Broker-dealers are regulated just as stringently as banks, but by the SEC and FINRA as opposed to the Fed. Broker-dealers have been running passthrough FDIC programs for decades, and in a lot of ways have more stringent regulations than banks. The most notable is that they are forced to segregate assets (can’t put client assets on the balance sheet, they have to custody them separately), which is the ultimate way banks fail and need the FDIC to bail them out. Source: used to work in broker-dealer auditing
dmoy:
Yes, it's the same basic principle going on at Wealthfront/etc.

It's possible (probable?) that they have better accounting controls. But I personally wouldn't keep anything above SIPC limits at Wealthfront (or any near competitor like Robinhood, M1, etc). And I'd be keeping records on my own.

And I'd make peace with the fact that SIPC resolution is a completely different ballgame from FDIC turnaround for assets held directly at an insured bank (which is like single business day don't-even-notice fast). I.e. not use it as the sole source of emergency funds, have months of expenses at a different institution, etc.

dmoy:
> same basic principle

Well yes and no - Synapse's pass-through banking wasn't covered by SIPC, and neither would Wealthfront's comparable product be. But keeping it just in a standard Wealthfront (or even Synapse) sweep account, with no underlying banking shenanigans happening, is different from SIPC's perspective.

Just keeping stocks (up to $500k) or sweep (up to $250k) at a SIPC broker is probably okay, even if it's a new fintech. Fooling around with their weird passthrough stuff, less so.

In addition to a ledger, fintechs need a reconciliation system to ensure the ledger is correct. Does the card processor audit files match your ledger? Does your ACH and check processing systems match the ledger? What about external money movements at the sponsor bank. Are they recording in the ledger?
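A transaction-level reconciliation of the kind described might look something like this sketch, matching a processor's settlement records against the internal ledger by transaction id. The ids and amounts are invented:

```python
# Sketch: match a card processor's settlement file against the internal
# ledger by transaction id, flagging anything present on only one side
# or booked with a different amount. Values are in cents.
processor_file = {"tx1": 1999, "tx2": 550, "tx4": 1200}
ledger_entries = {"tx1": 1999, "tx2": 550, "tx3": 300}

missing_from_ledger = processor_file.keys() - ledger_entries.keys()
missing_from_processor = ledger_entries.keys() - processor_file.keys()
amount_mismatches = {tx for tx in processor_file.keys() & ledger_entries.keys()
                     if processor_file[tx] != ledger_entries[tx]}

# Every bucket here is an exception a human (or workflow) must resolve.
assert missing_from_ledger == {"tx4"}
assert missing_from_processor == {"tx3"}
assert amount_mismatches == set()
```

The same matching has to run against each external system named above: card processor files, ACH and check processing, and money movements at the sponsor bank.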
ajuc:
> In Synapse's case, their ledger said the total amount of all of their individual customer balances ended up being much more than the actual funds held in the underlying FBO accounts. Lots of folks are assuming fraud but I'm willing to put money that it was just a shitty, buggy ledger.

Bugs are as likely to show more money as less money than there really is. But bugs in production will almost always show more :)

e40:
I had been debating the merits of using Flourish, but I'm sticking with SNOXX on Schwab. Same rate, and I think SNOXX has to be safer, right? Even with the Flourish FDIC guarantee, as others have pointed out, it only covers the underlying bank, not Flourish itself.
In Synapse's case, their ledger said the total amount of all of their individual customer balances ended up being much more than the actual funds held in the underlying FBO accounts. Lots of folks are assuming fraud but I'm willing to put money that it was just a shitty, buggy ledger.

If there was no malfeasance then no money would be gone. The totals would add up; they just wouldn't know who was owed what. Since the totals don't add up, someone got the extra.

    > Fintechs also often put out the fake promise that deposits are FDIC insured
Does this still happen?
Many fintechs are not licensed to hold funds and work with bank partners who hold your actual funds. That allows them to say they're insured because they're not co-mingled with the corporate funds in the event of insolvency. This doesn't stop them from making accounting errors.
zie:
The FDIC said you can't do this anymore starting Jan 1, 2025. So I expect it to stop in about 30 days. The FDIC will probably find a few laggards and throw some fines at them, and the process will then probably completely stop.
The problem is the discrepancy between what the fintech means when they say FDIC insured, and what the customer hears when they're told FDIC insured. The customer (erroneously) assumes it means that if the fintech or anyone else has problems, the customer is covered up to the $250k FDIC limit. What the fintech means is that there's someone they're partnered with that is a bank and is FDIC covered.

How the money is deposited into the bank is up for interpretation. If the fintech is being dishonest, they have one bank account at a bank, and all of the customers' money goes into that one shared account. They're not technically lying - the money is FDIC insured. Unfortunately for the customers, that's not the same as each of them being FDIC insured if the fintech goes under. The FDIC doesn't seem to want to clarify this issue either, which is a problem.
The fact that practically all funding most of the world runs on these days is just a bunch of variables in some shitty program never stops being weird to think about. All it takes to create or destroy trillions is one (or maybe a few) CPU ops.

It really stretches the belief into fiat money to the absolute limit.

Dade: This is every financial transaction Ellingson conducts, yeah? From million dollar deals to the ten bucks some guy pays for gas.

Kate: The worm eats a few cents from each transaction.

Dade: And no one's caught it because the money isn't really gone. It's just data being shifted around.

> Lots of folks are assuming fraud but I'm willing to put money that it was just a shitty, buggy ledger.

I'm not sure there's much difference. Intent only matters so much.

I mean... Fraud is defined by intent.

You can argue negligence over mistake. But fraud definitely requires intent.

I guess my point is it's as harmful as fraud regardless if we can throw someone in prison.
Oh, for sure. But treatment/remediation will heavily change between the cases. Right?
> In Synapse's case, their ledger said the total amount of all of their individual customer balances ended up being much more than the actual funds held in the underlying FBO accounts.

When the banks do this it's called "fractional reserve banking", and they sell it as a good thing. :)

I’m constantly amazed by how much the crypto community thinks they understand fractional reserve banking while getting it so completely wrong.

In fractional reserve banking, money that is loaned out is accounted for as liabilities. These liabilities subtract from the overall balance stored (reserved) at the bank. The bank is not printing new money, no matter how many times this idea gets repeated by people who are, ironically, pumping crypto coins that were printed out of thin air.

I think it’s incredible that cryptocurrencies were literally manifested out of bits, but the same people try to criticize banks for doing this same thing (which they don’t).

> The bank is not printing money new money, no matter how many times this idea gets repeated by people who are, ironically, pumping crypto coins that were printed out of thin air.

It is now widely accepted that bank lending produces new money[1][2]

[1] https://www.bankofengland.co.uk/-/media/boe/files/quarterly-...

[2] https://www.youtube.com/watch?v=K3lP3BhvnSo

There's an inordinate amount of nonsense being espoused in this thread, when the answer is in that first link. I can only assume it's the miseducation that economics textbooks perpetuate.
Yes. That's how it was taught to me years ago, and that's how it's understood in the banking industry as well, including the private sector.

From a large eurozone bank : https://group.bnpparibas/en/news/money-creation-work

The "liabilities" aren't subtracted from the deposit amount when counted as M1 supply. (Actually loans are accounted for as assets and deposits are liabilities, but that's beside the point).

If customer A deposits $100 in cash, and customer B borrows $100 from the bank and deposits it back in the bank, M1 goes up because there are now two checking accounts with $100 in it. That the bank's internal bookkeeping balances doesn't change the fact that the Fed considers that more money exists.
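The example above, worked through as a toy balance sheet (the account names are illustrative):

```python
# Worked version of the example: A deposits $100 cash; B borrows $100
# and redeposits it. The bank's books balance, but M1 (checking deposits)
# now exceeds M0 (the cash that actually entered the system).
bank = {"assets": {"cash": 100, "loan_to_B": 100},
        "liabilities": {"deposit_A": 100, "deposit_B": 100}}

m0 = bank["assets"]["cash"]             # base money: the $100 in cash
m1 = sum(bank["liabilities"].values())  # checking deposits: $200

# The internal bookkeeping balances...
assert sum(bank["assets"].values()) == sum(bank["liabilities"].values())
# ...yet the Fed counts more M1 than the cash that exists.
assert m0 == 100 and m1 == 200
```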

> That the bank's internal bookkeeping balances doesn't change the fact that the Fed considers that more money exists.

The Fed considers that more M1 exists and the same amount of M0 exists. Both are considered monetary aggregates, but M0 is the "money" the bank needs to worry about to stay solvent, and it can't "print" that.

Whilst it's semantically correct to refer to both M1 and M0 as money, it's pretty clear that it's wrong for people to elide the two to insinuate that banks are printing themselves balances out of thin air like token issuers or insolvent companies that screwed up their customer balance calculations, which is what the OP was covering.

And the Fed wouldn't consider more money to exist if the bank's internal bookkeeping didn't balance...

I agree. The main point is that if B knows that they don't have to repay the $100 until 10 years in the future, then for the 10 next years everyone can pretend there are $200 in total.
> In fractional reserve banking, money that is loaned out is accounted for as liabilities.

Yes, that is how a fractional reserve banking works. But that is not how the current banking system works.

* https://www.stlouisfed.org/publications/page-one-economics/2...

* https://www.pragcap.com/r-i-p-the-money-multiplier/

Banks do not lend out deposits. This was called the "Old View" by Tobin in 1963:

* https://elischolar.library.yale.edu/cowles-discussion-paper-...

The Bank of England has a good explainer on how money is created:

* https://www.bankofengland.co.uk/quarterly-bulletin/2014/q1/m...

See also Cullen Roche:

* https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1905625

* https://rationalreminder.ca/podcast/132

Money that is loaned out is still accounted for as liabilities.

Sure, those liabilities are accounted for in an eventually consistent manner, by reconciling imbalances on interbank lending markets at the end of the day with the government topping up any systemic shortfall, rather than by counting out deposit coins in the vault.

But that's fundamentally much closer to the "Old View" than to the OP's claim about fractional reserve being like an FBO inflating customer deposits by failing to track trades properly. All the credit extended by the bank is accounted for, and all of it that isn't backed by reserves is backed by the bank's obligations to someone else.

> Money that is loaned out is still accounted for as liabilities.

To be clear:

* Money is "loaned out" in the sense that a bank credits your account.

* Money is not "loaned out" in the sense that what goes into your account came out of someone else's account. Rather, it was created 'out of thin air' by the bank without reference to anyone else's deposits.

To be clear:

I am familiar with your links, for quite some time actually.

I never said that the money came out of someone else's account.

What I did say was that it was accounted for as liabilities. It's the bank's liability to the loanee (or their bank), which the bank absolutely can be obliged to pay with reserves or cold hard cash (and it can only get these from borrowing, selling assets or customers paying cash into their account).

And so banks lend it out attached to a slightly larger liability repayable to them, and they keep track, because if they don't, all this money they're "printing" represents losses in terms of obligations they can't "print" their way out of. That's quite different from the ledger screwup it's being compared with, or indeed from people creating tokens (not backed by debt or anything else) out of thin air to sell to other people.

You're going to have to show the balance sheet movements because your wordy description is very woolly.
No. Banks classify outstanding loans as assets, not liabilities.
Yes, the loan is the bank's asset. The deposit created aka "the money" is the bank's liability. I don't think we're in disagreement here.

A corollary of this is that contra popular suggestions otherwise, the accounts net to zero and the bank obtains no gain from "printing money", only from interest earned on repayments.

This is a good explanation. I've had to explain this topic a few times as well; it seems like it's one of those topics that is very misunderstood.

To just expand a bit, I believe some of the confusion around the printing of money comes from the way some economics reports are built. As a micro example, assume a 10% required reserve. If Alice deposits $100 and the bank lends $90 to Bob, Alice ($100 in deposits) + Bob ($90 in cash) think they have $190 in total.

This is mainly useful for economists to understand, study, and report on. However, when the reports get distributed to the public, it looks like the banks printed their own money, as we now see $190 on the report when there is only $100 of cash in our example system.

Whether the system should work on a fractional reserve is its own debate, but we need to know what it is to debate the merits and risks of the system.

And how does that work when the 'required reserve' is zero as it is now, and has been in the rest of the world since time immemorial?

Nobody deposits in a bank - it's just a retag of an existing deposit. The bank Debits a loan account with the amount owed, and Credits a deposit account with the advance. It's a simple balance sheet expansion in double-entry bookkeeping.

I'm really not sure why this myth persists given that central banks debunked the concept over a decade ago.

Loans create deposits, and those deposits are then converted into bank capital when a deposit holder buys bank capital bonds or equity.

[0]: https://www.bankofengland.co.uk/-/media/boe/files/quarterly-...

Most people deposit in a bank by transferring from another bank. There is more than one bank.
Now do the balance sheet journals for such a transfer. [0]

Then you'll see that for a bank to transfer to another bank the destination bank has to take over the deposit in the source bank (or swap that liability with another bank somewhere).

You have an infinite regress in your thinking.

[0]: https://new-wayland.com/blog/why-banks-pay-interest-on-depos...

In fractional reserve banking, the total deposits at a bank can be greater than the amount of physical money it holds. Since the rest of society is willing to accept bank deposits as an alternative to physical money, this is a form of printing money. Physical currency is not printed, but bank deposit currency (which is money, by de facto agreement) is.
>These liabilities subtract from the overall balance stored (reserved) at the bank. The bank is not printing money new money

Hi, this is factually incorrect and you should educate yourself before attempting any further condescending comments on Hacker News.

I just want the gold standard back.

It worked as an actual check on the money supply and, when implemented properly, was harder to manipulate.

dumah:
The US Government formerly fixed gold prices by statute and prohibited US citizens from owning or trading gold anywhere around the world.

The idea that such a system could function in today's world is strange to me.

First of all, I take offense to being thrown in as part of the crypto community, with which I have nothing to do, and for which I do not have much hope.

So now if you are unhappy with the monetary system you are automatically a crypto bro and can be dismissed?

Secondly, the problem with fractional reserve banking is as follows: Suppose Larry makes a deposit of one dollar, which the bank guarantees can be retrieved at any time. The bank loans this dollar to Catherine, who uses it to buy something from Steve. Now Steve has one dollar, which he deposits with the bank. The bank lends this dollar to Catherine2, who uses it to buy something from Steve2. And so on, up to CatherineN and SteveN.

Now, insofar as transactions can take place in the economy with bank IOUs, which are considered perfect money substitutes, the amount of money in the economy has been multiplied by a factor of N. Where before only Larry had a dollar (or a dollar IOU, which are supposedly the same), now Larry AND Steve, Steve2, up to SteveN all have a dollar IOU. This leads to an inflationary pressure.

Now it is true that upon the Catherines' repaying of the debt, these extra dollars will go away. However, in reality there is no such thing as negative dollars. The supply of money has been increased by the bank.

An objection could be raised that the Catherines' extra demand for money to pay off their debts will exactly offset the extra supply of money. This is nonsense! Everyone demands money all the time. If Catherine did not demand money to pay off her loan, she would demand money in order to satisfy her next most urgent want which could be satisfied by money. The increase in the demand for money is negligible.
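The chain in this comment, simulated naively (the names follow the comment; no reserve is held back, so each round re-lends the full dollar):

```python
# One physical dollar, re-lent N times. Each Steve ends up holding a
# $1 bank IOU, as does Larry, while only one physical dollar exists.
N = 5
iou_holders = ["Larry"]              # Larry's original deposit
for i in range(1, N + 1):
    # Catherine_i borrows the dollar and pays Steve_i, who redeposits it.
    iou_holders.append(f"Steve{i}")

physical_dollars = 1
dollar_ious = len(iou_holders)

assert dollar_ious == N + 1          # N+1 claims on 1 physical dollar
assert physical_dollars == 1
```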

  • nly
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
Your explanation of fractional reserve banking is somewhat correct, but missing the big picture

Licensed banks can and do write loans at any time without having any deposits to 'lend out'. In doing so they create both the loan (an asset) and a deposit (a liability) simultaneously from thin air. The books immediately balance.

The deposit created is then paid to the borrower and the liability vanishes. The bank is left with only the asset - the one that they created from thin air.

For short term liquidity a bank can always use the overnight lending facility at the central bank. Doing so just makes all their loans far less profitable as this is at a floating daily rate.

In reality the limit to which the money supply grows is not dictated by 'fractional reserves', but solely by interest-rate policy, the commercial viability of making loans, and demand in the economy.

Not quite. The deposit is credited to the borrower as an advance, and when spent it is transferred to the payee (or to the receiving bank if the payee is at another bank)

The liability can never vanish - balance sheets have to balance. Bank liabilities are what we call 'money'. Hence how you are 'in credit' at the bank.

And when we look at the bank assets which back those liabilities, we find that (say) 10% are government-printed money, and the remaining 90% were created by banks.
We don't. What we see is both of those are loans made.

Technically the commercial bank lends to the central bank. That's why they receive interest on it.

That's just a loan like all the other loans on the asset side. The difference is that the interest rate is set by the borrower not the lender.

Holding a deposit is just a different name for a particular type of loan.

Not really:

The loan will be accounted for in the loan book and deposit book at the local(!) bank level; if the money moves out of the bank, it has to go through the central-bank money circuit. On that level, the loan amount is _NOT_ created; that account can be "filled" only by incoming transactions from other banks (customer deposits!). That's the reason a bank needs deposits: to make payments possible, since the balance on the central-bank account is always smaller than the sum of all loans at the local bank level.

  • nly
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
You might want to read this paper from the Bank of England

https://www.bankofengland.co.uk/-/media/boe/files/quarterly-...

Choice quote from page 1:

"Money creation in practice differs from some popular misconceptions — banks do not act simply as intermediaries, lending out deposits that savers place with them, and nor do they ‘multiply up’ central bank money to create new loans and deposit"

The part you are talking about is illustrated in Figure 2.

The transfer of central bank reserves between banks doesn't change the fact that once a loan is written new money enters circulation.

Your mistake was saying Synapse merely did what banks do. Banks don't lose track of money when they increase the money supply.
My comment was meant as a tongue-in-cheek joke, with a dig at the banking system. It was not meant as a serious equivocation between what Synapse did and what banks do.
But banks do increase the money supply with fractional reserve banking. That is of course on purpose, and accounted for by the government.
The bank IS printing new money. You are ignoring the money multiplier effect where the money lent by bank 1 is deposited into bank 2, bank 2 lends 90% of that deposit, which is deposited into bank 3, ... repeating the process over and over.

With a 10% reserve requirement, a 1,000,000 USD deposit will result in up to 10 times that much money being lent out.

The formula is 1/r, where r is the reserve requirement.
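The multiplier above is a geometric series; a rough sketch (a hypothetical helper, assuming each bank re-lends exactly 1 − r of every deposit it receives):

```python
# Sketch of the money-multiplier arithmetic: each bank keeps the reserve
# fraction r and lends the rest, so total deposits across all banks
# converge to the initial deposit divided by r.

def total_created(initial_deposit: float, reserve_ratio: float, rounds: int = 200) -> float:
    """Sum of deposits across all banks after repeated re-lending."""
    deposits = 0.0
    flow = initial_deposit
    for _ in range(rounds):
        deposits += flow                 # flow is deposited at the next bank
        flow *= (1 - reserve_ratio)      # bank holds the reserve, lends the rest
    return deposits

print(round(total_created(1_000_000, 0.10)))  # 10000000, i.e. 1/r times the deposit
```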

  • neffy
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
That's not correct unfortunately, although it has been widely taught in economics textbooks, and you can blame Keynes for that. Keynes used that example to try to explain the process to parliament, and also to argue that the system didn't expand the deposit money supply over time. Ironically, even the data (in the Macmillan report) he supplied contradicted him. It's confusing as well, because the fundamental rules have changed over time.

Banks can lend up to an allowed multiple of their cash or equivalent reserves (gold-standard regulation), and in the Basel era are also regulated on the ratio of their capital reserves to their loans. This acts to stop hyperinflationary expansion, but there is a feedback loop between new deposits and new capital, so the system does still expand slowly over time. This may be beneficial.

In engineering terms, banks statistically multiplex asset cash with liability deposits, using the asset cash to solve FLP consensus issues that arise when deposits are transferred between banks. It's actually quite an elegant system.

  • ·
  • 3 weeks ago
  • ·
  • [ - ]
>Banks can lend up to an allowed multiple of their cash or equivalent reserves

And what is the current reserve requirement in the US? Zero.

https://www.federalreserve.gov/monetarypolicy/reservereq.htm

Edit: Whoops, someone beat me to it below.

The important part is:

> and in the Basel era are also regulated on the ratio of their capital reserves to their loans

Reducing the reserve ratio to zero doesn't mean that banks can create unlimited amounts of money out of thin air. It just means that regulation by capital requirements has now fully superseded regulation by reserve ratio.

In theory those capital requirements are a better and finer-grained regulatory tool, capturing the different risk of different classes of asset. In practice that can fail--for example, the SVB collapsed insolvent because it was permitted to value bonds above their fair market value if it claimed they'd be held to maturity. That failure was in the details though, not the general concept.

interestingly, the Fed's page on Reserve Requirements states:

    As announced on March 15, 2020, the Board reduced reserve requirement ratios to zero percent effective March 26, 2020.  This action eliminated reserve requirements for all depository institutions.

So in effect, the multiplier is infinity.

https://www.federalreserve.gov/monetarypolicy/reservereq.htm

I remember this. Have they ever rolled it back?
  • neffy
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
The Basel Capital rules dominate at the moment. If that ever gets rolled back... buy gold immediately.
Oh ok. So there's a difference between reserve requirements and capital requirements. Capital requirements are still in place under Basel III (the Basel Capital Rules): 4.5% common equity, among other requirements. https://en.m.wikipedia.org/wiki/Basel_III
Basel III also specifies liquidity requirements, which basically means banks need to hold sufficient loan assets in the government or central bank (generally bonds or reserves respectively), which act as a backstop if other banks stop lending to them (i.e. stop accepting deposit transfers without also being given an equivalent asset).
Not really infinity:

There are tons of balance-sheet metrics which have to be satisfied; in theory you are right, but in practice there are a lot of differences.

Clarifying question:

So for every $1 deposited, I can lend $0.90 but must hold $0.10 as my reserve?

It’s a bit more complicated than that.

At the point I make a loan, 2 things happen on my balance sheet: I have a new liability to you (the increased balance in your account), and I have a new asset (the loan that you’re expected to pay back). They cancel each other out and it therefore seems as if I’m creating money out of thin air.

However, the moment you actually use that money (eg to buy something), the money leaves the bank (unless the other account is also at this bank, but let’s keep it simple). Liabilities on the balance sheet shrink, so assets need to follow. That needs to come from reserves because the loan asset keeps its original value.

The reserve comes from the bank, not from you. Added layer here: banks can borrow money from each other or from central banks if their cash reserves run low.

Finally: it tends to be the case that the limit on lending is not the reserves, but on the capital constraints. Banks need to retain capital for each loan they make. This is weighed against the risk of these loans. For example: you could lend a lot more in mortgages than in business loans without collateral. Ask your favorite LLM to explain RWAs and Basel III for more.
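The two balance-sheet moves described above can be sketched with a toy (entirely hypothetical) `Bank` class: writing a loan creates a matching asset and liability, and the customer spending the money elsewhere drains reserves while the loan asset keeps its value.

```python
# Hypothetical sketch of loan creation on a bank balance sheet.

class Bank:
    def __init__(self, reserves: float):
        self.reserves = reserves     # asset: central-bank money
        self.loans = 0.0             # asset: loans expected to be repaid
        self.deposits = 0.0          # liability: customer balances

    def write_loan(self, amount: float) -> None:
        self.loans += amount         # new asset...
        self.deposits += amount      # ...and new liability: books still balance

    def customer_spends_elsewhere(self, amount: float) -> None:
        self.deposits -= amount      # liability leaves with the payment...
        self.reserves -= amount      # ...settled from reserves; the loan asset stays

    def assets(self) -> float:
        return self.reserves + self.loans

bank = Bank(reserves=100.0)
bank.write_loan(50.0)
bank.customer_spends_elsewhere(50.0)
print(bank.assets(), bank.deposits)  # 100.0 0.0: loan asset remains, reserves fell
```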

> However, the moment you actually use that money (eg to buy something), the money leaves the bank (unless the other account is also at this bank, but let’s keep it simple). Liabilities on the balance sheet shrink, so assets need to follow. That needs to come from reserves because the loan asset keeps its original value.

"Everything should be made as simple as possible but no simpler."

You're omitting the thing that causes the money to be created out of thin air. If the other account is at the same bank, now that customer has money in their account that didn't previously exist. And the same thing happens even if the money goes to a customer at another bank -- then that bank's customer has money in their account that didn't previously exist. Even if some reserves are transferred from one bank to another, the total reserves across the whole banking system haven't changed, but the total amount of money on deposit has. And the transfers into and out of the average bank are going to net to zero.

The created money gets destroyed when the loan is paid back, but the total amount of debt generally increases over time so the amount of debt-created money goes up over time as banks make new loans faster than borrowers pay them back.

Not correct:

Your loan is principal + interest; when the loan is created, the interest part is not created with it. The interest is what you have to pull in from someone else: the bank gives you loan X but asks for X + interest Y back. That's the reason there needs to be another fool somewhere else taking out a loan.

It's one of the main architectural choices of our money architecture :-D

Only the loan principal is destroyed when it's paid back. The interest goes to the bank, which then gets to spend it or distribute it to shareholders who then get to spend it etc.

Suppose a mechanic takes out a mortgage to buy a house. The bank uses the interest on the loan to pay part of the bank manager's salary. Then the bank manager pays the mechanic to fix his car. Nobody inherently has to take out another loan for the borrower to pay back the bank.

The main reason debt keeps going up is that housing prices keep getting less and less affordable, requiring people to take on more and more debt to buy a house or pay rent.

> The reserve comes from the bank, not from you. Added layer here: Banks can borrow money from each other or central banks if their cash reserves runs low.

Not correct: it can be both. On your central-bank account, you can receive money from other banks (lending on the interbank market) _or_ from customers (when they send money to your bank).
The bank could also sell the loan instead of borrowing if they are in need of capital.
> So for every $1 deposited, I can lend $0.90 but must hold $0.10 as my reserve?

The GP is completely wrong on how modern finance works. Banks do not lend out deposits. This was called the "Old View" by Tobin in 1963:

* https://elischolar.library.yale.edu/cowles-discussion-paper-...

The Bank of England has a good explainer on how money is created:

* https://www.bankofengland.co.uk/quarterly-bulletin/2014/q1/m...

See also Cullen Roche:

* https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1905625

* https://rationalreminder.ca/podcast/132

Partially untrue:

Yes, they do not need customer deposits to create loans and increase their balance sheet; it's just some guys like you and me putting the amount into the balance sheet and clicking save (simply put).

But they do need at least some customer deposits to make payments happen: if they had no deposits, their central-bank account would be empty, so none of their loans could actually leave the bank, since the transaction won't happen. (I'm talking from the perspective of the TARGET2 / ECB / EURO system.)

That is exactly what happens. The reserve ratio used to be 10%, same as your example. It is currently zero, lowered in 2020 during the pandemic. But banks still can't lend out more than deposits.
> The reserve ratio is currently zero, lowered in 2020 during pandemics.

I saw this during the pandemic, and it bewildered me how little coverage of it there was. How is this not going to cause another financial catastrophe? And if we're so sure it isn't, then what makes people think they understand economics so well, given that they clearly thought a minimum was necessary just a few years ago?

> I saw this during the pandemic, and it bewildered me how little coverage of it there was. How is this not going to cause another financial catastrophe?

The banks in Australia, Canada, etc have had zero reserve requirements for thirty years:

* https://en.wikipedia.org/wiki/Reserve_requirement#Countries_...

The US had reserve requirements leading up to the 2008 GFC, which started off with mortgages/loans, and yet those requirements didn't stop the disaster. Canada et al. did not have requirements, and yet they didn't have financial meltdowns of their own, only 'collateral damage' from what happened in the US.

Many central banks like the Bank of England don't even have a reserve requirement and rely on the bank rate to control it instead.

The equivalent for the USA would be the Federal Funds Rate, I suppose. The reserve requirement is just one tool among many.

Because what matters is _Capital_ requirements, which differ by the _risk_ of the loan. A bank's Capital is what limits their ability to lend. Reserve requirements are irrelevant in the modern banking system.
Technically not accurate:

They do lend out more than they _currently_ hold as deposits on their central-bank accounts; you have to account for "duration transformation". JPM has billions in loans and deposits, though most of the deposits may currently be "out of the house" (lent out). Now, JPM could certainly increase its balance sheet with another loan, if it still meets whatever balance-sheet restrictions apply and has enough money on its central-bank account. (Sure, if the loan is for a customer within the same institution, then there is no difference.)

Deposits≥Loans is a tautology since every time loans increase, so do deposits. It doesn't mean anything or provide any insight.
Even deposit liabilities matched by deposit assets in other banks are essentially inter-bank loans. That is, deposits=loans in all cases.
I deposit $20. Deposits: $20. Loans: $0. Cash: $20.
Fortunately, loans create deposits, so they are always in balance.
There's more to it than that; balances are exceeded by the sum of "assets held by the bank" and "assets owed to the bank".
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
@serbuvlad: “When the banks do this it's called "fractional reserve banking", and they sell it as a good thing. :)”

How dare you criticize our holy banking system /s

  • masto
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
One of the things it took a little time to wrap my head around when I started working at Google was trading off reliability or correctness for scaling.

I had previously built things like billing systems and small-scale OLTP web applications, where the concept of even asking the question of whether there could be any acceptable data loss or a nonzero error rate hadn't occurred to me. It was eye-opening to see, not so much that if you're doing millions of qps, some are going to fail, but more the difference in engineering attitude. Right this instant, there are probably thousands of people who opened their Gmail and it didn't load right or gave them a 500 error. Nobody is chasing down why that happened, because those users will just hit reload and go on with their day. Or from another perspective, if your storage has an impressive 99.99999% durability over a year, when you have two billion customers, 200 people had a really miserable day.

It was a jarring transition from investigating every error in the logs to getting used to everything being a little bit broken all the time and always considering the cost before trying to do something about it.
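The durability arithmetic in the comment above checks out; a quick sketch:

```python
# Quick check: 99.99999% durability across two billion customers still
# leaves on the order of 200 people losing data.

durability = 0.9999999
customers = 2_000_000_000
expected_losses = customers * (1 - durability)
print(round(expected_losses))  # 200
```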

Durability levels that poor aren’t state of the art any more.

The rule of thumb I’ve seen at most places (running at similar scale) is to target one data loss, fleet wide, per century.

That usually increases costs by << 10%, but you have to have someone that understands combinatorics design your data placement algorithms.

The copyset paper is a good place to start if you need to understand that stuff.

That sounds a lot better than some number of nines standing by itself.

99.(9)x percent durability is almost meaningless without a description of what the unit of data is, what a loss looks like. There's too many orders of magnitude between a chunky file having an error, a transaction having an error, a block having an error, a bit having an error...

This all makes a lot of sense, but I have seen this a lot in the opposite direction. Specifically, people from social media companies with very squishy ideas around failures coming into financial applications. Generally does not go well.
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
While I was at Google I was loaned out to the Android team to work on contact synchronizing. There was a problem that I was running into, but it was an extremely rare situation. I pulled aggregated data from production, and it looked like it would affect 0.01% of people. When I presented the solution and mentioned this error rate I was asked how those 200,000 Android users could remediate the situation. They couldn't, their contact sync would just be broken, so I was told to go back to the drawing board.

But yeah, it was the number that was humbling.

There are definitely areas where 4 nines are good enough, but there are just as many areas where they aren't.

This is the type of thing where it helps to hire the right people from the start. You hire a bunch of leetcode experts, but never stop to ask if, besides conjuring data structures and algos in their minds they actually can build the thing you want to build. If your people know how to build the thing, you don't need to sacrifice growth, it gets built right from the start.

Sometimes you need engineers that have some other type of education, maybe accounting, maybe finance, maybe biology. I always thought that the most important part of my career was understanding every industry I built for, deeply, and knowing experts in those areas so you could ask the really important questions. That is problem solving and engineering. The rest is programming/coding.

I have never come across a post on HN that was so scarily describing my current day to day and with a comment I agree with so wholeheartedly.

I’ve spent the majority of my career in tech with a finance angle to it, mostly sales and use tax compliance.

What I never fully appreciated was how much those accountants, controllers, and lawyers were rubbing off on me.

I was recently advising a pretty old startup on their ledgering system and was beyond appalled at what it looks like when a bunch of engineers with no finance or accounting background build an accounting system.

We don’t have to find magical accountant engineers either, it will wholly suffice if we sit actual accountants in with our engineering team during the design process.

After my design of their complete overhaul I had a friend who is a CPA completely review my work, we found a few holes in certain scenarios, but by and large we were good.

Money is a difficult engineering problem because with money comes all the human goofery that surrounds it.

This. Ledgers are domain knowledge. Anything else should be built on top, including high-load optimizations.
In some circles, there is the irritating tendency to believe that technology can solve every problem. Experts are eschewed because innovation is valued above all else.
  • tomgp
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
This reinforces for me the importance of domain knowledge in engineering leadership. I.e. if you work for a finance company you need to have a decent understanding of finance in order to make the correct technical decisions and tradeoffs; same goes for journalism, same goes for commerce, etc. Successful organisations I've worked for have always included domain-specific, non-technical questions in tech team interviews, whilst some of the most technically accomplished teams I've worked with have floundered through a lack of domain insight.
  • sotix
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
I’m a software engineer and a CPA, so I’ve got the domain knowledge. However, I’m not sure how to find a gig where I can apply that. It seems everywhere I look vastly prefers someone with twice my experience as a software engineer and zero domain knowledge over my years of experience in accounting and fewer years of experience as a software engineer. Any insight on how to utilize it effectively?
Unfortunately, the likely solution to this is to keep banging out work at wherever you can gain experience as a software engineer.

Modern hiring practices are too rigid and filter-friendly for you to likely appear as a strong candidate based on the fact you have good accounting experience on top of your growing software skills.

What will really help you though, is having friends who work at a bank in the software departments. It's almost always who you know in this world. You need to network, network, network!

You know, it's kind of funny, but it seems like most businesses are not interested in doing things "right". As doing things "right" is time consuming, hard, and cuts off avenues to "innovate". "That's not how accounting works" is like telling management to clean their room. No one wants to hear it.
As someone who worked on a bookkeeping system without being an accountant (or bookkeeper in any sense), I'd say your challenge is that it's very possible to learn enough to build a decent system, assuming your engineering knowledge is strong.

I don't say this to blow my own trumpet, only to say that the non-engineering leadership at the company in question were very invested in the product details and making sure I understood basic accounting principles.

That said, I went from that role to working in a billing system that was originally built by a team where leadership weren't invested in the details. The result was a lot of issues and frustration from Finance leadership. Storing monetary values in float was not the biggest issue, either.

That being said, maybe branch out of just looking at accounting/bookkeeping and market yourself as "I know how money works in software systems." Your skills are extremely transferrable and knowing the Finance expectations will make it easier to make good design choices further upstream.

Your ideal job for utilizing both would either be to work for one of the ERP providers or accounting software providers (Oracle, NetSuite, Workday, Xero, etc.) or to launch your own targeting a specific need.
I would network to get over the ATS barrier and focus on smaller companies.
  • prh8
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
I work at a company that wants to start fixing their accounting system...what stacks are you familiar with?
  • sotix
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
I’m a general backend dev. I’ve used python, rust, and TypeScript to varying degrees along with SQL and noSQL databases. I’m pretty comfortable learning new stacks I haven’t used before.
  • 392
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
Claude is familiar with all of them, and can describe each in terms of others without risking company code :-)
I may be able to help, I just sent you an email.
I agree with your sentiment here, but I think it helps to think of those domain specific questions as technical questions, just from a different technical domain. Finance is technical, mechanical engineering is technical, even sports management and sociology have big technical components too. I think having an expansive view of what technical competence is breeds the humility necessary to work across domains.
  • tomgp
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
Yes, sure, if it helps you to think in those terms I guess that makes sense. Being a bit reductive perhaps but I think what it comes down to is having an interest in the "why" as much as the "what" and the "how".
ever since i started working for an insurance company, i realized that understanding the insurance industry is far more difficult than understanding the codebases. if the codebase is acting weird, at least i can step through it with a debugger!
Reinsurance is a beast. I spent a good time cracking layers and trying to implement the logic with a team.
I don't know about that -- I've always understood that to be the role of the product manager, to have all the domain knowledge.

It's the PM's job to work with engineering to ensure that the requirements are correct, and that the built product meets those requirements. And in an agile setting, those conversations and verifications are happening every single sprint, so something can't go for too long without being caught.

If you don't have a PM, then sure I guess your engineering team had better have deep domain knowledge. But otherwise, no -- it's not engineering's responsibility. It's products's responsibility.

There should be a system analyst. Every ERP or core banking big name works this way. Requirements have to be processed and then passed for implementation. In an agile setup rinse and repeat. As a side effect the whole team gains the domain knowledge. Or part of it.
> There should be a system analyst.

Exactly, this what we try to advance our tech people to, someone with the combination of domain+systems knowledge.

It results in more independent+correct action which is much more powerful than having to specify significant details to less knowledgeable people.

Pardon me an old story... I never built a double entry accounting system but decades ago I did build a billing system for a internet/telcom startup that grew to a modest 8 figures revenue.

By accident and not knowing any better as a young dev, I ended up building the billing logic from day one, and for better and worse building it in two places in the system (on a consumer-facing billing webpage, and on a separate backed process that generated invoices and charged credit cards.)

It turned out to be remarkably hard to keep them in sync. We were constantly iterating trying to get traction as we burned down our capital, releasing new products and services, new ways of discounting and pricing (per use, per month, first X free, etc), features like masterpayer/subaccounts for corporate accounts, user-assignable cost centers, tax allocation to those cost centers with penny allocation, etc such that new wrinkles and corner cases would keep popping up causing the numbers on my two screens/methods not to match.

Being personally responsible for the billing, I would go over all the invoices by hand for a couple of days each month to ensure they matched before we charged the credit cards and mailed out printed invoices, as a final check to prevent mistakes. There was always/often some new problem I'd find affecting one or a small handful of customers, which I would then fix in the code before we billed. I never felt good letting go and not double-checking everything by hand.

I thought about refactoring the billing logic to occur in one place to eliminate these mismatches and my manual crosschecking, but after a lot of thought I realized I wasn't comfortable with a single codebase; I liked having two codebases, as it helped me catch my own errors. I then just made it easier and easier to automate the crosschecks between the two. The billing code was a little too gnarly to be proud of, but I was very proud of the outcome: how accurate our billing was, the lack of complaints, and the many near misses we avoided over the years. I do feel twinges of guilt for the complexity I left my successors, but I still don't really regret it.

After that experience, the motivation for double entry bookkeeping has always made a lot of sense to me. I had sort of reinvented it in my own hacky way with double logic billing code to prevent my mistakes from causing problems for my customers...

Billing, or anything else involving money, is so easy to get wrong.

The data team I ended up leading at a previous company, had an unfortunate habit of "losing" money - it wasn't real money being lost in transit elsewhere, but records of something we should charge a customer.

Or if the team wasn't losing revenue, it was double charging, etc. etc.

Took us 3 years of hard work to regain the trust of the business leaders.

No offense but this sounds like a nightmare. It also sounds like you did a fantastic job achieving accuracy despite the complexity of the system. That’s something to be proud of.
This is The Nightmare. Devs building systems they barely understand, complexity leaking all over the place, and someone inheriting that awful job of keeping it running without having made any bad decisions themselves.

Software is full of these systems.

The more I learn about computers/software, the more I think it is a miracle that anything works at all.
There's a large financial incentive to making it work.

There's an infinitely smaller financial incentive to making sure it works well and functions securely. Thus it rarely happens...

This is also very similar to N-version programming :)
Good comment, thanks for sharing
No tests either? If you lose track of enough money on every transaction that you can give an example like 'Every $5 purchase resulted in $4.98 in the transaction log', I think your problem is far, far bigger than not having double entry bookkeeping.

Who builds a financial system like that and considers it normal? The compensation is one thing, but you'd flee a service like that with all possible haste.

These guys. They said it themselves. “We could have built it right, but we didn’t.” They chose not to. It was not an accident. They made jokes about “dancing cents”. They did these things because there would never be meaningful consequences for doing them, and they knew it. They move fast, they broke things - money things - and they laughed about it. And now they’re lecturing people as if having willfully made these decisions gives them both moral and technical authority. This is magnificently pompous, startup VC culture nonsense.
Sounds like they refunded anything that went wrong so it's not really as bad as you make it sound.
critically, only when customers reached out. which means tons of people that weren't eagle eyed got defrauded.
No, they refunded when a customer complained. Any customer that didn't notice or didn't complain didn't get what was rightfully theirs. That should be criminal, if it isn't.
Refunds limit damages in a lawsuit, but don’t prevent legal issues.

Especially important when explicitly saying you’ve done these things.

I still didn't understand how they lost the money. Yes, double entry bookkeeping might have helped diagnose it, but how were they losing it?
  • tyre
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
Yeah, that was my thought as well. Ledgers have tons of benefits but they're not going to fix your dancing-cents problem. You're going to have bad numbers on the ledger.

Sure, maybe that points you to the bugs, but so would writing basic tests.

A ledger where you insist that every entry touches exactly two accounts, in a business where transactions regularly involve various types of fees, could easily misplace or misattribute a few cents here and there.

This type of business can also have fun dealing with accruals. One can easily do a bunch of transactions, have them settle, and then get an invoice for the associated fees at variable time in the future.

> where you insist that every entry touches exactly two accounts

A ledger is where every transaction balances to 0. It can involve multiple accounts, but the sum of all transfers between all accounts in a single transaction must sum to 0. This is the property of double entry that actually matters.

Maybe I'm being naive, but this doesn't seem too difficult... You have a general ledger in which the invariant that it always balances is properly enforced. The hard bit is scaling and performance. There are policies around fractional transactions, but you should never get mismatched entries.
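To make that invariant concrete, here's a minimal sketch in Python (names and shapes are my own invention, amounts in integer cents to avoid float drift):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    account: str
    amount_cents: int  # positive = debit, negative = credit

class UnbalancedTransaction(Exception):
    pass

def post(journal, entries):
    """Append a transaction, enforcing the invariant that its legs sum to zero."""
    if sum(e.amount_cents for e in entries) != 0:
        raise UnbalancedTransaction(entries)
    journal.append(entries)

journal = []
# A $5.00 purchase where the processor keeps a $0.02 fee: three legs, still nets to zero.
post(journal, [Entry("customer", -500), Entry("merchant", 498), Entry("fees", 2)])
```

Note it never insists on exactly two accounts per transaction, only that the legs net to zero, which is the property that actually matters.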
I think the trick is to store the movements, not the balances. You may cache a balance, but what matters is the journal of money moving from one account to another.
Exactly. Any proper ledger is a transaction log. Balances and turnovers are calculated, not stored. I've seen a lot of attempts to 'cache balances', and this was always the point where anything could go wrong.
A good design principle is worth a 1000 tests.
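A rough sketch of what "calculated, not stored" means in practice (illustrative Python, hypothetical shapes):

```python
from collections import defaultdict

# Journal of movements: (from_account, to_account, amount_cents).
# This log is the source of truth; no balance column exists anywhere.
journal = [
    ("alice", "bob", 1000),
    ("bob", "carol", 250),
]

def balances(journal):
    """Derive every balance by folding over the movement log."""
    totals = defaultdict(int)
    for src, dst, amount in journal:
        totals[src] -= amount
        totals[dst] += amount
    return dict(totals)

print(balances(journal))  # {'alice': -1000, 'bob': 750, 'carol': 250}
```

If you do cache a balance for performance, this projection is what you reconcile the cache against.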
How do you figure the story is true?
I don't understand why the author chooses to bring up the mantra "make it work, make it right, make it fast" in a negative light; perhaps he misunderstands where the "make it fast" comes in?

to clarify: "make it right" is the second step, and until you make things work correctly ("right"), the work stops there - you have to make the system sound. the "make it fast", as in optimize, comes in only after you have got it right and all correctness and soundness issues are resolved. then you can start optimizing it (making it run fast).

it has nothing to do with delivery speed. it has nothing to do with working quickly. it's about optimizing only as a last step.

perhaps the author is lamenting the fact that it is possible for something to sort of "work", but to be so far from being "right" that you can't go back and make it right retroactively, that it has to be "right" from the inception, even before it starts barely working?

I think your last sentence is correct. I think author is saying that you can't make it work and THEN make it right with payment systems.

This is my opinion as well, and I've been involved in the audit of a fintech system where auditors had to download EVERYTHING into Excel spreadsheets and make the numbers balance before they would sign off on the books. That took a lot of time and money; I'm guessing it made a difference of at least 0.1 unicorns in the liquidity event that took place 3 years later.

I think author chose the wrong mantra too!

What happens in fast paced startups is that you ship what essentially is a MVP as soon as possible (i.e. you stop at the "make it work" step) because you need to build your customer base, finances, etc.

A better mantra would've been Facebook's "move fast and break things". But, that only works if you can fix the things later. You wouldn't do it if you're building an aircraft for example.

They said the engineering team followed this mantra, which includes the phrase "make it right". But they didn't. They didn't even bother to find out what they didn't know about fintech before they started building a product.

Given the context in which he used it, I think the misunderstanding you suggest in the first sentence is most likely. Immediately afterward he talks about the time pressure startups face.

Most of the comments here echo what the article is criticising. I can see countless threads of back-and-forth defending single-entry bookkeeping.

Sure, single-entry bookkeeping might be easier and more normalized, but sometimes it is a good idea to just stick with the systems and abstractions that have been developed over centuries.

Just use double-entry bookkeeping unless you definitely need something else. Sure, it might be icky for the programmer inside you, but I think you'll be thankful if you ever need to get actual accountants involved to sort out a mismatch.

On a related note: does anybody know of any good resources for programmers in payments and adjacent fields? Something like an "Accounting for Programmers"?

I read an article from Modern Treasury that advocated for mutable pending transactions, varying entry amounts by replacing entries [1], which is just about the worst idea I've ever heard in the design of a DE system. Their reasoning boiled down to: if you're running a settlement system but are too lazy to implement a clearinghouse layer separately, no worries, just violate the entire DE covenant instead. So I'd take anything they write with a pinch of salt.

[1] https://www.moderntreasury.com/journal/how-to-scale-a-ledger...

Out of curiosity, how would a clearinghouse layer plug into this in practice? Thinking aloud, would you have an event stream of, say, EntryCreatedEvent, and the clearinghouse would provide streams of EntryClearedEvent and EntryRejectedEvent - would you join those streams together to derive EffectiveEntry, EffectiveTransaction, PendingEntry, PendingTransaction based on whether all clearing is done on both sides?
I would strongly advise against additional event types because the double-entry model is already an append-only journal. At most, I’d encapsulate the creation of a transaction as a structural formality for whatever event stream/bus is in use. The clearance events produced by the clearinghouse need to be more purposeful and work at a domain level for the logic to have any chance of coherent implementation. So it’s a PaymentsCleared event, note the plural because in many systems this is a batch. This is probably followed by events for the creation of records for the aggregate settlement transfers and their subsequent approvals/lodgement/lifecycle with financial institutions/treasury systems.

The most interesting projections from such an event stream are usually just Balance and PendingBalance. I wouldn’t type entries based on status, it’s just a flag (or more likely a timestamp and reason code), and transaction is not distinguished at all, its status is nominally cleared simply when all the linked entries are cleared.
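To sketch those two projections (hypothetical shapes of my own; `cleared_at` is the nullable timestamp mentioned above):

```python
from typing import Optional, NamedTuple

class Entry(NamedTuple):
    account: str
    amount_cents: int
    cleared_at: Optional[str]  # None until the clearinghouse confirms

def balance(entries, account):
    """Settled funds only: entries the clearinghouse has confirmed."""
    return sum(e.amount_cents for e in entries
               if e.account == account and e.cleared_at is not None)

def pending_balance(entries, account):
    """Everything, cleared or not: what the balance becomes if all clears."""
    return sum(e.amount_cents for e in entries if e.account == account)

entries = [
    Entry("alice", 1000, "2024-01-02T00:00:00Z"),
    Entry("alice", -300, None),  # still awaiting clearance
]
```

No new entry types, no status enum: both views are just folds over the same append-only journal.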

I consider that first link, Accounting for Computer Scientists, to be the canonical guide for computer scientists on wtf double-entry accounting is and why it's the right way to do it.
Kleppmann's Designing Data Intensive Applications is also pure gold.

Worth reading once a year imo

A Basic Introduction to Accounting[0].

[0]: https://www.winstoncooke.com/blog/a-basic-introduction-to-ac...

> On a related note: does anybody know of any good resources for programmers in payments and adjacent fields? Something like an "Accounting for Programmers"?

Get a grip on the accounting basics first. I built my own bookkeeping system with double entries and realized the design and the programming were the easy part.

If told people that I was using a database system where one and 100th of the data was missing after every 10 transactions would you seriously take my advice as an engineer blog post like these that seem to be calmly focused toward an advertisement of a person’s or a group’s promotion and that it’s just introducing concepts.
Wat
The user name "zitterbewegung" sounds like a German. They build sentences like that.

You have to think about a sentence like a stack and read it slowly and carefully and take note of the push and pop operations.

Germans are like RPN where everyone else is a regular calculator.

I'm afraid it's not just sentence structure that is the problem here:

> one and 100th of the data was missing

No idea what that means

> an advertisement of a person’s or a group’s promotion

What in god's name is an "advertisement of a group's promotion"?

I was illustrating the concept that if you had a database solution that loses data randomly, it would be seen as a joke, compared to the double-entry ledger seen here. It feels like they were promoting themselves instead of addressing a real problem, since double-entry ledgers were used in finance before computers existed.
Did you mean just "one hundredth?" (i.e., 1%)? "One and one hundredth" suggests 101% to a native English speaker.

Edit: ah, you probably spoke "one one hundredth" and got a transcription error.

Transcription error; I meant to say that 1% of all updates to a database (such as an insert) failed.
I read that as "(an advertisement of a person) or (a group's promotion)".
>> one and 100th of the data was missing

>No idea what that means

1/100th of a dollar is a cent - goes towards the "missing cents" glossed over by calling that "dancing cents" in the blog post.

Meant to say 1% or 1/100 of inserts failed.
Yes, but why one and 1/100th?
Turns out people have different ways of saying the same concept because language is ultimately human and not a mathematical proof.
Yes, I was never confused about that. It's just that I've never heard this particular phrase, or anything like it. In retrospect, it appears to be a transcription error.
I don't know German; I was using the voice recognition on an iPad.
For a glimpse at the _real_, human consequences of this kind of slipshod mentality toward money shuffling software see the Post Office scandal[1].

https://en.wikipedia.org/wiki/British_Post_Office_scandal

Anything that moves money should be treated with the utmost seriousness and built with awareness of as many historical mistakes as possible.

> https://en.wikipedia.org/wiki/British_Post_Office_scandal

The four-part miniseries with Toby Jones (mentioned above in §Media:Dramatisation) was really good and goes over things pretty well:

* https://en.wikipedia.org/wiki/Mr_Bates_vs_The_Post_Office

Does anyone have a good explanation for why a ledger database should care about credits and debits and normal balances? This has always seemed more natural to me as a front-end / presentation layer concept. Your ledger entries always sum to zero for each transaction, your income account has a negative balance, and you display it in a sensible manner.

I’m also surprised that this whole article starts by discussing stock trading but has no mention of how to represent stock trades. I assume they are “Sagas” consisting of money moving from the customer to the clearinghouse (or prime broker or PFOF provider or whatever) and shares moving from that provider to the account at which the shares are held. And maybe other associated entries representing fees? It seems to me that this is multi-entry accounting, which is quite common, and that entries don’t actually come in pairs as the article would like us to think.

That part of the article felt quite wrong to me as well. I've built accounting systems that worked well for a decade, where internally the values were a single amount column in the journal table. If it was a debit, it'd be positive, if a credit, it'd be negative.

In fact, we could call these values yin and yang, for all it mattered.

Also, I'm not able to really follow what he means by "money = assets in the future".

Money is money, but if you wanted to track the intermediate state until the customer gets receipt, you would use an In Transit account (Good In Transit / Service In Transit etc.)

Yet, it doesn't change the fundamental definition of the value in the accounting system. I think the author confuses an engineering concept (sagas, or thunks, or delayed but introspectable/cancellable actions in general) with accounting.

> Also, I'm not able to really follow what he means by "money = assets in the future".

I’m guessing it’s one of two things:

1. A transaction might fail. If you enter a transaction into your bank’s website or your credit card company’s website, you should probably record it in your ledger right away. But the transaction might get canceled for any number of reasons. And the money will not actually move instantly, at least in the US with some of the slower money moving mechanisms.

2. In stocks and other markets, settlement is not immediate. A trade is actually a promise by the parties to deliver the assets being traded at a specific time or range of times in the future. One probably could model this with “in transit” accounts, but that sounds quite unpleasant.

FWIW, I’ve never really been happy with any way that I’ve seen accounting systems model accruals and things in transit. I’ve seen actual professional accountants thoroughly lose track of balance sheet assets that are worth an exactly known amount of cash but are a little bit intangible in the sense that they’re not in a bank account with a nice monthly statement.

Money never moves instantly: light speed is a limit (and something can always happen to the message(s)).
This isn't really the issue, I think; the question is whether money always moves fast enough that you can model it as an atomic object that either exists and is complete or doesn't exist at all. Can you just have the caller wait some milliseconds and either get an error or a success meaning it's done? The answer is of course no, but there are plenty of things that can be modeled this way.
IMO it's entirely wrong, and it also makes it a lot more difficult to programmatically create transactions with 3+ legs (For example: A payment with a line item + sales tax).

I think the author is just wrong on that point, but the rest is sound. (Source: I've built bookkeeping software)

There are no 3-leg transactions in a ledger. You described an order. Ledger transactions are one layer deeper. The order creates 2 different transactions: the payment corresponds to the payment accounts, the taxes to Taxes Payable. That is how classic bookkeeping works.
Sorry what? No, I'm describing a transaction. There is nothing preventing n-leg transactions in a ledger. Neither a physical one, nor a digital one.

They're complicated to balance, so it's not commonly done in physical ledgers for sure, but in digital ledgers it's completely fine. You just have to make sure they do balance.

Orders are not relevant for ledgers. The system I describe is relevant for individual transactions -- for example, a single bank payment that pays for two outstanding invoices at once absolutely SHOULD create a single transaction with three legs: One out of the bank account, two to payables.

A single bank payment for two outstanding invoices to a single entity, or to two different entities? In the latter case there should be different accounts credited. Technically it is possible to account for it in one transaction, but a bank should obey accounting practices and generate multiple transactions on different accounts. The higher-level document could indeed be atomic, so it either registers as a batch or does not register at all.
In the former case, yes, not the latter. Real-life scenario: You have a supplier doing both staff outsourcing and material supplies, you have two outstanding invoices in each of those categories, your pending payments in those are tracked separately (for whatever reason), and you do a single bank payment for both.

Anyway, this is just a simple example, but an invoice with VAT on it is IMO the most common one. Or, another one my software supports: a bank transaction with embedded banking fees. Some banks do separate the fees, not all. Currency-conversion fees are another example.

It is best explained by common scenarios an Italian merchant in the Middle Ages experienced. The basic concept is Assets==Liability (plus Equity). Where positive Assets are entered on the left hand side (debit). And positive Liabilities are entered on the right hand side (credit). In accounting, debit and credit just means left and right.

1. Merchant takes out a loan for $5,000 and receives $5,000 in cash. • Assets (Cash) increase by $5,000 (Debit). • Liabilities (Loan Payable) increase by $5,000 (Credit). • Equity remains unchanged.

2. Merchant buys inventory for $1,000 cash. • Assets (Cash) decrease by $1,000 (Credit). • Assets (Inventory) increase by $1,000 (Debit). • Total assets remain unchanged, and liabilities and equity are unaffected.

3. Merchant sells all inventory for $1,500 cash. • Assets (Cash) increase by $1,500 (Debit). • Assets (Inventory) decrease by $1,000 (Credit) (recording cost of goods sold). • Equity (Retained Earnings) increases by $500 (Credit), representing the profit ($1,500 sales - $1,000 cost).

4. Customer1 deposits $500 in cash for future delivery of goods. • Assets (Cash) increase by $500 (Debit). • Liabilities (Unearned Revenue) increase by $500 (Credit). • Equity remains unchanged.

5. Customer1 transfers half of the future delivery of goods to Customer2. • No changes to assets, liabilities, or equity occur at this point. The merchant’s obligation to deliver goods (reflected as Unearned Revenue) is still $500 but now split between two customers (Customer1 and Customer2). Internal tracking of this obligation may be updated, but the total financial liability remains the same.
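The five steps above can be checked mechanically. Here is a sketch using signed amounts (my own convention: increases are positive on each category, and the accounting equation A = L + E is asserted after every step):

```python
# Category balances in dollars; every journal entry must keep A = L + E true.
assets = {"cash": 0, "inventory": 0}
liabilities = {"loan_payable": 0, "unearned_revenue": 0}
equity = {"retained_earnings": 0}

def check():
    assert sum(assets.values()) == sum(liabilities.values()) + sum(equity.values())

# 1. Loan of $5,000 received in cash.
assets["cash"] += 5000; liabilities["loan_payable"] += 5000; check()
# 2. Buy inventory for $1,000 cash.
assets["cash"] -= 1000; assets["inventory"] += 1000; check()
# 3. Sell all inventory for $1,500 cash; $500 profit to retained earnings.
assets["cash"] += 1500; assets["inventory"] -= 1000
equity["retained_earnings"] += 500; check()
# 4. Customer1 deposits $500 for future delivery of goods.
assets["cash"] += 500; liabilities["unearned_revenue"] += 500; check()
# 5. Customer1 transfers half the claim to Customer2: no balance change.
check()
```

Step 5 is the interesting one: the internal tracking changes, but the equation never moves.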

Actually it is clear as long as you remember that main point you made: debit and credit just means left and right.

We are all spoiled by thinking of debit/credit as equal to decrease/increase respectively, because that's how we interpret our bank accounts. That understanding totally collides with formal accounting, where debit/credit DON'T mean decrease/increase. I think this is the root cause of all the confusion about double-entry accounting. I may be wrong about this, happy to be corrected, but that is the bit my brain grinds against when trying to make sense of things.

E.g. I replaced all instances of debit with "Left" and credit with "Right" in your example:

    1. Merchant takes out a loan for $5,000 and receives $5,000 in cash. • Assets (Cash) increase by $5,000 (Left). • Liabilities (Loan Payable) increase by $5,000 (Right). • Equity remains unchanged.

    2. Merchant buys inventory for $1,000 cash. • Assets (Cash) decrease by $1,000 (Right). • Assets (Inventory) increase by $1,000 (Left). • Total assets remain unchanged, and liabilities and equity are unaffected.

    3. Merchant sells all inventory for $1,500 cash. • Assets (Cash) increase by $1,500 (Left). • Assets (Inventory) decrease by $1,000 (Right) (recording cost of goods sold). • Equity (Retained Earnings) increases by $500 (Right), representing the profit ($1,500 sales - $1,000 cost).

    4. Customer1 deposits $500 in cash for future delivery of goods. • Assets (Cash) increase by $500 (Left). • Liabilities (Unearned Revenue) increase by $500 (Right). • Equity remains unchanged.

    5. Customer1 transfers half of the future delivery of goods to Customer2. • No changes to assets, liabilities, or equity occur at this point. The merchant’s obligation to deliver goods (reflected as Unearned Revenue) is still $500 but now split between two customers (Customer1 and Customer2). Internal tracking of this obligation may be updated, but the total financial liability remains the same.

I find this much easier to reason with.
Yes, exactly. With assets, liabilities, and equity each having a left and a right entry, they were following the convention that, when posting a journal entry to the ledger, left entries must equal right entries (debits must equal credits). Because A=L+E, we get assets to the left and liabilities to the right.
Appreciate your confirmation. If you have any blog about double entry accounting (or accounting in general), I'll be interested to read it. I have nothing to do with accounting in my professional life but I've always been curious about it.
I like this free site written by a CPA. https://www.accountingverse.com/
I understand this. But we’re talking about computers, not Italian merchants. Italian merchants had actual pieces of paper. Computers have tables and views and frontends that are separate from the tables.

Any self-respecting accounting system should be able to produce a balance sheet that matches the conventions you’re describing. I don’t think it follows that the actual numbers in the database that get summed to produce the total liabilities should be positive.

I've often wondered about this in the shower. Why debits and credits, when we can just make income negative and let everything sum to 0? Then you can track the balance for each account in a single field in the database.

And the answer is that "0" first entered Europe around the time they invented double-entry bookkeeping there. Negative numbers reached Europe centuries after that.

I showed the internals of a number-line-based accounting system to an accountant once, and he was so confused by the negative incomes.

https://en.wikipedia.org/wiki/Negative_number#History

https://en.wikipedia.org/wiki/Double-entry_bookkeeping#Histo...

I think we are talking about two different things. Yes, of course you can build an accounting system using whatever database algorithm and programming framework you like. But your users expect debits and credits and A=L+E or A-L=E, because that's what their auditors expect.

In the scenario four I presented earlier, I believe it is intuitive to think of unearned revenue (liability) as a positive number. When the customer picks up the order, the unearned revenue will be transferred to equity.

Thank you for the example. But I don't see how it explains why debit/credit should be used instead of a simple signed amount. How does Transaction(from, to), where `from` and `to` are Entry(account, credit|debit, unsigned amount), make things easier than Entry(account, signed amount)?

You basically used different labels for positive or negative amount in the example.

The story I was told, and what I believe, is that the journal entry is and always was the source of truth. A merchant might have several journals, a separate one for each line of business, maintained by separate clerks. The different journals would then be consolidated into a single ledger, so he could tell what his equity was.

When transferring journal entries into A=L+E, those early accountants used their version of Excel. For Assets, they took a page and drew a vertical line. For Liabilities, they also drew a vertical line. Same for Equity. They called the left side debit and the right side credit. We don't know why the Italians named it this way; we can only assume the first ledgers dealt with paying down amounts of credit they owed others. Anyway, this early "Excel" allowed simple ledgers to have two columns. Positive asset changes go to the left and negative to the right. Positive liability changes go to the right and negative changes to the left. Same thing for equity.

I assume this was a mantra they told themselves to ensure correctness in reconciliation: when transferring a journal entry to the ledger, there must be a debit and a credit, or there is fraud. For example, an unscrupulous clerk may have taken out a loan, and the journal entry may not tell where that money went. When transferring to the ledger, the loan would be entered as a credit. Because there was no corresponding debit, either an increase in cash assets or a decrease in equity, the balance would have been off and would have told the merchant something was wrong.
Money can't appear and disappear; there should always be a destination. It helps accounting a lot. If someone owes anyone, liabilities increase (active/passive accounts), but this liability is spent somehow and could become an asset (Goods Purchased using a Loan), so you can track the whole chain through account statements.

Destination is the key. You can't just arbitrarily change an account balance using a transaction in a ledger. There must be a destination, and this is the second record.

This is what GAAP, IFRS, and even all the Basels for banks describe in strict detail. Every accounting system and practice is based on double entry and not only keeps the balance sheet consistent but adds a meaning to every transaction using predefined types of accounts.

I think you missed the fact that my model is equivalent to double entry as it is understood by financial organisations. The only change is that the direction (debit or credit) bit is replaced with a sign bit. All other info, including the accounts needed to correctly track money flow, is still there.
I think that's because double entry bookkeeping precedes the concept of negative numbers. To be more precise, double entry bookkeeping was invented by people who had not yet been introduced to negative numbers.

At least that's how it's been explained to me.

As someone pointed out just below, you do not need two rows. You can have one row: (amount, credit_account, debit_account).
The article does talk a bit about why using negative amounts for this is a bad idea.
yes, agree, I think a 'source'/'destination' model is significantly more straightforward. Just record the source account and the destination account and you essentially end up with a ledger as a directed graph (Martin Kleppmann wrote a great post on it)

I also wrote a super short post on how to model such a system on postgres https://blog.nxos.io/A-simple-double-entry-ledger-with-sourc...

Blockchains actually kinda nail it: a blockchain is in essence a source/destination ledger, no 'postings' or similar needed, and from a balance-calculation POV it has been working pretty well

One reason this model isn't applied in accounting, in my personal view :), is simply historical and the fact that the number 0 didn't exist when accounting principles were created.

Wrote another post on how to model debit/credits on a source/destination ledger here: https://blog.nxos.io/Debit-and-Credits-on-a-Source-Destinati...

It's very straightforward; you just have to accept that asset accounts have negative balances, and present the absolute amount instead of a negative amount in a view.
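A sketch of that sign convention (hypothetical account names of my own): the ledger stores only (source, destination, amount) rows, and the view flips the sign for debit-normal accounts:

```python
# Directed-graph ledger: each row moves `amount` (cents) from source to destination.
edges = [
    ("equity:owner", "asset:bank", 10_000),
    ("asset:bank", "expense:rent", 2_000),
]

def balance(account):
    """Credit-minus-debit convention: credit-normal accounts (equity, liabilities)
    come out positive, debit-normal ones (assets, expenses) negative."""
    return (sum(a for src, _, a in edges if src == account)
            - sum(a for _, dst, a in edges if dst == account))

def display_balance(account):
    """The presentation layer just flips the sign for debit-normal accounts."""
    b = balance(account)
    return -b if account.startswith(("asset:", "expense:")) else b
```

Every account's raw balance sums to zero across the whole ledger, while the displayed figures look like what an accountant expects.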

It isn't always clear which is "source" and which is "destination" and now you need a bunch of new conventions about these things. Accounting already has these (admittedly arbitrary) conventions so we might as well use those.
> you essentially end up with a ledger as a directed graph

The page contains a comment from Matheus Portela who pointed to a blogpost of his about "Double-Entry Bookkeeping as a Directed Graph" [0].

"I've also had the same problems you described here and double-entry bookkeeping is the way to go for financial accuracy. As a programmer, things clicked when I realized this system is an extended directed graph.". It turned out that: "Hi Matheus! Would you believe me if I told you that I read your post in preparation for this article?"

[0] https://matheusportela.com/double-entry-bookkeeping-as-a-dir...

Source/destination seems to fail in an actual DB implementation. How do you sum all entries for a particular account? It could be on either side. That complicates queries and could trigger suboptimal query plans.
An account can only be on one side; the side and chapter give the account its meaning. An account on the left just negates the increase/decrease direction.

In a bank ledger when a loan appears on a checking account, both increased. Loan on the left, checking on the right. DT Loan → CT Checking. Loan is an asset for a bank, Clients money is a liability.

On a client's balance sheet everything is mirrored. Checking is an asset, Loan is a liability.

Queries are quite simple. Volumes are problematic. In a big institution you would find several ledgers included in the general ledger by totals. You just don't need all the details in one place.

As I understand the parent comment, it assumes transaction records of the form (source_account, dest_account, amount). I argue that this complicates things.

You talk more about how to make the DB data simultaneously a representation of the final reports. I believe that's not related to this thread.

You’re right. Missed it while scrolling. Regarding the parent discussion, a classic statement is also two-sided. So, you need to collect sources (debit) and destinations (credit).

It is definitely possible to make each side of the transaction a different record, but you have to link them, so there would be another ID to group the transaction later. That ID is always there in any case, but you are either joining while getting a balance or grouping while reconstructing transactions for reports. So it depends on the primary load: lots of transactions to register, or lots of reports to provide. Both are used.

okay but how do you model a three-party transaction? say you want to collect fees or taxes
you simply create two records linked by one 'transaction'; the source in both cases is the same account, while the destination for one of those postings is the fee account and the other destination is a merchant or similar account. And you can link as many of those postings as you like under a single transaction
okay, so now you've got double-entry bookkeeping except all of your credits/debits have two account references instead of one. let's call it "quadruple-entry"
yes, it's a form of double-entry bookkeeping - that's the base. If it weren't double-entry then indeed it would be a pretty poor choice. This design enforces double-entry at a fundamental level, it's never possible to create a record that doesn't affect 2 accounts - and that's actually the whole point of it :)

You have a single USD (or other) value - so in the simplest form it just looks like this:

From: Alice To: Bob Amount: 10 Currency: USD

And the balances are simply sum(transactions where account is receiver) - sum(transactions where account is sender)

You never have 3-party transactions - if you did, you would not be able to track money flow.

You can have multiple transactions: one to pay tax, one to pay fees, and one to pay for the actual thing.

You bundle these things in another abstraction, e.g. an invoice.

In double-entry, a transaction is the tuple (amount, credit_account, debit_account).

In singly-entry, it is the tuple (amount, account).

> In double-entry, a transaction is the tuple (amount, credit_account, debit_account).

Every “double entry” accounting package I’ve ever used can easily handle transactions that are awkward in this schema and transactions that don’t fit at all.

Moving $1 from one current account to another? I guess you declare that the $1 needs to be a positive amount, but your two accounts have the same normal balance, and calling one a “debit account” is a bit awkward.

Adding an accounts payable entry that is split between two expense accounts? Not so easy.

"Debit" is a verb here.
it's not "debit account" but "debit an account"
I've never understood that either. As a computer guy it always struck me as a redundancy -- the kind of redundancy where one thing will always eventually be wrong.

I assumed it has to do with the fact that it was invented for bookkeeping by hand. Accountants assure me that it's even more important with computers, but I've never managed to figure out how that works

Double entry accounting is still error prone, but single entry accounting is fraud prone.
A single debit can result in many credits.

A single record can (and will) be lost. Network issue, db issue, etc., some transactions will not make it to the db.

With double entry you at least have a chance of reconciling the lost transaction.

> As a computer guy it always struck me as a redundancy -- the kind of redundancy where one thing will always eventually be wrong.

That's the purpose. If you have a system with no redundancy, it's equally true that something will always eventually be wrong. But in that case, you'll have no way of knowing what's wrong.

With the redundancy, you can detect problems and often determine what happened.
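
As a sketch of how that detection works in practice (hypothetical row format): if each ledger row carries a transaction ID, the redundant halves must cancel per transaction, and a nonzero sum points straight at the lost entry.

```python
# Each row is (txn_id, account, delta). In a double-entry system the deltas
# for every txn_id must sum to zero, so a nonzero total pinpoints the lost
# or corrupted half. (Illustrative schema, not from the thread.)
rows = [
    ("t1", "cash", -100), ("t1", "expenses", 100),
    ("t2", "cash", -50),  # the matching +50 entry was lost
]

def unbalanced(rows):
    totals = {}
    for txn_id, _account, delta in rows:
        totals[txn_id] = totals.get(txn_id, 0) + delta
    return {t: s for t, s in totals.items() if s != 0}

print(unbalanced(rows))  # {'t2': -50}
```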

It is not a redundancy. The meaning of a transaction is actually encoded in the pair of accounts. You will need to store that somewhere anyway in any real-world scenario, and predefined account pairs are the universal language: the accounting system.

I've been doing this so long that I've finally realized that if someone can't explain why something is the way it is, that means it's wrong, or at least an arbitrary choice among several equally good alternatives.
  • ffsm8 · 3 weeks ago
Being able to explain it is something different from you being able to understand it.

And double-entry bookkeeping should be both easy to explain (there are countless articles on it, precisely because it is a pretty easy concept) and easy to understand if you have ever tried to keep a ledger of transactions around and wanted to audit it for errors.

I always get hung up on the different kinds of accounts and their respective definitions of "credit" and "debit". It isn't that much to memorize, but it's very counter to the way I understood those terms and it keeps throwing me off.

The simplest way to memorize it is to remember the accounting formula and one simple rule.

- Assets minus Liabilities = Equity (net worth)

- Your bank account or cash balance increases on the debit side

From this you can figure out that if you borrowed money, the debt increases on the credit side and the cash influx debits your bank account. The same goes for income.

  • bvrmn · 3 weeks ago
But the signed-amounts formula (instead of debit/credit) is way easier.

Sum of entries across asset/liability accounts = Equity. Moreover, assets and liabilities become one type.

And single entry bookkeeping is even "easier". Doesn't mean it's a good idea.
  • bvrmn · 3 weeks ago
Single entry is not easier. Every time, it makes things horribly complex really fast. I don't understand your point.

That is why I sincerely hope this approach takes over from the debit/credit one.

Surely it credits my bank account. Credits make things go up, no?

Clearly not. But this is why I let an accountant do it.

The bank will tell you there's a credit because to them, it's a credit. Your bank account is a loan from you to them - they owe you that money. When your account goes up, their debt to you goes up... thus it's a credit to them, and a debit to you.

Thanks. That's actually really helpful. Of course every transaction is a credit or debit depending on your point of view.

That's probably not the way I would have designed it. I'd probably have designed it from the point of view of the account, so that we'd all agree on what addition and subtraction mean. But that's my programmery point of view. I imagine that they're more concerned with the flows -- not just the numbers, but especially the actual materials being bought and sold.

It kind of works that way, it’s just confusing that the bank tells you their point of view.

Your bank account is really two accounts: an asset on your books, and a liability on the bank’s books.

When you talk about accounting for physical inventory, that’s a whole new can of worms.

The most popular way I see is this:

- you keep track of goods by their cost to you (not their value once sold)

- every time you buy item xyz, you increase an asset account (perhaps called “stock” and the transaction labeled “xyz”). You also keep track of the number of xyz. Say you buy 10 for $10 each, then another 10 for $20 each. Now you have 20 and your xyz is valued at $300. Average cost: $15

- every time you sell or lose some xyz, you adjust the number of xyz, and reduce the asset account by the average value of those items at the time of the transaction, or $15 in this example. The other account would be cost_of_goods_sold or stock_shrinkage.

Many other approaches also work.
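
The running average-cost bookkeeping above could be sketched like this (illustrative code, amounts in integer cents):

```python
class StockAccount:
    """Tracks quantity and total cost (in cents) of one SKU at average cost."""
    def __init__(self):
        self.qty = 0
        self.cost_cents = 0

    def buy(self, qty, unit_cost_cents):
        self.qty += qty
        self.cost_cents += qty * unit_cost_cents

    def average_cost_cents(self):
        return self.cost_cents // self.qty  # assumes it divides evenly here

    def remove(self, qty):
        """Sell or write off; returns the cost to move to COGS/shrinkage."""
        cost = qty * self.average_cost_cents()
        self.qty -= qty
        self.cost_cents -= cost
        return cost

xyz = StockAccount()
xyz.buy(10, 1000)   # 10 at $10
xyz.buy(10, 2000)   # 10 at $20 -> 20 units valued at $300, average $15
assert xyz.average_cost_cents() == 1500
cogs = xyz.remove(5)  # selling 5 moves 5 * $15 = $75 to cost_of_goods_sold
```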

  • ffsm8 · 3 weeks ago
The goal of double-entry accounting is to enable an audit of the ledger and to catch errors by the bookkeeper.

Think about how you're going to do that with your concept. You will likely end up with something extremely close to double-entry accounting after a few iterations.

  • dkbrk · 3 weeks ago
This is a bit late, but I don't see any other answers that provide what I think is the key insight.

The accounting equation is: Assets = Equity + Liabilities.

For a transaction to be valid it needs to keep that equation in balance. Let's say we have two asset accounts A1, A2 and two Liability accounts L1, L2.

A1 + A2 = Equity + L1 + L2

And any of these sorts of transactions would keep it balanced:

(A1 + X) + (A2 - X) = Equity + L1 + L2 [0]

(A1 + X) + A2 = Equity + (L1 + X) + L2 [1]

(A1 - X) + A2 = Equity + (L1 - X) + L2 [2]

A1 + A2 = Equity + (L1 + X) + (L2 - X) [3]

Now, here is the key insight: "Debit" and "Credit" are defined so that a valid transaction consists of the pairing of a debit and credit regardless of whether the halves of the transaction are on the same side of the equation or not. It does this by having them change sign when moved to the other side.

More concretely, debit is positive for assets, credit is positive for liabilities. And then the four transaction examples above are:

[0]: debit X to A1; credit X to A2

[1]: debit X to A1; credit X to L1

[2]: credit X to A1; debit X to L1

[3]: credit X to L1; debit X to L2

You can debit and credit to any arbitrary accounts, and so long as the convention is followed and debits and credits are equal, the accounting equation will remain balanced.

Another way of looking at this is with parity. A transaction consists of an even parity part "debit" and an odd parity part "credit". Moving to the other side of the equation is an odd parity operation, and so a credit on the RHS has double odd parity, which means it adds to those accounts (and debit, with odd parity, subtracts).
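
A minimal sketch of that sign convention (hypothetical account names and balances): a debit adds to an asset account and subtracts from a liability account, a credit does the opposite, and any debit/credit pair then keeps the accounting equation balanced.

```python
# Sign convention from the comment above: debit is positive for assets,
# credit is positive for liabilities. (Illustrative accounts.)
balances = {"A1": 100, "A2": 50, "L1": 80, "L2": 40}
ASSETS = {"A1", "A2"}

def debit(acct, x):
    balances[acct] += x if acct in ASSETS else -x

def credit(acct, x):
    balances[acct] += -x if acct in ASSETS else x

def equity():
    return balances["A1"] + balances["A2"] - balances["L1"] - balances["L2"]

before = equity()
debit("A1", 25)   # case [1] above: cash comes in...
credit("L1", 25)  # ...and the matching debt is recognized
assert equity() == before  # the equation stays balanced
```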

Most accounting software chooses one convention and sticks with it for all account types, to the chagrin of accountants.
  • bvrmn · 3 weeks ago
The funny thing with the debit/credit wall: only long-dead Italian merchants knew its purpose.

Neither have I. It always seems like massive cargo culting. Human accountants are liable to make very different kinds of mistakes than computers do.
  • neffy · 3 weeks ago
Double-entry bookkeeping implements an error correction and detection algorithm.
  • mamcx · 3 weeks ago
> This has always seemed more natural to me as a front-end / presentation layer concept.

Consistency is a property of the backend; if that is wrong, there is no hope for anything later.

> Your ledger entries always sum to zero for each transaction, your income account has a negative balance, and you display it in a sensible manner.

'Sensible manner' is the problem here. The data and the money will diverge over time, and without proper storage of the data it will be impossible to figure out why.

The problem here is NOT storing 'a transaction'. An RDBMS handles that. It is storing the FLOW of MANY transactions and the divergent ways things work.

Like when your bank tells you that you have $100 and your system says $120. Your system sums correctly, but the bank rules.

Or when you do a return and cents are lost in the interchanges and chargebacks.

Or just wrong data entry, sync, import/export, etc.

---

The way to see this is that `double entry` is a variation of 'immutable data that never mutates and always tracks its own flow', which is golden for business apps.

  • 2mol · 3 weeks ago
Great article. I have an observation on the "engineers should know this and do good engineering" point though: I work for a payments company, and there is a fundamental career problem with becoming the ledger ninja: it's not enough work, and it's eventually done!

I've seen the following major phases of this work: 1) Build the ledger (correctly), and it will work well for a while. 2) Add convenience code for the callers, assist finance in doing reports/journaling, fix some minor bugs, take care of the operational bits (keep the database up). 3) Reach the scaling limits of your initial approach, but there are some obvious (not trivial) things to do: re-implement the transaction creation directly in the database (10x perf gain), maybe sharding, maybe putting old tx into colder storage, etc.

This is spread out over a while, so I haven't seen it be a full-time job, even at real startup-level (+10% MoM) growth. Even if it was, that's one person, not a whole team. I understand engineers that instead are pulled towards projects where they are in higher demand.

In another comment somebody said ledger systems are trivial when done right and super hard when done wrong - so if you did a good job it kinda looks like you just created 3 tables and some code. That seems thankless, and job searching as this type of specialist is harder than just being a generalist.

I suspect this is a "small company" problem, i.e. for companies that don't sell too many different things. A larger enterprise might have a platform connecting businesses with businesses and businesses with customers. They might sell services (subscriptions) but also one-time purchases, which require different forms of revenue recognition. Those might even be split across different revenue streams. You end up building sub-ledgers for each one of them because the ERP can't handle the scale. Oh, and you're a public company, so make sure everything is SOX compliant and easy to audit. Ah, and you operate on a global scale, so do all those things in different currencies for different legal entities.

There's a reason Stripe is as successful as it is. And then there's a world where a company outgrows Stripe.

There are worse career choices ("prompt engineer" LOL) than financial engineering.

Stripe isn't a bookkeeper/accountant. Something like http://Pilot.com is though.

stripe on bookkeeping: https://stripe.com/guides/atlas/bookkeeping-and-accounting

I’ve built my career on cleaning up should-have-used-a-ledger messes. it’s hard but there’s always another company that needs it, and I get bored staying in one place.

recently I discovered that in a medical billing context the system is way, way weirder than I had seen before. I shipped the three tables, but getting everything into it seems like it might be an endless battle

If you are trying to reinvent parts of banking, you could maybe start with what already works as a thought experiment.

Here's an example of a core banking deposit transaction schema that has been extensively battle tested in many small & mid-size US institutions:

https://jackhenry.dev/open-enterprise-api-docs/operational-d...

You may note fields like "Effective Date" & "Affects Balance/Interest", which imply doing this correctly may involve exploring some interesting edge cases. Wouldn't it be cool if you could just cheat and start with an approach that already considers them?

I don't know what the fuck I just read here. I've worked on ledgers for years, in massive codebases with millions of LOC, and I've never seen an error of "a few cents". Feels like I just read an alibi. Don't develop financial software from scratch.

That was my reaction too.

It was like reading an engineering team saying their attempt to design a new lightweight mountain bike failed. It turned out saving weight by omitting brakes wasn't a good idea, and their attempt to fix it by riding beside each bike and manually slowing it down wasn't too popular either. Then they have the hubris to follow that up with, "you can avoid the same mistakes by reading what we have to say on bicycle design".

The lessons they can take away have very little to do with engineering or FinTech. I'd file it under doing a bit of research before committing to any major task. Basic accounting principles are centuries old now. They could have learnt about them by buying a high school textbook, reading it cover to cover, and doing the exercises. It would have taken them less than a week.

Admittedly that only gives you the "how", not the "why". You realise much, much later that the engineering equivalent of double-entry accounting is two systems designed by different teams that constantly compare their outputs. By the time you've figured that out, it's caught so many errors you realise "holy shit, these systems are really easy to screw up, and are under constant attack by malicious actors".

There is a hidden trap at every step: floating point being imprecise, currency rounding not being reversible, tax calculations being one-way, ACID being a hard requirement. I'm betting this mob screwed up tax calculations too. Floating point throwing out 1 in 100 million transactions was one of the joys they never got to experience.
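
The floating-point trap in particular is easy to demonstrate (a generic illustration, not this company's code): binary floats can't represent most decimal fractions exactly, which is why money belongs in integer minor units or a decimal type.

```python
from decimal import Decimal

# 0.1 has no exact binary representation, so sums drift by tiny fractions.
assert 0.1 + 0.2 != 0.3
assert sum([0.1] * 10) != 1.0

# Two standard fixes: integer minor units (cents), or a decimal type.
assert sum([10] * 10) == 100                           # cents
assert sum([Decimal("0.10")] * 10) == Decimal("1.00")  # exact decimal
```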

  • 392 · 3 weeks ago
The lesson here is not to never develop from scratch. In fact, it may be faster and safer to do so than to vet all of the open-source junk that modern ByteByteGo system designers think they need. TigerBeetle and Jepsen-proof engineering are paths to dig into here, along with basic knowledge of how computers store numbers.

The lesson for me here is that there is no substitute for knowing or caring about what you're doing. Dancing cents don't happen when engineering cared about correctness in the first place. I feel like I just read a college freshman explaining how there's no possible way they could have passed their first exam. Better yet, he's here to sell us on his thought leadership without ever explaining why he failed the first exam.

  • ahi · 3 weeks ago
I read the intro and decided I wouldn't learn much from someone willing and eager to admit he eats paint chips.
  • 3 weeks ago
I've seen it, sighed, fixed it, and ensured the necessary accounting adjustments assigned the loss to the business instead of the customer. Seems that last step, which is the most important one, wasn't followed here, which is the real shame.

To be fair, I have seen it before, because of a modification I made to the code base. It was when I first started working with it, so I didn't really understand how to correctly implement extensions. But still, I was able to see the error in the ledger and connect it to my change.

I had the same thought. I mean, were they actually using ordinary floating-point numbers to represent amounts in their ledger? This sets off so many alarm bells.
It's suicide to build a finance system or similar without a double-entry ledger.

Worse, I've worked somewhere that didn't record the vendor's transaction reference ID; it was lost, so past data could not be reconciled!

is there no individual accountability regime in the US?

in the UK, as an engineer, if I'd built this I would expect the regulator to come after me personally for not ensuring the system had adequate controls to protect clients' money/investments

with a potentially unlimited fine + prison time

Has that ever happened? It's incredibly hard to prosecute directors in the UK for obvious malfeasance. I have never heard of a software engineer being sanctioned for crap code.
for engineers[1] it is relatively new, having only been introduced in the last couple of years

[1]: technically those performing a Certified Function

Unless the engineer was provably malicious, wouldn't it be the responsibility of the product owner? Ownership usually entails accountability; they could order proper QA?

It would be nearly impossible to prosecute for just bad code. It would require more than that, and it is limited to a very small scope of people.
>is there no individual accountability regime in the US?

Here in the US, programmers like to call themselves Engineers, forcing everyone else to use the term "Professional Engineer" or "Licensed Engineer" or some other modifier before their title. I hate it, I wish they would stop, but it's not going to happen.

Software here is a wild, wild, West. The motto most live by is "move fast and break things"... even when those things are people's lives.

There are a few more professionals who are called engineers.

Railroad locomotive operators, ship engine operators.

The name precedes the creation of licensed tertiary education level engineers.

A lot of people seem to ignore the fact that licensed professions requiring an accredited diploma from a tertiary-level education program are a relatively recent feature of our societies.

  • woah · 3 weeks ago
The original meaning of engineer was an idle British aristocrat who liked to tinker with things that went whiz and bang in the garden shed. Just a guy who really liked to mess around with engines.
The PE thing is more than 100 years old in the US. By 1947 every state had a PE licensure program. It has nothing to do with programmers.

In fact, it was lobbied to exclude software engineering from its purview in most states.

I leave you with the 2008 financial crisis as exhibit A on exactly how nobody gets in trouble for lack of financial accountability.
Not entirely accurate. Exactly one bank (a small Asian American community bank, the 2651st largest in the U.S.) was prosecuted after the subprime mortgage crisis.

https://en.wikipedia.org/wiki/Abacus:_Small_Enough_to_Jail

The secret is to have everyone in on it. Everyone is guilty but nobody is quite culpable enough to punish.

The low level guys were just doing their jobs, and each individual transaction was mostly legal. A few weren't but it's hard to sort out which ones. Maybe the management should be responsible for ensuring that nobody ever did anything illegal, but they can't really supervise everything all the time, can they?

Poof. Guilt is just a rounding error that all rounds down to zero. The government passes some new regulations to prevent that particular scenario from happening again, and the same people set about finding a new scam.

This still kind of works for things that aren't real, like money or law, because they are societal constructs; but in cases involving the real world, there's no escape from consequences (only dumping them onto someone else to deal with).

It's also the standard Big Tech playbook. They break all of the societal norms to make the world they want, then "work" with regulators to make it okay for them to exist, while making it extremely difficult for anyone else to follow, since newcomers will have to deal with the regulations.

Once you get to the top, the act of pulling up the ladders behind you is "just" self preservation.

they thought about this

there's a separate certification function for managing certification function employees, and they're jointly liable for anything they do

Not for software that I'm aware of.

For certain regulated professions there is: if a building falls down due to a bad design, the professional engineer (PE) who signed and sealed the plans can be held personally liable.

I don't see how a rank-and-file programmer would ever be personally responsible for their code. You can blame management for forcing untested or known-flawed logic, but not some schmoe who pushes an "off by one" bug while working weekends and late nights with no testing and hard deadlines.

Programmers should ensure they understand the requirements of what they are being asked to build, and not just blindly build things. If I were asked to build an accounting system, I would insist on speaking with an accountant to understand the requirements. If I were asked to work on a medical imaging system, I would want to be working with a qualified radiologist and probably a PhD holder in a relevant field too.

Overconfidence can quite literally be fatal.

Pushing a bug, yeah that happens.

Deliberately implementing a financial system that ignores established (and probably legally required) accounting practices? That's kind of like a structural engineer willfully disregarding the building code because that's what management asked for.

Can you please provide a reference to the regulators coming after an engineer in the UK.

Also you say regulations in the UK have been changed recently.

I'm not aware of regulations that apply to software engineers.

Depends if the particular activity is regulated or not.

A stock trading platform, as described in the article?
In North America, “engineer” doesn’t necessarily mean a software engineer with a professional certification. Software developers have taken to calling themselves engineers. Whether engineering professional bodies should start going after people for this or not is a different topic.

But it’s entirely possible for someone who calls themselves an engineer to not actually be a certified engineer. So the activity wouldn’t be regulated because the person isn’t part of a professional body that regulates members.

In that case, lack of competence would be a civil issue unless it resulted in something criminal.

There isn't even a way to get certified as a professional engineer for software in the US.

There was, but no one did it, so they stopped: https://ncees.org/ncees-discontinuing-pe-software-engineerin...

It is still possible in the UK and I assume EU (chartered engineer and the EU-alternative).

So the reason it isn’t a PE-discipline is uptake, not the work itself.

it's what you're doing (your "function") that's regulated

not your job title, or piece of paper that you have that says you're X, Y or Z

"Professional Engineer" is a protected title that requires licensing to be used for a discipline. That licensing process does not exist for software in the US right now.

Could well be the entity actually selling the services.

Are you a professionally qualified engineer?

It really irks me that the author assumes I know how double-entry accounting works and doesn't spend a single sentence on it. I read halfway through the article and couldn't follow it, except that single-entry is bad and double-entry is good.
A double-entry system is one where you can't change the balance of an account, you can only record transfers between accounts. This means it's impossible to forget to update the other side of a transaction, it's a single step. A consequence of that is you can check that the sum of all accounts is always 0.

In practice you have virtual accounts like "cloud expenses" and "customer subscription" that only go up/down over time, to be the counter-party for transactions in/out of your company. So it's not impossible to mess up, but it eliminates a class of mistakes.

Hm, but what's the second entry? Do you not just add a single entry from X to Y?
Entry1: Account A: receives 1000 from Account B

Entry2: Account B: sends 1000 to Account A

And from GP:

> A consequence of that is you can check that the sum of all accounts is always 0.

Entry1 + Entry2 = 1000 + -1000 = 0

Amusingly, when I made my own personal money-tracking program over a decade ago for my bank accounts, this was how I implemented transfers between them, just because it was simpler. Years later when I heard this name, I also had trouble understanding it, because I assumed I had done it the bad way and banks somehow did something more in-depth.

So basically it's not really the transfer that's the base of the system, but the account? I.e. each account gets its own entry, and because there are two parties in each transaction, we naturally get two entries?

There are no "parties" in a DE transaction.

The legal entity or entities involved - if any - would be described in the linked commercial document that explains the transaction, and the chart of accounts that describes the meaning of each account code.

There is no requirement for a transaction to have exactly two entries. The term "double-entry" is slightly misleading; it is only trying to express that both sides of the accounting equation are equal. When recording a sale, for example, it is more likely that three or more entries are journaled, due to the additional coded entries for sales tax, and those produced for separate line items for each SKU, shipping etc.

A better phrase is "two-sided accounting", which is also used, but less commonly than "double-entry".

I see, and would that transaction get four entries (two for the money, two for the tax), or three (two for the money, one for the tax)?

Neither of those cases precisely. The vendor's transaction record for such a sale would include a debit to cash, a credit to revenue, and a credit to sales tax liability.

The total amount of the credits would equal the cash debit.

Our prof always repeated (literally translated to EN):

"For every debit there must be a credit"

or

"Every transaction has two sides"

That’s more than a little reductive but I imagine your professor was hoping that at least the motivating concepts of two-sided accounting would seep in.

I may have the terminology a bit off, but the core table in my implementation represents transactions. The columns are Account, Description, Type, Value, Date. Type is to distinguish transfers between my accounts from actually adding/spending money. SUM(Value) on just the transfers between my accounts adds to 0, just like the example above. SUM(Value) on everything tells me how much money I have across all my accounts.

What you have done is record credit and debit entries in the same column, distinguishing credits and debits by sign. This is a design choice in the data structure for DE systems. It's a tossup whether that's better than either of the alternatives.

In the case of moving money between regular bank accounts in the same institution, you regard that as a movement between two asset accounts, whilst the bank regards that as a movement between two liability accounts.

So their entries would have the same magnitude as yours, but inverted signs.

> It's a tossup whether that's better than either of the alternatives.

Which means you aren't even thinking of the "bad" single-entry version, which is what a lot of people here are stumbling over because apparently it's more natural: a "transfers" table with the columns FromAccount, ToAccount, Amount, where a single row represents both of the "Entry" rows from mine above.

  • 3 weeks ago
The change to X and the change to Y are the two entries.

In practice most systems allow for more than two entries in a single transaction (ex: take from bank account, give to salary, give to taxes) but always at least two, and always adding up to 0.
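
That multi-entry salary transaction could be written down like this (hypothetical amounts; negative means money leaving the account):

```python
# One transaction, three entries, always summing to zero.
salary_txn = [
    ("bank_account", -5000),
    ("salary_expense", 4000),
    ("taxes_payable", 1000),
]
assert sum(delta for _acct, delta in salary_txn) == 0
```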

Could have been clearer, but it is there. Here is the relevant section. In short, single entry is basically the Account view and double entry is the full ledger. (Called "double" because of the hard requirement that all Entries come in pairs.)

> Ledgers are conceptually a data model, represented by three entities: Accounts, Entries and Transactions.

> Most people think of money in terms of what’s theirs, and Accounts are the representation of that point of view. They are the reason why engineers naturally gravitate towards the “simplicity” of single-entry systems. Accounts are both buckets of value, and a particular point of view of how its value changes over time.

> Entries represent the flow of funds between Accounts. Crucially, they are always an exchange of value. Therefore, they always come in pairs: an Entry represents one leg of the exchange.

> The way we ensure that Entries are paired correctly is with Transactions. Ledgers shouldn’t interact with Entries directly, but through the Transaction entity.

That’s more than halfway through the post. The GP’s point stands: it rails for pages about how X is good and Y is bad before defining either.

I've been trying to figure out double-entry accounting for years now and still don't get it. Most explanations are along the lines of "Here is a simple explanation that you are guaranteed to understand: <proceeds to describe the procedure>", which lacks intuition on why this is useful. I suspect I would need to do a gig as an accountant and run into some error conditions that double-entry solves to really grok it.

Edit: no offense but sibling comment is an example :P

Double-entry: each row of your DB stores {debit account, credit account, amount}.

Single-entry: each row of your DB stores {account, delta}.

With double-entry you are guaranteed via your schema that sum(debit delta) = sum(credit delta), that's it. Money is "conserved".

It's easy to denormalize the double-entry into a single-entry view.
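
A sketch of that denormalization (illustrative schema): each double-entry row expands into one positive and one negative single-entry row, and conservation falls out as a zero total.

```python
# {debit, credit, amount} rows expanded into a single-entry {account, delta} view.
double_rows = [
    {"debit": "cash", "credit": "revenue", "amount": 100},
    {"debit": "expenses", "credit": "cash", "amount": 30},
]

def to_single_entry(rows):
    out = []
    for r in rows:
        out.append({"account": r["debit"], "delta": r["amount"]})
        out.append({"account": r["credit"], "delta": -r["amount"]})
    return out

singles = to_single_entry(double_rows)
assert sum(e["delta"] for e in singles) == 0  # money is "conserved"
```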

That Kleppmann article talking about movements as edges of a DAG is the only place this is ever talked about clearly.

  • tetha · 3 weeks ago
From what I've learned from a few guys, double-ledger accounting is a technique that optimizes for consistency, error detection and fraud detection. Each movement of money or material should always be written down in two or possibly more places, ideally by independent people or systems.

Once you pair this with another entity or company doing the same, it becomes very hard for money or goods to vanish without the possibility of tracking them down. Either your books are consistent (sum of "stock" + sum of "ingress" - sum of "egress" - sum of "waste" makes sense), or something is weird. Either your incoming or outgoing goods match the other entity's outgoing or incoming records, or something is amiss in between.

This is more about anomaly detection, because paying a clerk can be cheaper than losing millions of dollars of material by people unloading it off of a truck en-route.

> Most explanations are along the lines of "Here is a simple explanation that you are guaranteed to understand: <proceeds to describe the procedure>" which lacks intuition on why this is useful

Double entry accounting is useful because it enables local reasoning. Let me explain why! If you've remembered those other explanations, you hopefully remember the fundamental equation of accounting:

    assets - liabilities = shareholder equity
Well, you can also define this a bit more broadly as

    assets + Δassets - liabilities - Δliabilities = equity
In other words, changes in assets or liabilities must also balance. I sometimes think of these as "income" and "expense," probably because GNUcash has accounts of that type. If you rearrange that expanded equation you get

    (assets - liabilities) + (Δassets - Δliabilities) = equity
If the grouping on the left represents the state before a transaction, and the grouping on the right represents a single transaction, then we get a super useful auditing tool: as long as the book is balanced before any given transaction, the book will remain balanced after the transaction as long as the transaction itself is balanced. You can now reason about a narrow set of things instead of the whole book!

In practice what this means is if your system is losing cents on every transaction as OP article states, each transaction should be flagged as imbalanced. From there it should be super easy to see why, since you can examine a single transaction.

To achieve this you need all entries / actions to have a unique transaction ID to group by, and a notion of which accounts are liabilities and which are assets, etc. As OP article mentions, there's a ton of implementation nuance.
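
A small sketch of that check (illustrative data model, amounts in cents): each entry carries a transaction ID and an account type, and a transaction is balanced when Δassets - Δliabilities is zero within it.

```python
ASSET, LIABILITY = "asset", "liability"

entries = [
    {"txn": 1, "type": ASSET, "delta": 100},       # cash in
    {"txn": 1, "type": LIABILITY, "delta": 100},   # matching loan
    {"txn": 2, "type": ASSET, "delta": -37},       # lost cents, no other leg
]

def imbalanced_txns(entries):
    """Group by transaction id; flag any where the equation doesn't balance."""
    totals = {}
    for e in entries:
        sign = 1 if e["type"] == ASSET else -1
        totals[e["txn"]] = totals.get(e["txn"], 0) + sign * e["delta"]
    return [t for t, s in totals.items() if s != 0]

assert imbalanced_txns(entries) == [2]
```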

Agreed, exactly the same here. I was hoping the article would explain, but sadly, no.

Imagine you’re running a physics simulation with conservation of energy: if energy shows up somewhere, it went down somewhere else. We shouldn’t assume a balance changed on its own without any other effects in the system. If you start “losing net energy” you can see what’s going on.
> A double-entry system is an accounting method that tracks money at both its source and destination.

There's an entire section on double-entry accounting in the article. The tl;dr is that if you take money out of an account, you need to place money in another account and vice versa. So, you have a table called "accounts receivable" which tracks money the company is owed. If you actually get paid, you remove money from "accounts receivable" and add money to the "cash" account, instead of just increasing the amount of cash you have.

It makes it much more difficult to lose track of money or have it stolen. In a single-entry system, I could receive money from a customer for services owed and just keep it for myself.

If you're looking for a pre-built double entry accounting Postgresql schema that provides database-level integrity checks:

https://github.com/adamcharnock/django-hordak

I created and maintain this, along with a couple of others. It is built for Django (so great if you're using Django), but extracting the schema wouldn't be too hard. It also has MySQL support, but the integrity checks are more limited.

(Side note: I'm a freelancer and available!)

There is a bigger lesson here. When you are creating something new, it pays to understand what came before. Do you really need to discard everything?

I gave a boorish lecture to a junior accountant recently...

When you make things up yourself it is just you and Excel. When you use double entry you have 100s of years of history to fall back on. https://en.m.wikipedia.org/wiki/Double-entry_bookkeeping

Though there's a weirder life lesson here: a lot (most? all?) of successes come from founders not having enough experience to know just how difficult their enterprise is.

More experienced founders would take a look at it and immediately nope out, and so never achieve success (after a grueling amount of work).

Accounting systems are super hard when you do them wrong and kind of trivial when you do them right.

There is no in-between.

Martin Fowler wrote quite a bit on the subject, and it's a good match for event sourcing.

Literally every startup I've worked at has had a busted accounting system that had to be painfully and expensively rewritten as a proper ledger. We should be teaching "Accounting for Programmers" as a required course in CS programs, or at least spreading memes about how money data must be append-only and immutable; otherwise you are being negligent in ways far more serious than most programming errors.
The Software Engineering department next door to my CS department had a mandatory course on accounting. I don't think CS needs accounting, but I did learn two skills that would have helped here: why one should never use floating point for money (which may not apply here, who knows), and how to stick to a formal specification. "Money is neither lost nor created" is the most trivial example one can think of for an invariant.

Relatedly, some months ago I asked how to correctly record food in a double-entry bookkeeping system [1] and it triggered a 1300-word, 8-level-deep discussion. We should remember more often that accounting is a degree by itself.

[1] https://news.ycombinator.com/item?id=39992107

Haha, same. https://news.ycombinator.com/item?id=39994886

I kept digging and digging on a "sell some lemonade for $5" example, and ended up at:

  - $5 debit to cash (asset => debit means +5)
  - $5 credit to revenue (equity => credit means + 5)
  - $X debit to cost of goods sold (expense => debit means + X)
  - $X credit to inventory (asset => credits mean - X)
A double-entry for the money, and a double-entry for the inventory, for a total of 4 entries.
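Rendering those four entries in Python (with an assumed cost X of $2.00, purely so the example runs), the check that makes it double entry is that total debits equal total credits:

```python
# The four lemonade entries as (account, side, amount-in-cents).
# COGS_X is the unspecified cost of the lemonade sold; $2.00 is an
# assumed value for illustration only.
COGS_X = 200

entries = [
    ("cash",               "debit",  500),
    ("revenue",            "credit", 500),
    ("cost_of_goods_sold", "debit",  COGS_X),
    ("inventory",          "credit", COGS_X),
]

# The double-entry invariant: both sides of the transaction agree.
debits = sum(amt for _, side, amt in entries if side == "debit")
credits = sum(amt for _, side, amt in entries if side == "credit")
```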

It's too complicated for me. I'd model it as a Sale{lemonade:1,price:$5} and be done with it. Nothing sums to zero and there's no "Equity + Income + Liabilities = Assets + Expenses" in my version.

But this is HN, and I think a lot of people would call my way of doing things "double" because it has both the lemonade and the money in it. So when I say I'm not sold on doing actual double-entry [https://news.ycombinator.com/item?id=42270721] I get sweet down-votes.

if you’re just doing first-party sales, the single-entry model you described is probably fine!

but every startup these days wants to become a marketplace where you are facilitating multiple third-party lemonade vendors and taking a cut for letting them use your platform. In that case, the flow of money quickly gets too hard to understand unless you have double-entry

I think it's because programmers find the translation between the computer model and the real world confusing.

Accounting predates computers by hundreds of years. There are better ways to do certain things, of course, but we should follow convention here because it's the norm and everyone understands it.

Computers evolved out of tabulating machines, and tabulating machines were designed to make pen-and-paper accounting easier. These are our roots and we should appreciate them.
> Instead of using negative numbers, Accounts have normal balance: normal credit balance literally means that they are normal when its associated entries with type credit have a total amount that outweighs its associated entries with type debit. The reverse is true for normal debit balance.

But that is an interpretation made by the viewer. A customer typically is an asset account, whose balances are in the debit column. But if we somehow owe them money because let's say they paid us an advance, then their balance should be in the credit column. The accounting system need not bother with what the "right" place for each account is.

It is quite practical to have only a simple amount column rather than separate debit/credit columns in a database for journal entries. As long as we follow a consistent pattern in mapping user input (debit = positive, credit = negative) into the underlying tables, and the same when rendering accounting statements back, it would remain consistent and correct.

> It is quite practical to have only a simple amount column rather than separate debit/credit columns in a database for journal entries. As long as we follow a consistent pattern in mapping user input (debit = positive, credit = negative) into the underlying tables, and the same when rendering accounting statements back, it would remain consistent and correct.

Another benefit of the credit/debit sides in double-entry bookkeeping is that you need to balance both sides within a single transaction. Say user account 2003201 is on the credit side and gets an addition of 1000; the same value needs to be added on the debit side. If it's (1) a cash top-up, then 1000 needs to be added to the cash account (say 1001001) on the debit side. Otherwise, if it's (2) a transfer from another user account, 203235, then that account needs to be debited 1000 as well.

It's Assets = Liabilities + Equity, where the left side of the equation is debit (which increases in value when a debit transaction happens) and the right side is credit (which increases when a credit transaction happens). In case (1), the cash account increases since it's a debit-side account, while in case (2) the user account decreases because it's a debit transaction on a credit-side account.

With negative/positive, the invariant would be sum(amount) = 0; with your approach, it would be sum(debit-credit)=0. Both are valid, it is just two ways of expressing the same thing.

I think it is useful to think about double-entry book-keeping in two layers. One is the base primitive of the journal - where each transaction has a set of debits and credits to different accounts, which all total to 0.

Then above that there is the chart of accounts, and how real-world transactions are modelled. For an engineer, to build the base primitive, we only need a simple schema for accounts and transactions. You can use either amount (+/-ve), or debit/credit for each line item.

Then if you're building the application layer which creates entries, like your top-up example, then you also need to know how to _structure_ those entries. If you have a transfer between two customer accounts, then you debit the one who's receiving the money (because assets are marked on the debit side) and credit the other (because liabilities are on the credit side). If you receive payment, then cash is debited (due to assets), and the income account is credited (because income balances are on the credit side).

However, all of this has nothing to do with how we structure the fundamental primitive of the journalling system. It is just a list of accounts, and then a list of transactions, where each transaction has a set of accounts that get either debited/credit, with the sum of the entire transaction coming to 0. That's it -- that constraint is all there is to double-entry book-keeping from a schema point.
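A minimal sketch of that base primitive, using Python's built-in sqlite3 (table and account names are stand-ins; a real system would add currencies, timestamps, and database-level constraints):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE accounts (id TEXT PRIMARY KEY);
    CREATE TABLE entries (
        tx_id   TEXT NOT NULL,
        account TEXT NOT NULL REFERENCES accounts(id),
        amount  INTEGER NOT NULL   -- signed minor units: +debit, -credit
    );
""")

def post(conn, tx_id, entries):
    """Atomically post a transaction, rejecting any that doesn't sum to 0."""
    if sum(amount for _, amount in entries) != 0:
        raise ValueError(f"unbalanced transaction {tx_id}")
    with conn:  # commit all rows together, or none
        conn.executemany(
            "INSERT INTO entries (tx_id, account, amount) VALUES (?, ?, ?)",
            [(tx_id, account, amount) for account, amount in entries],
        )

for name in ("cash", "customer_a", "customer_b"):
    conn.execute("INSERT INTO accounts (id) VALUES (?)", (name,))

# A top-up: cash is debited, the customer's liability account credited.
post(conn, "topup-1", [("cash", 1000), ("customer_a", -1000)])
```

The chart of accounts and the modelling conventions then live entirely in the application layer that calls `post`.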

> When I decided to write a post on ledgers, I already knew there were a few good resources out there to help me.

One that's not referenced in this article, and that compiles all of them, is: https://github.com/kdeldycke/awesome-billing#readme

I've seen this so many times. It is rather depressing. Even experienced devs hardly know enough about floats, rounding, etc. to implement these things, but they do it anyway. We were asked to troubleshoot a project for an online insurer; the devs did two conversions, JS and PHP. So the amount from the frontend ended up on the server, got converted, and went into the DB as an entry. It's depressing that frontend number conversions can affect the end result at all. But the PHP conversion was also wrong, so it was often cents off. I believe such people should never be allowed to hold a 'senior developer' job again, but this is really quite commonplace. Most companies don't have enough revenue to notice or care, but given how often we see it, I would say it's actually the norm.
While I've never worked on financial systems myself, I was told many times that monetary amounts should always be stored and manipulated as integer amounts of smallest units of currency (cents). One reason being that integer math is always exact, while floating-point isn't. And when money is involved, you want your math to be exact so everything always adds up every step of the way. So I'm always calling it out when someone is using floats to store money.
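The standard illustration of why: summing ten cents a hundred times.

```python
# Binary floats drift under repeated addition; integer cents don't.
float_total = 0.0
int_total = 0  # cents
for _ in range(100):
    float_total += 0.10   # 0.1 has no exact binary representation
    int_total += 10       # exact

# float_total is close to, but not exactly, 10.0
```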
Well, you could do this, but it would make working with the numbers in code much more complex. Today's high-level languages like Java or C# have a "decimal" datatype that should be used for money/accounting; these types also offer several extra digits of accuracy if required.
Everyone keeps focusing on double entry book keeping, but that's a ledger that's more suited to manual book keeping. We're in the computer age, people should be using the richer accounting model of REA:

https://en.wikipedia.org/wiki/Resources%2C_Events%2C_Agents

You can see that this model has all of the features discussed in the article, and then some, and REA events map naturally to something like event sourcing. You can project a REA dataset into a double entry ledger, but you often can't go the other way around.

I read the source you cited, but it seems like a foundational part of the model is basically double-entry accounting:

“At the heart of each REA model there is usually a pair of events, linked by an exchange relationship, typically referred to as the "duality" relation. One of these events usually represents a resource being given away or lost, while the other represents a resource being received or gained.”

This is what I see as the main difference between single-entry and double-entry accounting, with REA adding some OO concepts to the model to more accurately represent business objects when a computer is used. What am I missing about REA that makes it better than double entry as implemented the way this post describes?

Double entry bookkeeping (DEB) records entries in ledgers. There are no ledgers in REA, therefore REA is not double-entry bookkeeping.

You could argue that when representing both DEB and REA in an entity-relational model, they might have some similar looking tuples and relations, but that does not entail that they have the same data model. As I said in my initial post, REA is a richer data model that captures more information. You can reproduce the ledgers of DEB from a REA system, but you cannot go the other way in all cases.

Can you expand on why? Or link some more detailed blog post or writing so I can dig into it? I’m interested
The Wikipedia page has a good set of links for details, but basically REA is an ontology. Ontologies describe things that actually exist, so REA records real things that actually happened with a set of real economic actors.

DEB is a fictitious abstract model. "Accounts" and "ledgers" aren't real, they are fictions, artifacts of a model we use to indirectly track events. DEB doesn't even have a notion of economic actors that take part in an exchange. As such, it breaks down with multiparty transactions, for instance. DEB can of course be extended to handle such notions, but it's no longer just DEB and starts encoding something more like REA, just within a less meaningful foundation, eg. there is no such thing as a "normal balance" because this need results from a fictitious accounting model.

The article also mixes concerns that are not actually part of accounting but of different ontologies, eg. pending->discarded|posted is recording what may AND did happen, accounting is only supposed to record what actually happened. Which isn't to say that tracking what may happen isn't necessary, but mixing it into your accounting records is dubious, and simply muddies what's supposed to be a simple and reliable system that records what actually happened.

Just look at the sample REA pattern involving a cashier, customer and sales person. The information that this exchange involves 3 parties is not even captured in DEB. This is why I said you can reconstruct DEB from REA because REA is richer, but not the other way around.

Thank you
Pavel Hruby’s book, Model-Driven Design Using Business Patterns (Springer Verlag, Berlin, 2006) is to my knowledge the best book on this topic. It’s a pattern book so the format is a bit dry but the contents are great. I keep coming back to it and have implemented it several times over the years.
Awesome! Thanks for the help
I feel there is no excuse. Startups cut all sorts of corners but why? Get rich at other people's expense?

The tech founder should know their shit prior to building it so that time and runway and deadlines are no excuse. Really if you are doing fintech you should have employment experience and understand all the operations well.

Otherwise they are no better than, say, home builders who mismanage and go bankrupt. Or, less generously, they are conmen.

> Instead of using negative numbers, Accounts have normal balance: normal credit balance literally means that they are normal when its associated entries with type credit have a total amount that outweighs its associated entries with type debit. The reverse is true for normal debit balance.

I didn't understand this part, can someone give examples of good and bad approaches for both credit and debit accounts?

Could someone explain something fundamental to me about the need for double-entry accounting? Why can't your source of truth instead be proper SQL tables following these schemas (using Python tuple/list notation for convenience)?

  transactions: [
    (txID, timestamp, [
      (accountID, delta, otherInfo),
      ...
    ], reason),
    ...
  ]

  accounts: [
    (accountID, routingNumber, accountNumber, ownerID),
    ...
  ]
Crucially, notice that "accounts" don't track balances here, and so I'm asking: why would I need TWO entries per transaction, and why would I need to do my own tracking of the balance of every account, when I can just keep a single entry per transaction in a proper ACID database and then build any view I need on top (such as running balances) with a battle-tested projection mechanism (like a SQL View) so that I still track a single source of truth?
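For concreteness, here's the kind of projection I mean, as a toy in Python (the rows and account names are invented for the example):

```python
from collections import defaultdict

# One row per (tx, account, delta), mirroring the schema above.
# Balances are never stored; they're a pure projection over the rows.
rows = [
    ("tx1", "alice", -500),
    ("tx1", "bob",   +500),
    ("tx2", "bob",   -200),
    ("tx2", "fees",  +200),
]

def balances(rows):
    """Project per-account balances from the transaction rows."""
    totals = defaultdict(int)
    for _, account, delta in rows:
        totals[account] += delta
    return dict(totals)
```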
what you’ve sketched out already looks like a double-entry bookkeeping system, if you have the constraint that all of the deltas in your array on your transaction sum to zero (so money always _moves_ and is not created or destroyed)

calling it “double” might be misleading - in the ledger systems I’ve worked on, a transaction that has only two entries - a single source and a single destination - is actually rare. In practice, a payment tends to have a source, the intended destination, a secondary destination for platform fees, possibly another destination for taxes.

And much more complicated situations are possible - I currently work in medical billing where there are multiple virtual internal accounts involved in each transaction for different kinds of revenue (including, for example, money we billed for but don’t expect will actually get paid)

so a transaction becomes a bundle of a set of several debits or credits that happen simultaneously and sum to zero. if you have that, you have double-entry bookkeeping, even if you schema puts it all in one table

Ahh thanks so much, that clears it up for me! Every description of double entry I'd seen so far seemed to have the notion of keeping a running balance for each account (and then having a separate transaction for each one where they all sum to zero) as a central feature, so I thought avoiding that would make it no longer double-entry, and it always puzzled me why that was so crucial. Your comment made me see that wasn't actually the point!
Your transaction includes multiple entries. Your schema is multi-entry by definition :P

It's funny how many commentators here confuse debit/credit with double-entry.

SQL or not doesn't really matter. Double entry provides redundancy, and redundancy allows for auditing. It's like checksumming to detect errors.
I guess I'm asking: what would go wrong with the design in my comment, which makes transactions first-class and has a single entry for each one? What errors would double entry prevent that this doesn't?
You offered no design. Db schema has nothing to do with transaction guarantees.
Startups could improve in 2 ways:

Perhaps popular culture should reconsider the pattern of reaching for NoSQL databases and then spending tremendous effort trying to turn them into a relational database, given that NoSQL databases are often set up to be only eventually consistent.

Money, and accurate accounting don't work too well when the inputs to a calculation of a balance are eventually consistent.

So much work goes into avoiding learning SQL that it can rival, or exceed, the work required once NoSQL leads you down the path of inevitable relational needs.

In addition, maybe it's time to build honeypot ledgers with a prize for anyone who can hack or undermine them, similar to vulnerability bounties. Run for long enough, it would at least reduce one class of security mistakes in startups. Run it as a sprint with prizes, like daily fantasy leagues, and watch things harden.

Losing cents is because somebody didn't use Decimal for currency. And that's just flat-out malfeasance, possibly even regulatory malfeasance.

Double entry is irrelevant, here.

I love the "use crypto" to fix the problem suggestion. LOL!

It's kind of poignant since crypto has given me a fantastic club to beat executives over the head with whenever they want to do stupid handling of money. Since crypto has so many decimal places, you break dumbass floating point handling of money immediately and irretrievably for way more than just "a couple cents". Your CFO starts jumping up and down really excitedly about handling money properly when he has to worry about a rounding error forcing him to compensate a Bitcoin price jump/crash.

It could be rounding issues as well (roundings do not all cancel out), or having to divide an even amount unevenly (how do you pay $10 in three "equal" installments?) Even with infinite decimal precision there will be special handling involved.
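A common way to handle the installment case is largest-remainder allocation; a hypothetical sketch (function name and convention invented here):

```python
def split_even(total_cents, parts):
    """Split an amount into `parts` installments that differ by at most
    one cent and always sum back to exactly the original total."""
    base, remainder = divmod(total_cents, parts)
    # Hand the leftover cents, one each, to the first `remainder` parts.
    return [base + 1 if i < remainder else base for i in range(parts)]
```

The key property is that no cent is ever created or destroyed by the split, whichever installment the odd cent lands in.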
The payments industry is rife with nonsense like this.

At a fintech startup I was working with, we built a "shadow ledger" because we couldn't trust a third-party payment processor (PP) to give us the correct information, which would otherwise allow double spending on accounts.

We tried 3 different (major!) PPs - they ALL had similar flaws.

It's just like it was with pen and paper... Everyone trusts what is written on the financial statement but when they come to withdraw the cash, all at once, suddenly everyone finds out that the gold isn't there.

It's the same with computer systems. The charts show something, but until enough people decide to all withdraw their money or sell their stock at the same time, nobody has any idea that the money or asset simply isn't there or nobody knows just how frothy the valuation is.

Social media and search algorithms are highly optimized to ensure that people don't sell or withdraw stuff at the same time. Modern media directs massive attention towards certain topics as a way to draw attention away from other topics which could collapse the economy.

Also, imagine a bank has a serious bug which causes millions or billions of dollars to disappear every year or creates extra illegitimate dollars. Imagine they only discover this bug after a few years of operation... How likely is it that they will report it to an authority? They didn't notice it for years, why not pretend they didn't notice it for a few MORE years... The incentive to delay the reckoning is an extremely powerful one.

Precisely. Designing a transactional system is a solved problem. Designing a transactional system that properly entangles the bits with the assets they represent is the hard part.
Hey, great work!

I literally built the same system and posted an Ask HN about it, but no one answered. I'm already using double entry and a ledger, based on some research and AI advice, lol.

I implemented all the checks mentioned on your page and am quite satisfied using Postgres constraints and trigger functions to detect abnormalities before a bad row is inserted.

But the problem now is populating the database with fake data: because of how rigorous my constraints and rules are, I need to simulate real-world use cases and reproduce them on a development machine, and that is hard.

If you ever read this, can you give some advice please? Simply looping that insert function won't work because of the double-entry and ledger constraints.

A related pet peeve of mine is UIs that round off your investment holdings or cash.

Sometimes to 2 digits. Sometimes 3. Or 4. How many significant digits are there even in fractional SPY holdings? You just resort to looking in all the places (statements, transaction history, overview dashboard, …) and going with the one that shows the most digits, and assume that's all of them.

Big and small companies do this. And when I've reported it, they don't care. They don't see the problem.

Building fintech platforms is not easy, especially large systems, which require not only robustness and scale but also traceability and daily reconciliations. We even had features to undo and redo (correct and replay) transactions from a point in history.

I did not like finance much, but building a fintech system really teaches a lot, not only from technology perspective but from managing stakeholders, processes, compliance, dealing with all kinds of finance-specific issues.

Everyone should work in fintech at some point in their career.

Once upon a time I worked for a company that made accounting software. While there I had to explain to the other devs:

1) That being able to escape characters in CSV files is kind of important. (Some customers noticed that a single quote in a transaction description would make the importer silently fail to import the rest of the transactions... :S ), and

2) Why it wasn't good to store all dollar values as doubles. I'm reasonably sure I quoted Superman 3 at some point during this discussion.

One of my $previousjobs managed to invent its own kind-of, sort-of double-entry accounting system, but not quite. So whenever anything went sideways (and we interacted with Stripe, so something always went sideways), it required a lot of manual intervention, changing amounts in various tables.

I pushed for offsetting entries instead, but was overruled. It was a _nightmare_.

Thankfully, it is now a $previousjob.

If the author is reading this: Please stop hiding the scroll bar. It's user hostile.

I want to see if I have time to finish this article before I have to leave. And now I have to waste time copy-pasting the contents into a text file, just to see where I am.

Edit: Actually, it's invisible only in Firefox. Chrome shows the scrollbar still. So this may be a bug in the author's CSS or something.

I think “make it work, make it right, make it fast” is a solid approach.

It appears the startup in question just never even did step one. A financial system that is imprecise does not work.

Additionally that mantra is typically applied at the story/issue level in my experience as in:

1. Get the tests passing.
2. Get the code clean.
3. Identify scaling bottlenecks and remedy if necessary.

At $currentJob, we have decided to stick with Fineract for ledger maintenance and that helped us scale without having to deal with the nitty gritty details of accounting. If you don’t have the necessary expertise around you, offload that responsibility to a third party system that is maintained by those who do.
I don't understand what the difference in modeling between:

Entry(account, direction, non-negative amount), direction is debit or credit.

vs

Entry(account, signed amount), + is debit, - is credit (for example).

It's a two-way mapping and the two representations should be equivalent, unless a debit or credit amount could be negative. But as I understand it, that's a big no-no in accounting.
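The mapping is indeed trivial to state; a sketch (the direction strings and sign convention are arbitrary choices here):

```python
def to_signed(direction, amount):
    """Map (direction, non-negative amount) to a signed amount."""
    assert amount >= 0, "debit/credit amounts are never negative"
    return amount if direction == "debit" else -amount

def from_signed(signed):
    """Map a signed amount back to (direction, non-negative amount)."""
    return ("debit", signed) if signed >= 0 else ("credit", -signed)
```

As long as the same convention is applied on the way in and the way out, the two schemas round-trip losslessly.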

I guess the lesson here is to not bank with an a16z company. I have been using Wise for years. Very very happy.
> It’s just that…it goes without saying that fintech companies should know better

haha, if only.

move fast, break everything if it gives profit

The number of engineers I've met who realized only too late that, by definition, floats and doubles are absolutely *not suitable* for dealing with money is unbelievable. At this point it should be universal knowledge, but it isn't.
nit: double entry accounting only goes back to the 13th century, not thousands of years.
Fibonacci, if I remember correctly.
It took me a while to realise what double-entry bookkeeping actually was, then one day it hit me: it's literally just the flow of money through a system. Understand that, and you understand the cornerstone of accountancy.
if only accountancy 'learned' from OOP and just had one mutable balance for each account :)
Blockchains are examples worth learning from about how to build ledgers. Append-only transactions which start as pending and then eventually confirmed. Balances are cumsums of confirmed transactions.
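A toy version of that model (field names made up; the pending/confirmed statuses as described):

```python
# Append-only transaction log; the balance is a fold over the log,
# never a mutable field on an account.
log = [
    {"account": "alice", "delta": 1000, "status": "confirmed"},
    {"account": "alice", "delta": -300, "status": "confirmed"},
    {"account": "alice", "delta": -400, "status": "pending"},
]

def confirmed_balance(log, account):
    """Settled balance: cumulative sum of confirmed entries only."""
    return sum(tx["delta"] for tx in log
               if tx["account"] == account and tx["status"] == "confirmed")

def available_balance(log, account):
    """Spendable balance: pending entries count too, discarded ones don't."""
    return sum(tx["delta"] for tx in log
               if tx["account"] == account and tx["status"] != "discarded")
```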
> Not losing track of money is the bare minimum for fintech companies.

The last fintech I worked for had a joke about how you weren't really an employee until your code had lost some money.

Feeling vindicated for the double entry transaction system we built at clearvoice.com for our two-sided marketplace, leveraging the fantastic DoubleEntry Ruby Gem from Envato.
As an SWE, I've seen how the sausage is made and is the reason why I'm very very careful when using fintechs for anything.
This was many years ago. I had an injury that prevented me from driving, so I asked if there was any project on the back burner I could do on my own in 3 weeks without too much interaction (this was long before easy remote-work infra).

Sure there was. A financial services company wanted to replace their repayment plan generator that ran on an aging AS/400 with something running in .NET on Windows.

I dug in and learned all about the time value of money, numerical formats and precision, rounding strategies, day-counting strategies for incomplete periods (did you know there are many dozens of those), etc.

I made everything as configurable as I could so we had the option to offer it to other financial services clients by just setting up a different profile.

Since I had no interaction with the client before the first presentation meeting, I put in what to me felt like the most plausible config (I had managed projects in the domain before so I was not completely clueless to the mindset).

We had the meeting. I showed a few calculated plans which they compared on the spot to the AS/400 output. I deployed on a test VM for them so they could do more extensive testing. Code was accepted with 0 change requests and put into production shortly thereafter. Don't think they ever changed from the default settings.

An incredibly depressing thing in our profession is how we collectively lack memory.

We routinely rediscover stuff that was probably already solved by a quiet lady writing a CICS transaction on a System/360 in 1969.

Yeah, dealing with money, and especially other people's money, without double-entry bookkeeping is bad practice.

But this particular problem is the consequence of the choice of using floating point binary math to deal with monetary quantities.

Given that most modern languages have very awkward support for arbitrary-precision decimal numbers, the most sensible way to deal with money usually boils down to storing it as an integer.

I don't see how these are related (except, of course, for the errors in the second that could be caught by the first), and hopefully nobody in fintech is ignorant enough to use floating point for money?

Or are we talking about random scammers as 'fintech' now rather than banks (the fintech of which might indeed be old enough to be still running on COBOL-like systems) ?

Hot take: I blame the culture of obsession with CS topics, leetcode, tail recursion, dependent types etc and not enough focus on solid engineering and domain knowledge in areas encountered by 90% of real life jobs.

It starts in education and perpetuates itself via hiring, the blogosphere, and programmer celebrities.

Wait, why can’t I just use a Postgres database with the Decimal type and call it a day?
You need to keep state, so that when something goes wrong you can figure out that something went wrong.

PG's rounding can be summed up as: we round to the nearest digit you specify, but how we get there is mostly undefined.

Every bank and organization you do business with (especially if you cross jurisdictions) will likely round pennies differently. PG and the decimal data type (which is actually the numeric data type) are only part of the solution, as they won't handle rounding pennies correctly for you, but they will store the result, if you round properly yourself.

PG also has a money data type, but it also doesn't let you specify the rounding rules, so you have to be careful. Also, money is tied to lc_monetary, which is handled outside of PG's control, so you have to be very careful with it, when moving data across PG instances, or you might be surprised. Also, it doesn't let you specify which currency this money is, which means you have to store that yourself, if you ever have more than one currency to care about.

If you don't care about balancing to the penny with your external organizations (banks, CC processors, etc.), you are 100% guaranteed to have a bad time eventually.
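For example, with Python's decimal module, the same exact tie rounds differently depending on the rule you pick, which is exactly the choice the database won't make for you:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# An exact decimal tie: 2.665 is exactly halfway between 2.66 and 2.67.
amount = Decimal("2.665")

half_up = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
half_even = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)  # banker's rounding
```

Two systems rounding the same amounts with different rules will disagree by a penny here, so the rule itself has to be part of your reconciliation spec.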

Interesting article, thanks for sharing! I believe the things it explains are very important, and not only in fintech. A few years ago, even just trying to track a small organisation's spending and income in a spreadsheet, I hit some of the problems mentioned in the article and had to completely rethink and remake the spreadsheet to make it actually work.

But apart from the technical points, another interesting thing in the article is its introduction and what it says there about money as a concept. Yes, money is debt, both philosophically and technically it is the right way to think about it. And I believe that's something the crypto-assets industry and enthusiasts as a whole fundamentally gets wrong (mostly because of the libertarian political point of view blockchain tech has been designed with).

I'm not sold on double entry here.

If a new transaction enters the system now, I could follow the advice and record it as two sources of truth. Or I could just record it once.

If I could turn a single transaction into two ledger entries today, I could do the same later if I needed to.

the systems I’ve worked on that use your design end up in a state I call “the zoo” - because you never actually have simple transactions like you’re imagining. You end up with one object for credit card payments with stripe fees, another one for purchases with taxes, another one for ACH payouts, all meaning different things

double entry is actually the simplest projection of any possible billing system because anything that moves money can be described as a bundle of credits and debits that happen simultaneously and sum to zero. so you get accounts for taxes collected and fees charged automatically and don’t have to write custom logic for each thing
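That "bundle of credits and debits summing to zero" idea can be sketched in a few lines; the account names and fee split below are invented for illustration:

```python
# Every movement of money is a bundle of signed entries that must sum to
# zero; posting rejects anything unbalanced.
def post(ledger, entries):
    if sum(amount for _, amount in entries) != 0:
        raise ValueError("unbalanced transaction")
    for account, amount in entries:
        ledger[account] = ledger.get(account, 0) + amount

ledger = {}
# A $20.00 card payment with an $0.88 processor fee, amounts in cents:
post(ledger, [
    ("cash", 1912),
    ("processor_fees", 88),
    ("revenue", -2000),
])
```

Taxes collected or payout fees are just more entries in the bundle, so you get those accounts for free instead of writing custom logic per product line.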

zie · 3 weeks ago
You are thinking it's duplicating work, but it isn't. You are simply recording the whole transaction in a double entry system. i.e. you know where the money came from and where it went. That's all a double entry system does, records both sides of the transaction.

This means when something goes wrong, you have a chance to figure out what in the world happened.

That's it. If you want to think of it as a state machine, you are storing all the state that happened with the resource you are tracking (usually money). This way when some part of the resource is missing, you can go figure out what happened.

Think about it another way: always record enough of the transaction, that if you were required to sit in a courtroom in front of a jury and explain what happened, you could do so. If you can't do that, you didn't store enough information.

cts1 · 3 weeks ago
There's no redundancy. Imagine you have 2 buckets, you reach into bucket A and grab some cash and put it in bucket B. $x dollars leaving bucket A is entry part 1, $x dollars entering bucket B is entry part 2. That's all it is. I honestly don't understand what single entry is by comparison - it sounds like losing an essential attribute of the transaction.
"Two sources of truth" is an anti pattern in computing and isn't redundancy. Use replication or backups for that.

You can't imagine simpler than what you described because it's single entry.

Double-entry is twice as complicated, makes sense only to accountants and not to computing people. Your example of 1 transaction would be doubly-kept as some nonsense like

  BucketA revenue:$x CR cash:$x CR
  BucketB revenue:$x DB cash:$x DB
no. single entry is when you track BucketA’s balance without regard to other buckets at all
Single would be recording the two accounts, one being the source account and the other the destination, and a quantity.
single entry is like a receipt at a store, it says what you owed and what you paid, but there’s no “other side” that is getting credited or debited the same amount
codr7 · 3 weeks ago
Keeping entries separate makes it easier to deal with data for specific accounts, the transaction will contain both of them.
Learning and using the ledger-cli accounting tool taught me a lot about this. It's incredible how messy seemingly simple things can be and how much trouble a bunch of cents can cause. Seems to be the accounting version of off-by-one errors. It is very tempting to just write them off as losses and forget about them forever.

Rounding in particular is a truly endless source of trouble and has caused me to chase after a lot of cents. Dividing up a large payment into multiple installments is the major cause of rounding in my use case. Life starts to suck the second things fail to be evenly divisible. I created an account to track gains and losses due to rounding, and over time it's added up to quite the chunk of change.

Hilariously, the payment systems would charge me incorrect rounded up amounts and then they would refund the difference to my credit card at some undefined time in the future. Tracking and correlating all these seemingly random one or two cent transactions has got to be one of the most annoying activities I've ever learned to put up with. Not only do I have to figure out why things aren't quite adding up, I have to patch things up in the future when they fix the mistake.

Why can't these things just charge the correct amounts? For example, imagine splitting up $55.53 into four installments. That's 4x$13.8825. They could charge me 3x$13.88 + 1x$13.89. Instead they round it up to $55.56 and charge me 4x$13.89, then maybe they refund me $0.03 some unknown day in the future. It's like the systems go out of their way to be as annoying as possible. Some systems do this silly single cent refund dance in my credit card statement even though they print the exact same 3xN + 1x(N+0.01) solution in the receipt. Makes absolutely no sense to me.
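The fix described here is a few lines of integer-cent arithmetic (the function name is mine):

```python
def split_installments(total_cents, n):
    # Spread the remainder as single extra cents instead of rounding the
    # per-installment amount up and refunding the difference later.
    base, remainder = divmod(total_cents, n)
    return [base] * (n - remainder) + [base + 1] * remainder

parts = split_installments(5553, 4)  # $55.53 in four installments
```

This yields 3 x $13.88 + 1 x $13.89, summing exactly to $55.53, with no phantom three-cent refund weeks later.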

It's getting to the point I'm trying to avoid this nonsense by structuring purchases so the final price is evenly divisible by some common factors. I never liked the .99 cents manipulation trick but I seriously hate it now.

I'm sure there is a point in the article, but I've never seen a dancing cent nor can I imagine one. My numbers are all strings: "5" is "5" forever. If one somehow ends up storing it as 4.99, why would the other entry be correct?
Your second sentence tells us your first sentence was a lie. You can clearly imagine one which is why you specified which data type you use for money. You know a floating point issue is an issue.

Now let's say your price is "0.00023" per unit and someone uses "213.34" units. Can you imagine it now?

Question was about double entry. Does recording it twice solve floating point issues?
Yes -- or rather it is designed to identify the issue that floating point causes.

The main idea behind double ledger accounting is that if you add a long list of the same numbers twice (as totally independent operations), then if you have added both lists correctly, the two independent results will be the same. If you made at least one mistake in adding either or both of the lists, then it is possible, but unlikely, that the results will match.

It's easy to think that computers don't make mistakes like humans when adding numbers; however, floating point addition over a long sequence of numbers depends on the order of operations (and is error prone if you don't sort the numbers first and start with the small ones).

While double ledger systems won't fix this problem, they will identify if you have a problem with your addition (particularly a non-deterministic one like you would find with floating point addition) when you go to reconcile your books and find that the numbers across the various accounts don't add up.
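The order sensitivity is easy to demonstrate with contrived values (illustrative numbers, not from the thread):

```python
xs = [1e16, 1.0, 1.0, -1e16]

naive = sum(xs)                          # the huge term absorbs both 1.0s
small_first = sum(sorted(xs, key=abs))   # add small magnitudes first
```

`naive` comes out 0.0 while `small_first` comes out 2.0: the same list, two totals. With integer cents both orders agree, which is one reason reconciliation across independently-kept accounts catches this class of bug.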

> Does recording it twice

Double entry is (confusingly) not about recording it twice, it's about using a transaction model where the state of N accounts has to be changed in compensating directions within a transaction for it to be valid, N being >= 2.

So depending on how your transaction schema is defined, a double-entry transaction can be written without ever even repeating the amount, e.g.

    {"debit": "cash:main", "credit": "giftcards:1234", "amount": 100}
Making it effectively impossible to represent an invalid state change. Things get trickier when N > 2, as classical double-entry tends not to relate tuples of accounts directly to one another, instead relying on an aggregated balancing of the changes to the N accounts at the transaction level, though YMMV between different ledgering systems.
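The aggregate balancing rule for N > 2 can be sketched like this (field names follow the example above; the rest is assumption):

```python
def is_balanced(txn):
    # Valid iff the signed per-account deltas net to zero at the
    # transaction level, regardless of how many accounts are touched.
    return sum(entry["amount"] for entry in txn["entries"]) == 0

# A gift card issued from cash, with an issuance fee: three accounts, one txn.
txn = {"entries": [
    {"account": "cash:main",      "amount": -100},
    {"account": "giftcards:1234", "amount":   90},
    {"account": "fees:issuance",  "amount":   10},
]}
```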
There was no question that I was answering, simply someone claiming they have no idea how a $5 could end up being $4.98, while literally stating they know about floating point issues.
This is more a problem with ill-specified contracts though. It was a constant source of annoyance when I was sole-trading for a bit, because what happened was something like this:

I'd be quoted a day-rate. That was what I was actually going to get paid, one day. But then I'd be told to bill it as an hourly rate. And then actually to bill it as 7.5 hours.

But I wasn't told what the hourly was - the hourly was whatever my day rate was, divided by 7.5. So this led to the problem that it produced an irrational number as a result.

Technically this should've been fine...except no one I dealt with knew or cared about this concept - they all used Excel. So if I rounded the irrational to nearest upper cent (since that's the smallest unit which could be paid) they complained it didn't add up. If I added a "correction item" to track summing up partial cents, they complained it wasn't part of the hourly.

In the end I just sent python decimal.Decimal at maximum precision, flowed through invoices with like 8 digits of precision on the hourly rate, and this seemed to make Excel happy enough. Of course it was completely useless for tracking purposes - i.e. no one would ever be able to pay or unpay 0.666666666667 cents.

Because what's not in employment contracts that really should be? Any discussion on how numbers are to be rounded in the event of uneven division. You just get to sort of guess what accounting may or may not be doing. In my case of course it didn't matter - no one was ever going to hold me to anything other the day rate, just for some reason they wanted to input tiny fractions of a cent which they actually couldn't track.

And it's not an idle problem either: i.e. in the case of rounding when it comes to wages, should it be against the employee? It's fractions of a cent in practice, but we're not going to define it at all?

The example of the price and units is actually a real-world example. If you look at how much you pay for electricity you'll see you're paying something like 0.321 per KwH and you're not billed in full units.

Your issue is just people being lazy and forcing a day rate into an hourly employment system.

> the hourly was whatever my day rate was, divided by 7.5. So this led to the problem that it produced an irrational number as a result.

The only way for this to be true is if your day rate was irrational to begin with.

Irrational is the wrong word, it was a .3 or .6 repeater or something similar. Same effect: pile in digits so excel would round it off correctly back to the original rate I was quoted.
Assuming your day rate was a multiple of 10, it can only have been a .3 or .6 repeater (or an integer), because dividing a multiple of 5 by 7.5 gives you an integer number of thirds.
I mean I want to understand, not claim that the problem doesn't exist.

If the price is "0.00023" per unit and someone uses "213.34" units I feed those strings into a multiplication function that returns the correct string 100% of the time.

That much I understand. I don't get how that category of problems is addressed by the solution described.

What I also understand is that you inevitably get to deal with accountants or other finance people. Working in a format they understand is not optional. They will have you transform whatever you have into that anyway.

I've learned not to wonder why, but maybe I should.

> If the price is "0.00023" per unit and someone uses "213.34" units I feed those strings into a multiplication function that returns the correct string 100% of the time.

But you're not coming up with a valid monetary amount.
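Exact decimal math sidesteps binary floats, but the product still isn't a chargeable amount until you round it, so a rounding rule is unavoidable. A Python sketch of the example above:

```python
from decimal import Decimal, ROUND_HALF_EVEN

price = Decimal("0.00023")
units = Decimal("213.34")

exact = price * units  # exact, but not a valid monetary amount
charge = exact.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
```

`exact` is 0.0490682; the customer can only be billed 0.05, and the 0.0009318 difference has to land in some account or the books slowly drift.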

Say I do it wrong, wouldn't the other account have the same problem if it also counts money?
Seems like your definition of units is wrong. How can I sell .34 of an HVAC unit?
Btw, for those wondering what to do instead: simply use https://tigerbeetle.com/
I found this paragraph

> Prerequisites: TigerBeetle makes use of certain fairly new technologies, such as io_uring or advanced CPU instructions for cryptography. As such, it requires a fairly modern kernel (≥ 5.6) and CPU. While at the moment only Linux is supported for production deployments, TigerBeetle also works on Windows and MacOS.

wha— what? why?? they must be solving some kind of scaling problem that I have never seen

Thank you for posting this.

I’m trying to get my head around how to build a fairly complex ledger system (for managing the cost schedules in large apartment buildings where everyone might pay a different proportion and groups of apartments contribute towards differing collections of costs) and you’ve just massively accelerated my thinking. And possibly given me an immediate solution.

Have you used tigerbeetle in production?

2mol · 3 weeks ago
I mean, tigerbeetle looks extremely cool (I've watched the project develop since its inception), and I trust them to be rock-solid. But saying "just use this project that is very new and unproven, written in a new and unproven programming language" is just pretty unserious. At least talk about pros, cons, risks, and tradeoffs.
>very new and unproven, written in a new and unproven programming language

While I'd generally agree with this, in the case of TigerBeetle I think it's safe to trust their tech based on their test suite and practices - one of the most impressive things I've seen in the past 5 years.

they extend the testing practices of FoundationDB (now spun out into Antithesis[0]), going a layer deeper to test for data integrity when there is disk io corruption. check out from ~20:30 in this demo:

https://youtu.be/sC1B3d9C_sI?feature=shared&t=1228

[0] https://antithesis.com

Fintech is a safe haven for scams.
> A double-entry system is an accounting method that tracks money at both its source and destination.

Nope. Double entry bookkeeping means every transaction is recorded in (at least) two accounts.

To illustrate the difference (hopefully I get this right): A transaction needs to balance, left vs right.

For example, you receive a sales order for 1000€ of widgets, which cost you 600€. You ship them. You invoice for 1000€.

No money moved (hopefully you do get paid at some point though). However, you need to do some bookkeeping.

On the left side of the ledger (debits): Accounts receivable 1000€. Cost of goods sold 600€.

On the right side of the ledger (credits): Revenue goes up by 1000€. Inventory goes down by 600€.

These completely match. No money has moved, but the books are now up to date and balance.

Any transaction that does not balance should be rejected.
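The widget-sale example above, written as the balance check that should gate every posting (amounts in euro cents):

```python
# Left side of the ledger (debits) vs right side (credits) for the
# widget sale: AR and COGS against revenue and inventory.
debits = {"accounts_receivable": 100_000, "cost_of_goods_sold": 60_000}
credits = {"revenue": 100_000, "inventory": 60_000}

# Reject any transaction whose two sides don't match.
balanced = sum(debits.values()) == sum(credits.values())
```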

Double-entry accounting is like strongly typed programming languages.

Yeah it's a real pain to get started because you need to understand the core concepts first then fight to balance the transactions ("compile errors") before you have anything useful. And when your results are wrong, you know that at least it's not because of the basic stuff.

SV Tech Bros roasting themselves on HN.

Every engineer in London is laughing at this right now.

Wow double entry! Ledgers are hard! Wow. So true.

swyx · 3 weeks ago
> And yet, I used to work for a startup that, on every transaction, simply lost track of a couple of cents. As if they fell from our pockets every time we pulled out our wallets.

is this not the literal plot of Office Space? did you check for a Michael Bolton employee?

yfw · 3 weeks ago
Great post!
The blockchain industry is basically founded on incompetence. If you go back to early Silk Road and Mtgox you can see some shocking things. Like the founders were posting questions on Stackoverflow about database basics. They then ended up building systems with race conditions (withdrawal code for both systems.)

See, if you're an engineer (from a web background) you might think that using a regular backend language with a relational DB would be fine. But that stack is typically optimized for concurrency. That might sound amazing, but what you really want is a simple queue. This ensures everything is applied in order and makes it impossible to accidentally have transactions execute on stale state. Plus, a queue can be executed amazingly fast -- we're talking Nasdaq scales. If you use a standard DB you'll end up doing many horrible broken hacks to simulate what would be trivial engineering had you used the right design from the start.
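The single-queue design can be sketched as one worker thread draining a FIFO queue, so every transfer applies in order against one authoritative state. This is an illustration of the idea, not any real exchange's code:

```python
import queue
import threading

balances = {"alice": 1000, "bob": 0}  # integer cents
q = queue.Queue()

def worker():
    # Only this thread ever touches `balances`, so no transfer can
    # execute against stale state.
    while True:
        item = q.get()
        if item is None:  # shutdown sentinel
            break
        src, dst, amount = item
        if balances[src] >= amount:
            balances[src] -= amount
            balances[dst] += amount

t = threading.Thread(target=worker)
t.start()
for _ in range(3):
    q.put(("alice", "bob", 400))  # the third transfer must be refused
q.put(None)
t.join()
```

After the queue drains, alice holds 200 and bob 800: the third 400-cent withdrawal was rejected instead of racing a concurrent check-then-debit into the negative.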

You've got other things to worry about, too. Financial code needs to use integer math for everything. Floating point and 'decimals' lead to unexpected results. The biggest complexity with running large-scale blockchain services is having to protect 'hot wallets.' Whenever an exchange or payment system is hacked the target is always the funds that sit on the server. When the industry began there were still many ways to protect the security of these hot wallets. The trouble is: it required effort to implement and these protocols weren't widely known. So you would get drive-by exchanges that handled millions of dollars with private keys sitting on servers ready to be stolen...

Today there are many improvements for security. New cryptographic constructs like threshold ECDSA, hardware wallets, hardware key management (like enclaves), smart contract systems (decentralized secret sharing... and even multi-sig can go a long way), and designs that are made to be decentralized that remove the need for a centralized deposit system (still needs some level of centralization when trading to a stable coin but its better than nothing.)

I would say the era where people 'build' ledgers, though, is kind of over. To me it appears that we're organizing around a super-node structure where all the large 'apps' handle their own changes off-chain (or alternatively based on regular trust.) The bottom layer will still support payments but it will be used less often, with more transaction activity happening on layers above. I think it's still important to scale the chain and make it as secure as possible. Bitcoin has a unique focus here on security above everything else, making it ideal for long-term hedges. For everyday stuff I can't think of any chains that have more credible R & D than Ethereum. They seem to even have been doing research on the P2P layer now, which traditionally no one has cared about.

> Financial code needs to use integer math for everything. Floating point and 'decimals' lead to unexpected results.

Decimals are integers. There's no difference between integer math and decimal math, only in the meaning of the bit pattern afterwards.

lmm · 3 weeks ago
> Decimals are integers

No, most decimal types are decimal floating-point. (So, sure, technically redundant, but a lot of people say "floating point" to mean IEEE 754 specifically, even if that's technically incorrect).

The whole "move fast and do a shit job" is laughably stupid. It's pure incompetence and egotism and a product of excess salary in the market.

Doing it the right way doesn't take any longer than doing it the shitty way. Successful startups focus on extreme minimalism, not on doing the worst possible job.

dang · 3 weeks ago
Ok, but please don't fulminate on HN. This is in the site guidelines: "Please don't fulminate."

https://news.ycombinator.com/newsguidelines.html

Except if you don't actually know what to do, or even what concepts to apply. This is where I see many start ups fall down.
lmm · 3 weeks ago
OK but apparently they did get to make the startup mistakes? They built the quick thing that worked well enough, got some customer traction, and then when they had bugs they were able to rework it and continue.

Frankly I'm not even convinced that double-entry is the sole right answer in this space. There are things you need to be able to represent, but the original reasons for doing double-entry (making errors when subtracting figures) no longer apply.

(I've worked at investment banks and fintech startups)

Hmm, can you elaborate on the original reasons for double entry and them not applying anymore? I'm not in that space and double entry always seemed an extremely weird, arbitrary requirement / paradigm. Thanks!
The main point of double-entry account keeping is the notion that money never vanishes, it's always tracked as going somewhere.

I think this tends to get misrepresented by people trying to literally keep two entries like we were working with pen and paper book-keeping though.

Because if I have a simple list of transactions, I can easily use a SQL query to recreate a view of double-entry book-keeping. It's perfectly fine to just record the amount of money, and two entity names in a table.
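A sketch of that projection using sqlite (the schema and account names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE transfers (src TEXT, dst TEXT, cents INTEGER)")
con.executemany("INSERT INTO transfers VALUES (?, ?, ?)",
                [("cash", "inventory", 600), ("cash", "rent", 1000)])

# Each single-entry row becomes a credit on the source and a debit on
# the destination -- the classic double-entry view, derived on demand.
rows = con.execute("""
    SELECT src AS account, -cents AS delta FROM transfers
    UNION ALL
    SELECT dst AS account,  cents AS delta FROM transfers
""").fetchall()

balances = {}
for account, delta in rows:
    balances[account] = balances.get(account, 0) + delta
```

Every transfer nets to zero across its two projected entries, so the derived balances always sum to zero: nothing vanishes.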

Because the whole point of the system is that you can explain where the money went at any given time, referenced against something which should be independently auditable: i.e. a vendor receipt which in turn says "there's a physical item X we gave you".

The "double entry" system's actual merit is when there's multiple people keeping the books. i.e. if you're in a business and your department is spending money on goods, then when you're doing that those transactions should be getting reflected in the shipping departments books (owned by a different person) and maybe your accounting department or the like. The point is that there have to be actual independent people involved since it makes fraud and mistakes hard - if you say you bought a bunch of stuff, but no one else in the business received it or the money to pay for it, then hey, you now need to explain where the money actually is (i.e. not in your pocket).

Cool post, wish it existed 2 years ago when we started building Pave Bank, or 10 years ago when we started building Monzo.

If you're starting a bank or need a ledger these days (and aren't using a core banking provider that has one), then I usually recommend TigerBeetle.