Banks have to follow strict rules to account for where all the money goes. Fintechs, by contrast, usually have just one or a few underlying "FBO" (for benefit of) accounts where all the pooled money is held; the fintech then builds a ledger on top of this (to varying levels of engineering competence, as the article points out) to track each individual customer's balance within that big pool of money. In Synapse's case, their ledger said the total of all their individual customer balances was much more than the actual funds held in the underlying FBO accounts. Lots of folks are assuming fraud, but I'm willing to put money on it being just a shitty, buggy ledger.
FWIW, after seeing "how the sausage is made", I would never put money into a fintech depository account. Use a real bank. Fintechs also often put out the fake promise that deposits are FDIC insured, but this only protects you if the underlying bank goes belly up, not if the fintech loses track of your money.
See https://www.forbes.com/sites/zennonkapron/2024/11/08/what-th...
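To make the failure mode concrete, here is a minimal sketch of the reconciliation check a fintech's ledger needs against the pooled FBO balance. Names and schema are hypothetical, nothing to do with Synapse's actual code:

    # Invariant: the sum of every customer's ledger balance must equal the
    # actual funds held in the FBO account at the partner bank.
    from decimal import Decimal

    def fbo_discrepancy(customer_balances: dict, fbo_balance: Decimal) -> Decimal:
        """Return ledger total minus real money. Nonzero means the internal
        ledger and the pooled account have diverged, the failure described above."""
        ledger_total = sum(customer_balances.values(), Decimal("0"))
        return ledger_total - fbo_balance

    # Example: the ledger claims $85 of customer balances, but the FBO
    # account only holds $60, so $25 is unaccounted for.
    gap = fbo_discrepancy(
        {"alice": Decimal("50.00"), "bob": Decimal("35.00")},
        fbo_balance=Decimal("60.00"),
    )
    assert gap == Decimal("25.00")

Run daily (or continuously) against the bank's actual statement, a nonzero result is an alarm, not a rounding curiosity.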
Backed by Andreessen Horowitz, who are conducting a scorched-earth jihad against all government regulation.
https://finance.yahoo.com/personal-finance/synapse-bankruptc...
After an asset bubble and collapse people will understand why we have a lot of the regulations from the 1930s.
In previous crises people could depend on being educated by mostly responsible media. Today both mainstream and social media are entertainment first and don't care for truth or their role in educating society.
It is more likely people will be taught to blame some boogeymen who have nothing to do with the problem rather than address the real one.
Just a minor point: I would say it is ~120 years rather than 70. There was influential journalism through the early 20th century as professional journalism took shape, which influenced policy, notably the antitrust actions taken under the Sherman Act in the early 1900s.
Very briefly: you are not wrong, yellow journalism has a long history and is not new. In America, from the times of the Federalist Papers through slavery and Jim Crow and the antisemitism of the '30s, to civil rights and into modern times, it has been a powerful tool to shape public opinion.
There is nuance to this, however: the era of professional journalism has been brief, only 100 or so years. In that time, media had the most impact in shining a light on the truth, notably with reporting on Watergate, the Pentagon Papers, or Hersey on Hiroshima, and so on. That era is coming to an end.
As the cost of publishing drops by orders of magnitude with every generation of technology, as it did with the cheap and fast printing press, radio, broadcast and then cable TV, and finally the internet and mobile, the problem becomes more massive and much harder to regulate, and the quality of public discourse and the nature of truth degrade with it.
Basically it boils down to this: we had a good (relatively) 100-year run with media and a corresponding improvement in civil liberties and good governance. We can no longer depend on an educated public taking decisions, sooner or later, in the right direction like we have for the last century or so.
1. Bring back those laws requiring fairness of media representation.
2. Force standardized disclosure of sponsored content of any type (total, segments, placement). Many countries already do this. Standardized = big, high contrast "Ad" sign in the corner with mandatory size proportional to content size.
3. Mandate providing sources.
4. Treat all influencers with an audience above NNN followers (10000?) as mass media.
5. Require that widely shared content is fact checked and that fact checking is automatically included while sharing and provide recourse for fact checking up to the legal system.
6. For sensitive topics (politics, primarily) require AML and KYC disclosures of funding, primarily to find foreign funding sources and propaganda.
However, you know, vested interests, the bane of humanity.
There is no way for this not to be censorship, and not to be used to suppress less powerful opposition, which is exactly how it was used in the past. Plus, just look at what both-sidesism currently does: it motivates journalists to write as if both sides were equal in situations where they clearly are not.
> Require that widely shared content is fact checked and that fact checking is automatically included while sharing and provide recourse for fact checking up to the legal system.
Fact checking is irrelevant to public opinion. And again, it is not that difficult to bias it.
Fairness of media representation seems hard to define and prone to abuse.
But mandating that financial conflicts be disclosed and ads labeled seems reasonable.
Of course anyone should be free to publicly say anything, however untrue it might be.
Should they be free to broadcast their nonsense to millions of people?
I don't know, but I do feel these are two different things.
Spot on: today you can do that as close to free as possible. In eras past that was not possible; it was expensive, so only a few could do it, and that served as a moderating influence. It was not easy for fringe beliefs to become mainstream. The gatekeeping had the downside of suppressing voices, particularly minority and oppressed voices, so it was not all rosy.
The only thing we know is that we can no longer use the past as a reference to model the future of politics, governance or media, or which institutions will survive, and in what distorted versions, in even 10-30 years.
> Spot on, today you can do that as close to free as possible.
Are you sure?
You can author a tweet for free, yeah. Then you let Musk do the broadcasting if he so pleases. Users have no control over the broadcasting; platforms do.
Someone who "steals less" is not better.
Still fucking thieves.
Some men with some power using starry-eyed young people to grab much more power from incumbents.
Prior to one of these hiccups, I hypothesized, given how shitty the codebase was, that they must be tracking this stuff poorly.
This led to an argument with my boss, who assumed things magically worked.
Days later, we received an email announcing an audit on one of these accounting discrepancies.
JPMC proposed using crypto, internally, to consistently manage cash flow.
Not sure if it went anywhere.
If any discrepancies are found that persist over some time horizon it can be cause to stop all activity.
Wireshark has bugs, yes. Mostly in the dissectors and in the UI. But the packet capture itself is through libpcap. Also, to point out the obvious: pcap viewers in turn are auditable if and when necessary.
SPAN ports are great for network troubleshooting. They're also nice for security monitors, such as an intrusion detection system. The IDS logically sees traffic "on-line," but is completely transparent to users. If the IDS fails, traffic fails open (which wouldn't be acceptable in some circumstances, but it all depends on your priorities).
No, really, I get where you and your parent are coming from. It is a low probability. But occasionally there is also thoroughly verified application code out there. That is when you are asking yourself where the error really is. It could be any layer.
Parsing the pcaps is much more prone to bugs than capturing and storing, but that’s easier to verify with deserialize/serialize equality checks.
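A minimal sketch of that deserialize/serialize equality check, assuming scapy and an Ethernet capture; the file name is a placeholder:

    # Round-trip check: parse each captured packet, re-serialize it, and
    # compare against the original bytes. Any mismatch means the parser
    # is lossy or buggy for that packet.
    from scapy.all import rdpcap, Ether, raw

    def count_roundtrip_mismatches(pcap_path: str) -> int:
        mismatches = 0
        for pkt in rdpcap(pcap_path):        # loads whole file; fine for a sketch
            original = raw(pkt)              # bytes as captured on the wire
            reparsed = raw(Ether(original))  # parse, then serialize again
            if reparsed != original:
                mismatches += 1
        return mismatches

    assert count_roundtrip_mismatches("capture.pcap") == 0  # parser is lossless here

The same idea works with any pcap library; the point is that the verification is mechanical, unlike verifying the capture path itself.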
And how did I get that figure?
I'm going to fold pcap overhead into the per-message size estimate. Let's assume a trading day at an exchange, including after hours activity, is 14 hours. (~50k seconds) If we estimate that during the highest peaks of trading activity the exchange receives about 2M messages per second, then during more serene hours the average could be about 500k messages per second. Let's guess that the average rate applies 95% of the time and the peak rate the remaining 5% of the time. That gives us an average rate of about 575k messages per second. Round that up to 600k.
If we assume that an average FIX message is about 200 bytes of data, and add 50 bytes of IP + pcap framing overhead, we get to ~250 bytes of transmitted data per message. At 600k messages per second, 14 hours a day, the total amount of trading data received by an exchange would then be roughly 7.5TB per day.
Before compression for longer-term storage. Whether you consider the aggregate storage requirements impressive or merely slightly inconvenient is a more personal matter.
0: https://robertwray.co.uk/blog/the-anatomy-of-a-fix-message
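For anyone who wants to rerun the estimate, the arithmetic in one place (same assumptions as above):

    # Back-of-the-envelope pcap storage estimate for one exchange day.
    seconds = 14 * 3600                          # trading day incl. after hours, ~50k s
    avg_rate, peak_rate = 500_000, 2_000_000     # messages per second
    blended = 0.95 * avg_rate + 0.05 * peak_rate # = 575k; round up to 600k
    msg_bytes = 200 + 50                         # FIX payload + IP/pcap framing

    daily_bytes = 600_000 * msg_bytes * seconds
    print(f"{daily_bytes / 1e12:.2f} TB/day")    # ~7.56 TB, before compression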
It could be funny, though: you might be able to bump up your archive storage requirements by changing an IP address, or have someone else do that. But that's life.
It generally seems to be a thing in trading: https://databento.com/pcaps
There is also this (though this page does not specify what pcap means): https://www.lseg.com/en/data-analytics/market-data/data-feed...
Commonly used in finance.
There are systems you can buy (eg by Pico) that you mirror all traffic to and they store it, index it, and have pre-configured parsers for a lot of protocols to make querying easier.
Think Splunk/ELK for network traffic by packet.
Most modern trading systems performing competitive high-frequency or event trades have performance thresholds in the tens of nanos, and the only place to land at that sort of precision is running analysis on a stable hardware clock.
Some firms will also capture market data (ITCH, PITCH, Pillar Integrated) at the edge of the network at a few different cross connects to help evaluate performance of the exchange’s edge switches or core network.
As a contractor, I helped do some auditing on one crypto exchange. At least they used a proper double-entry ledger for tracking internal transactions (built on top of an SQL database), so it stayed consistent with itself (though accounts would sometimes go negative, which was a problem).
The main problem is that the internal ledger simply wasn't reconciled with the dozens of external blockchains, and problems crept in all the time.
I know you're not arguing in their favor, just describing a reality, but the irony of that phrase is through the roof :-)))
Especially the "centralized crypto".
The reason they are now called "centralised crypto exchanges" is that "decentralised crypto exchanges" now exist, where trades do actually happen on a public blockchain. Though, a large chunk of those are "fake", where they look like a decentralised exchange, but there is a central entity holding all the coins in central wallets and can misplace them, or even reverse trades.
You kind of get the worst of both worlds: you are now vulnerable to front-running, they are slow, and the exchange can still rug pull you.
The legit decentralised exchanges are limited to only trading tokens on a given blockchain (usually ethereum), are even slower, are still vulnerable to front-running. Plus, they spam those blockchains with loads of transactions, driving up transaction fees.
Yikes, how hard is it to just capture an immutable event log. Way cheaper than running crypto, even if only internally.
Oddly enough, I worked at a well known fintech where I advocated for this product. We were already all-in on AWS so another service was no biggie. The entrenched opinion was "just keep using Postgres" and that audits and immutability were not requirements. In fact, editing ledger entries (!?!?!?) to fix mistakes was desirable.
If you're just using PG as a convenient abstraction for a write-only event log, I'm not completely opposed; you'd want some strong controls in place around ensuring the tables involved are indeed 'insert only' and have strong auditing around both any changes to that state as well as any attempts to change other state.
> In fact, editing ledger entries (!?!?!?) to fix mistakes was desirable.
But it -must- be write-only. If you really did have a bug or fuck-up somewhere, you need a compensating event in the log to handle it, and it had better have some sort of explanation to go with it (see the sketch below).
If it's a serialization issue, team better be figuring out how they failed to follow whatever schema evolution pattern you've done and have full coverage on. But if that got to PROD without being caught on something like a write-only ledger, you probably have bigger issues with your testing process.
[1] https://docs.aws.amazon.com/qldb/latest/developerguide/what-...
[2] https://aws.amazon.com/blogs/database/replace-amazon-qldb-wi...
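For the curious, a minimal sketch of that write-only discipline in plain Python: a hash-chained append-only log where mistakes are fixed by compensating entries, never edits. (Hypothetical structure, not QLDB's actual data model.)

    import hashlib, json, time

    class AppendOnlyLedger:
        def __init__(self):
            self._entries = []                      # appended to, never mutated

        def append(self, entry: dict) -> dict:
            prev = self._entries[-1]["hash"] if self._entries else "genesis"
            body = {**entry, "ts": time.time(), "prev": prev}
            # Chaining each entry to its predecessor makes edits evident.
            body["hash"] = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            self._entries.append(body)
            return body

        def reverse(self, bad: dict, reason: str) -> dict:
            """The compensating event: never edit the bad entry, negate it."""
            return self.append({"kind": "reversal", "reverses": bad["hash"],
                                "account": bad["account"],
                                "amount": -bad["amount"], "reason": reason})

    ledger = AppendOnlyLedger()
    oops = ledger.append({"kind": "credit", "account": "alice", "amount": 100})
    ledger.reverse(oops, reason="fat-fingered: should have been 10")
    ledger.append({"kind": "credit", "account": "alice", "amount": 10})

In Postgres you'd back this with an insert-only table (no UPDATE/DELETE grants) and audit any privilege changes.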
https://www.sec.gov/enforcement-litigation/whistleblower-pro...
Isn’t this how crypto coins work under the hood? There’s no actual encryption in crypto, just secure hashing.
Internally in your company you're not going to spend millions of $'s a year in GPU compute just to replace a database.
Yeh, but that's kinda my point: if your primary use case is not "needs to be distributed" then there's almost never a benefit, because there is always a trusted authority and the benefits of centralisation outweigh (massively, IMO) any benefit you get from a blockchain approach.
See our Show HN: https://news.ycombinator.com/item?id=42184362
We’ve seen interest from trading groups for edge collaboration, so multi-user apps can run on-site without cloud latency.
Perhaps worth seeding the convo with a remark about finality.
Perhaps instead of your ideas, it’s worth seeding your own personal make up with a firm statement of ethics??
Are you the kind of person who will hijack conversations to promote your product? Or do you have integrity?
Just purely out of concern for your business, do you have a cofounder who could handle marketing for you? If so, consider letting her have complete control over that function. It’s genuinely sad to see a founder squander goodwill on shitty marketing.
Spam would be raising the topic on unrelated posts. This is a context where I can find people who get it. The biggest single thing we need now is critical feedback on the tech from folks who understand the area. You’re right I probably should have raised the questions about mergability and finality without referencing other discussions.
Because I don’t want to spam, I didn’t link externally, just to conversation on HN. As a reader I often follow links like this because I’m here to learn about new projects and where the people who make them think they’ll be useful.
ps I emailed the address in your profile, I have a feeling you are right about something here and I want to explore.
I think you need to reread the conversation, because you did post your marketing comment while ignoring the context, making your comment unrelated.
If you want it distilled down from my perspective, it went something like this:
> Trog: Doubts about the necessity of Merkle trees. Looking for a conversation about the pros and cons of Merkle trees and double ledger accounting.
> You: Look at our product. Incidentally it uses Merkle trees, but I am not going to mention anything about their use. No mention of pros and cons of Merkle trees. No mention of double ledger accounting.
Merkle proofs are rad b/c they build causal consistency into the protocol. But there are lots of ways to find agreement about the latest operation in distributed systems. I've built an engine using deterministic merge -- if anyone wants to help with lowest common ancestor algorithms it's all Apache/MIT.
While deterministic merge with an immutable storage medium is compelling, it doesn't solve the finality problem -- when is an offline peer too out-of-date to reconcile? This mirrors the transaction problem -- we all need to agree. This brings the question I'm curious about to the forefront: can a Merkle CRDT use a Calvin/Raft-like agreement protocol to provide strong finality guarantees and the ability to commit snapshots globally?
Apologies for the noise.
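For readers following along, a toy Merkle root in Python; this is just the core trick, not the engine linked above:

    import hashlib

    def sha(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def merkle_root(leaves: list) -> bytes:
        # Hash each leaf, then hash pairs level by level up to a single root.
        level = [sha(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:               # odd count: duplicate the last node
                level.append(level[-1])
            level = [sha(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    ops = [b"op1", b"op2", b"op3"]
    root = merkle_root(ops)
    # Any tampered operation changes the root, which is what makes history
    # tamper-evident and lets peers compare state by exchanging one hash.
    assert merkle_root([b"op1", b"opX", b"op3"]) != root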
With a blockchain, you simply go back, "fork", apply a fixed transaction, and replay all the rest. The difference is that you've got a ledger that's clearly a fork because of cryptographic signing.
With a traditional ledger, you fix the wrong transaction in place. You could also cryptographically sign them, and you could make those signatures depend on previous state, where you basically get two "blockchains".
Distributed trust mechanisms, usually used with crypto and blockchain, only matter when you want to keep the entire ledger public and decentralized (as in, allow untrusted parties to modify it).
No you don’t. You reverse out the old transaction by posting journal lines for the negation, and in the same transaction you include the proper booking of the balance movements.
You never edit old transactions. It’s always the addition of new transactions so you can go back and see what was corrected.
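As data, the pattern looks like this (account names illustrative; amounts in cents, debits positive and credits negative):

    # The erroneous transaction, left untouched in the ledger.
    bad_posting = [("cash", +100_00), ("accounts_receivable", -100_00)]

    # Step 1: post journal lines that exactly negate the error.
    reversal = [(account, -amount) for account, amount in bad_posting]

    # Step 2: book the movement properly.
    rebooked = [("cash", +10_00), ("accounts_receivable", -10_00)]

    # Nothing was edited; history shows error, reversal, and correction,
    # and every transaction still balances:
    for tx in (bad_posting, reversal, rebooked):
        assert sum(amount for _, amount in tx) == 0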
That's not how accounting works. You post a debit/credit note.
You're handwaving away a LOT of complexity there. How are users supposed to trust that you only fixed the transaction at the point of fork, and didn't alter the other transactions in the replay?
With a proper distributed blockchain, forks survive only when there is enough trust between participating parties. And you avoid "editing" past transactions, but instead add "corrective" transactions on top.
A good option is “what would happen if we” instead of anything involving the word “just”.
Double entry with strong checks that ensure it always balances fixes this.
That is like half of the plot of Office Space
https://lex.substack.com/p/podcast-what-really-happened-at-s...
While I haven't listened yet, one thing I don't really buy when it comes to blaming Evolve is that it should fundamentally have been Synapse's responsibility to do reconciliation. This is what completely baffled me when I first worked with another BaaS company: they weren't doing any reconciliation of their ledgered accounts with the underlying FBO balances at the partner bank! This was insane to me, and it sounds like Synapse didn't do it either.
So even if Evolve did make accounting mistakes and have missing transactions, Synapse should have caught this much earlier by having regular reconciliations and audits.
There's a full transcript (with some images) below the player btw.
Meanwhile, this article says Synapse withdrew the end-user funds from Evolve: Mr. Holmes of Evolve said the bank “transferred all end user funds” to other banks at the request of Synapse, but declined to identify them.
https://www.nytimes.com/2024/07/09/business/synapse-bankrupt...
Synapse's problem was fundamental, and it stems from the same mistake the OP is making: never, ever build your own homegrown ledger if you can avoid it.
At the end of the day you either provide a full stack or just do UI/marketing. This is the good old vertical integration dilemma.
I had my unfounded suspicion it was some internal subtle theft going on, but incompetence is probably a better explanation.
That stuff like this is in order is the foundation of capitalist societies, and it is taken quite seriously.
I wonder how common issues like these are...
I also had it happen one time, the bank eventually figured it out and fixed some error on their part.
Had you watched Office Space recently?
* End customers are really a customer of Yotta, a (silly IMO) fintech where interest was essentially pooled into a sweepstakes prize.
* Yotta was a customer of Synapse - they used Synapse's BaaS APIs to open customer accounts (again, these accounts were really just entries in Synapse's ledger, and the underlying funds were supposed to be stored in an FBO account at Evolve).
* Synapse partnered with Evolve, who is the FDIC insured bank.
Synapse went bankrupt, and Yotta customers are finding out they can't access their money. But the bankruptcy court is at a loss as to really what to do. FDIC isn't getting involved, because as far as they can tell, Evolve hasn't failed. Synapse is basically out of the picture at this point as they are bankrupt and there isn't even enough money left to do a real audit, and Yotta is suing Evolve alleging they lost customer funds. But, in the meantime, Yotta customers are SOL.
If you had a direct relationship with an FDIC-insured bank, and they lost your money, there would be a much clearer path for the FDIC to get involved and make you whole (up to $250k).
If your bank and you have a disagreement over how much money should be in your account, then FDIC wouldn't be involved?
But, in all cases, there is a clear process to ensure no money goes missing, either through fraud, mistakes or insolvency. Banks require the blessings of their regulators to operate, so they are highly incentivized to ensure all of their accounting is accurate. With fintechs no such regulatory framework exists (at least not yet).
This case would be like the FDIC getting involved because, say, Robinhood or Stripe or Shopify or any other SaaS app went bankrupt and their customers are mad they lost money.
The difference between them and some bullshit thing like Yotta is you are the customer of record for the account. The bullshit aspect of Wealthfront is they front real services with automated investment services. Yotta was pooling customer funds at some other bullshit fintech who was then putting those funds (or not) into one big account.
Handling cash is an old business, and personally I'm really conservative about who handles mine. Innovation is risk, especially when the money behind it is focused on eliminating accountability. Yotta should have been illegal. Keep accounting boring.
There is always residual risk between the bank and you with the fintech company. That's what got Yotta in trouble; they basically outsourced the heavy lifting of managing ledgers to Synapse, which you as a customer have no control over.
For most people that risk is not worth losing their already modest savings over; that is why banks are regulated and the FDIC exists, after all.
I’m highly skeptical of this claim. Every bank I’ve worked with adheres to their records requirements like it’s life or death (because it kind of is for the bank).
Tell your friend he’s exposing himself to hard prison time if he’s not just making up a story. If his boss tells him that they don’t have budget to retain the logs, he should be whistleblowing, not violating federal banking laws to save what is a rounding error in their IT budget.
The main problem is that accounting defaults to full fungibility of monetary amounts represented in a ledger, which has the effect of losing track of the precise mapping between assets and liabilities. You end up in a position where a bank simply cannot tell precisely _who_ the actual customers are that it owes money to.
[1] https://www.formance.com/blog/engineering/warehousing-promis...
Would you count Wealthfront as a fintech? I was finding their marketing compelling, but this thread makes me think twice.
It's possible (probable?) that they have better accounting controls. But I personally wouldn't keep anything above SIPC limits at Wealthfront (or any near competitor like Robinhood, M1, etc). And I'd be keeping records on my own.
And I'd make peace with the fact that SIPC resolution is a completely different ballgame from FDIC turnaround for assets held directly at an insured bank (which is like single business day don't-even-notice fast). I.e. not use it as the sole source of emergency funds, have months of expenses at a different institution, etc.
Well, yes and no - Synapse's pass-through banking wasn't covered by SIPC, and neither would Wealthfront's comparable product be. But keeping it in a standard Wealthfront (or even Synapse) sweep account, with no underlying banking shenanigans happening, is different from SIPC's perspective.
Just keeping stocks (up to $500k) or sweep (up to $250k) at a SIPC broker is probably okay, even if it's a new fintech. Fooling around with their weird passthrough stuff, less so.
Bugs are as likely to show more money than there really is as to show less. But bugs in production will almost always show more :)
If there was no malfeasance then no money would be gone. The totals would add up; they just wouldn't know who was owed what. Since the totals don't add up, someone got the extra.
> Fintechs also often put out the fake promise that deposits are FDIC insured
Does this still happen? It really stretches belief in fiat money to the absolute limit.
Kate: The worm eats a few cents from each transaction.
Dade: And no one's caught it because the money isn't really gone. It's just data being shifted around.
I'm not sure there's much difference. Intent only matters so much.
You can argue negligence over mistake. But fraud definitely requires intent.
When the banks do this it's called "fractional reserve banking", and they sell it as a good thing. :)
In fractional reserve banking, money that is loaned out is accounted for as liabilities. These liabilities subtract from the overall balance stored (reserved) at the bank. The bank is not printing new money, no matter how many times this idea gets repeated by people who are, ironically, pumping crypto coins that were printed out of thin air.
I think it’s incredible that cryptocurrencies were literally manifested out of bits, but the same people try to criticize banks for doing this same thing (which they don’t).
It is now widely accepted that bank lending produces new money[1][2]
[1] https://www.bankofengland.co.uk/-/media/boe/files/quarterly-...
From a large eurozone bank: https://group.bnpparibas/en/news/money-creation-work
If customer A deposits $100 in cash, and customer B borrows $100 from the bank and deposits it back in the bank, M1 goes up because there are now two checking accounts with $100 in it. That the bank's internal bookkeeping balances doesn't change the fact that the Fed considers that more money exists.
The Fed considers that more M1 exists and the same amount of M0 exists. Both are considered monetary aggregates, but M0 is the "money" the bank needs to worry about to stay solvent, and it can't "print" that.
Whilst it's semantically correct to refer to both M1 and M0 as money, it's pretty clear that it's wrong for people to elide the two to insinuate that banks are printing themselves balances out of thin air like token issuers or insolvent companies that screwed up their customer balance calculations, which is what the OP was covering.
And the Fed wouldn't consider more money to exist if the bank's internal bookkeeping didn't balance...
Yes, that is how fractional reserve banking works. But that is not how the current banking system works.
* https://www.stlouisfed.org/publications/page-one-economics/2...
* https://www.pragcap.com/r-i-p-the-money-multiplier/
Banks do not lend out deposits. This was called the "Old View" by Tobin in 1963:
* https://elischolar.library.yale.edu/cowles-discussion-paper-...
The Bank of England has a good explainer on how money is created:
* https://www.bankofengland.co.uk/quarterly-bulletin/2014/q1/m...
See also Cullen Roche:
* https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1905625
Sure, those liabilities are accounted for in an eventually consistent manner, by reconciling imbalances on interbank lending markets at the end of the day with the government topping up any systemic shortfall, rather than by counting out deposit coins in the vault.
But that's fundamentally much closer to the "Old View" than to the OP's claim about fractional reserve being like an FBO inflating customer deposits by failing to track trades properly. All the credit extended by the bank is accounted for, and all of it that isn't backed by reserves is backed by the bank's obligations to someone else.
To be clear:
* Money is "loaned out" in the sense that a bank credits your account.
* Money is not loaned out in the sense that what "goes into" your account did not come out of someone else's account. Rather, it was created 'out of thin air' by the bank without reference to anyone else's deposits.
I am familiar with your links, for quite some time actually.
I never said that the money came out of someone else's account.
What I did say was that it was accounted for as liabilities. It's the bank's liability to the loanee (or their bank), which the bank absolutely can be obliged to pay with reserves or cold hard cash (and it can only get these from borrowing, selling assets or customers paying cash into their account).
And so banks lend it out to people, attached to a slightly larger liability repayable to them, and keep track, because if they don't, all this money they're "printing" represents losses in terms of obligations they can't "print" their way out of. That's quite different from the ledger screwup it's being compared with, or indeed from people creating tokens (not backed by debt or anything else) out of thin air to sell to other people.
A corollary of this is that contra popular suggestions otherwise, the accounts net to zero and the bank obtains no gain from "printing money", only from interest earned on repayments.
To expand a bit: I believe some of the confusion around the printing of money comes from the way some economics reports are built. As a micro example, assume a 10% required reserve: if Alice deposits $100 and the bank lends $90 to Bob, then Alice ($100 deposit) + Bob ($90 cash) think they have $190 in total.
This is mainly useful for economists to understand, study, and report on. However, when the reports get distributed to the public, it looks like the banks printed their own money, as we now see $190 on the report when there is only $100 of cash in our example system.
Whether the system should work on a fractional reserve is its own debate, but we need to know what it is to debate the merits and risks of the system.
Nobody deposits in a bank - it's just a retag of an existing deposit. The bank Debits a loan account with the amount owed, and Credits a deposit account with the advance. It's a simple balance sheet expansion in double-entry bookkeeping.
I'm really not sure why this myth persists given that central banks debunked the concept over a decade ago.
Loans create deposits, and those deposits are then converted into bank capital when a deposit holder buys bank capital bonds or equity.
[0]: https://www.bankofengland.co.uk/-/media/boe/files/quarterly-...
Then you'll see that for a bank to transfer to another bank the destination bank has to take over the deposit in the source bank (or swap that liability with another bank somewhere).
You have an infinite regress in your thinking.
[0]: https://new-wayland.com/blog/why-banks-pay-interest-on-depos...
Hi, this is factually incorrect and you should educate yourself before attempting any further condescending comments on Hacker News.
It worked as an actual check on the money supply and, when implemented properly, was harder to manipulate.
The idea such a system could function in todays world is strange to me.
So now if you are unhappy with the monetary system you are automatically a crypto bro and can be dismissed?
Secondly, the problem with fractional reserve banking is as follows: Suppose Larry makes a deposit of one dollar, which the bank guarantees can be retrieved at any time. The bank loans this dollar to Catherine, who uses it to buy something from Steve. Now Steve has one dollar, which he deposits with the bank. The bank lends this dollar to Catherine2, who uses it to buy something from Steve2. And so on, up to CatherineN and SteveN.
Now, insofar as transactions can take place in the economy with bank IOUs, which are considered perfect money substitutes, the amount of money in the economy has been multiplied by a factor of N. Where before only Larry had a dollar (or a dollar IOU, which are supposedly the same), now Larry AND Steve, Steve2, up to SteveN all have a dollar IOU. This leads to inflationary pressure.
Now, it is true that when the Catherines repay their debts, these extra dollars will go away. However, in reality there is no such thing as negative dollars. The supply of money has been increased by the bank.
An objection could be raised that the Catherines' extra demand for money to pay off their debts will exactly offset the extra supply of money. This is nonsense! Everyone demands money all the time. If Catherine did not demand money to pay off her loan, she would demand money in order to satisfy her next most urgent want which could be satisfied by money. The increase in the demand for money is negligible.
Licensed banks can and do write loans at any time without having any deposits to 'lend out'. In doing so they create both the loan (an asset) and a deposit (a liability) simultaneously from thin air. The books immediately balance.
The deposit created is then paid to the borrower and the liability vanishes. The bank is left with only the asset - the one that they created from thin air.
For short term liquidity a bank can always use the overnight lending facility at the central bank. Doing so just makes all their loans far less profitable as this is at a floating daily rate.
In reality the limit to which the money supply grows is not dictated by 'fractional reserves', but solely by interest rate policy and the commercial viability of being able to make loans and demand in the economy.
The liability can never vanish - balance sheets have to balance. Bank liabilities are what we call 'money'. Hence how you are 'in credit' at the bank.
Technically the commercial bank lends to the central bank. That's why they receive interest on it.
That's just a loan like all the other loans on the asset side. The difference is that the interest rate is set by the borrower not the lender.
Holding a deposit is just different name for a particular type of loan.
The loan will be accounted for in the loan book and deposit book at the local(!) banking-system level. If the money moves out of the bank, it has to go through the central-bank money circle; at that level, the loan amount is _NOT_ created, and that account can be "filled" only with incoming transactions from other banks (customer deposits!). That's the reason why a bank needs deposits: to make payments possible, since the number on the central banking account is always smaller than the sum of all loans at the local banking-system level.
https://www.bankofengland.co.uk/-/media/boe/files/quarterly-...
Choice quote from page 1:
“Money creation in practice differs from some popular misconceptions — banks do not act simply as intermediaries, lending out deposits that savers place with them, and nor do they ‘multiply up’ central bank money to create new loans and deposits”
The part you are talking about is illustrated in Figure 2.
The transfer of central bank reserves between banks doesn't change the fact that once a loan is written new money enters circulation.
With a 10% reserve requirement, a 1,000,000 USD deposit will result in up to 10 times that much money being lent out.
The formula is 1/r, where r is the reserve requirement.
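As a worked check of that formula under the textbook model (which replies elsewhere in the thread dispute as a description of modern banking):

    # Textbook money-multiplier arithmetic for the numbers above.
    r = 0.10                       # 10% reserve requirement
    deposit = 1_000_000            # initial deposit, USD

    # Closed form of the re-deposit chain:
    # deposit * (1 + (1-r) + (1-r)**2 + ...) = deposit / r
    total_deposits = deposit / r                 # 10,000,000 system-wide
    total_lent_out = total_deposits - deposit    # 9,000,000 of new loans
    print(total_deposits, total_lent_out)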
Banks can lend up to an allowed multiple of their cash or equivalent reserves (gold-standard regulation), and in the Basel era are also regulated on the ratio of their capital reserves to their loans. This acts to stop hyperinflationary expansion, but there is a feedback loop between new deposits and new capital, so the system does still expand slowly over time. This may be beneficial.
In engineering terms, banks statistically multiplex asset cash with liability deposits, using the asset cash to solve FLP consensus issues that arise when deposits are transferred between banks. It's actually quite an elegant system.
And what is the current reserve requirement in the US? Zero.
https://www.federalreserve.gov/monetarypolicy/reservereq.htm
Edit: Whoops, someone beat me to it below.
> and in the Basel era are also regulated on the ratio of their capital reserves to their loans
Reducing the reserve ratio to zero doesn't mean that banks can create unlimited amounts of money out of thin air. It just means that regulation by capital requirements has now fully superseded regulation by reserve ratio.
In theory those capital requirements are a better and finer-grained regulatory tool, capturing the different risk of different classes of asset. In practice that can fail; for example, SVB collapsed insolvent because it was permitted to value bonds above their fair market value if it claimed they'd be held to maturity. That failure was in the details, though, not the general concept.
As announced on March 15, 2020, the Board reduced reserve requirement ratios to zero percent effective March 26, 2020. This action eliminated reserve requirements for all depository institutions.
So in effect, the multiplier is infinity.
https://www.federalreserve.gov/monetarypolicy/reservereq.htm
There are tons of balance-sheet metrics which have to be aligned. In theory you are right; in practice, there are a lot of differences.
So for every $1 deposited, I can lend $0.90 but must hold $0.10 as my reserve?
At the point I make a loan, 2 things happen on my balance sheet: I have a new liability to you (the increased balance in your account), and I have a new asset (the loan that you’re expected to pay back). They cancel each other out and it therefore seems as if I’m creating money out of thin air.
However, the moment you actually use that money (eg to buy something), the money leaves the bank (unless the other account is also at this bank, but let’s keep it simple). Liabilities on the balance sheet shrink, so assets need to follow. That needs to come from reserves because the loan asset keeps its original value.
The reserve comes from the bank, not from you. Added layer here: banks can borrow money from each other or from central banks if their cash reserves run low.
Finally: it tends to be the case that the limit on lending is not the reserves, but on the capital constraints. Banks need to retain capital for each loan they make. This is weighed against the risk of these loans. For example: you could lend a lot more in mortgages than in business loans without collateral. Ask your favorite LLM to explain RWAs and Basel III for more.
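A toy version of those two steps (ignoring equity and the capital requirements just mentioned):

    # Balance-sheet mechanics of a loan, as described above (toy numbers).
    bank = {"reserves": 100, "loans": 0,   # assets
            "deposits": 100}               # liabilities (customers' balances)

    def balanced(b):
        return b["reserves"] + b["loans"] == b["deposits"]

    # Making a loan creates an asset and a liability at the same time:
    bank["loans"] += 90       # new asset: the borrower's IOU
    bank["deposits"] += 90    # new liability: the borrower's account balance
    assert balanced(bank)     # "money from thin air", yet the books balance

    # The borrower spends it at another bank: the deposit leaves, and
    # the bank settles by transferring reserves.
    bank["deposits"] -= 90
    bank["reserves"] -= 90
    assert balanced(bank) and bank["reserves"] >= 0   # reserves now 10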
"Everything should be made as simple as possible but no simpler."
You're omitting the thing that causes the money to be created out of thin air. If the other account is at the same bank, now that customer has money in their account that didn't previously exist. And the same thing happens even if the money goes to a customer at another bank -- then that bank's customer has money in their account that didn't previously exist. Even if some reserves are transferred from one bank to another, the total reserves across the whole banking system haven't changed, but the total amount of money on deposit has. And the transfers into and out of the average bank are going to net to zero.
The created money gets destroyed when the loan is paid back, but the total amount of debt generally increases over time so the amount of debt-created money goes up over time as banks make new loans faster than borrowers pay them back.
Your loan is loan + interest; when your loan is created, we do not create the interest part of it. The interest part is the rest that you have to pull from someone else, since the bank gives you loan X but asks for loan X + interest Y back from you. That's the reason why there needs to be another fool somewhere else who is then again taking out a loan.
It's one of the main architectural choices of our money architecture :-D
Suppose a mechanic takes out a mortgage to buy a house. The bank uses the interest on the loan to pay part of the bank manager's salary. Then the bank manager pays the mechanic to fix his car. Nobody inherently has to take out another loan for the borrower to pay back the bank.
The main reason debt keeps going up is that housing prices keep getting less and less affordable, requiring people to take on more and more debt to buy a house or pay rent.
The GP is completely wrong on how modern finance works. Banks do not lend out deposits. This was called the "Old View" by Tobin in 1963:
* https://elischolar.library.yale.edu/cowles-discussion-paper-...
The Bank of England has a good explainer on how money is created:
* https://www.bankofengland.co.uk/quarterly-bulletin/2014/q1/m...
See also Cullen Roche:
* https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1905625
Yes, they do not need customer deposits to create loans and increase their balance sheet; there are just some guys like you and me putting the amount on the balance sheet and clicking save (simply put).
But yes, they need to have at least some customer deposits to make payments happen, since if they do not have any deposits, their central banking account would be empty; therefore none of their loans could actually leave the bank, since the transaction won't happen. (I'm talking from the perspective of the TARGET2 / ECB / EURO system.)
I saw this during the pandemic, and it bewildered me how little coverage of it there was. How is this not going to cause another financial catastrophe? And if we're so sure it isn't, then what makes people think they understand economics so well, given that they clearly thought a minimum was necessary just a few years ago?
The banks in Australia, Canada, etc have had zero reserve requirements for thirty years:
* https://en.wikipedia.org/wiki/Reserve_requirement#Countries_...
The US had reserve requirements leading up to the 2008 GFC, which started off with mortgages/loans, and yet those requirements didn't stop the disaster. Canada et al did not have requirements, and yet didn't have a financial meltdown (not themselves, only as 'collateral damage' from what happened in the US).
The equivalent for the USA would be the Federal Funds Rate, I suppose. The reserve requirement is just one tool among many.
They do lend out more than they _currently_ have as deposits on their central banking accounts; you have to take "duration transformation" into account. JPM has billions in loans and deposits, though most of the deposits may currently be "out of the house" (borrowed). Now, JPM could for sure increase its balance sheet even more with another loan, if it still meets whatever balance-sheet restrictions apply and has enough money on its central banking account. (Sure, if the loan is for a customer within the same institution, then there is no difference.)
How dare you criticize our holy banking system /s
I had previously built things like billing systems and small-scale OLTP web applications, where the concept of even asking the question of whether there could be any acceptable data loss or a nonzero error rate hadn't occurred to me. It was eye-opening to see, not so much that if you're doing millions of qps, some are going to fail, but more the difference in engineering attitude. Right this instant, there are probably thousands of people who opened their Gmail and it didn't load right or gave them a 500 error. Nobody is chasing down why that happened, because those users will just hit reload and go on with their day. Or from another perspective, if your storage has an impressive 99.99999% durability over a year, when you have two billion customers, 200 people had a really miserable day.
It was a jarring transition from investigating every error in the logs to getting used to everything being a little bit broken all the time and always considering the cost before trying to do something about it.
The rule of thumb I’ve seen at most places (running at similar scale) is to target one data loss, fleet wide, per century.
That usually increases costs by << 10%, but you have to have someone that understands combinatorics design your data placement algorithms.
The copyset paper is a good place to start if you need to understand that stuff.
99.(9)x percent durability is almost meaningless without a description of what the unit of data is and what a loss looks like. There are too many orders of magnitude between a chunky file having an error, a transaction having an error, a block having an error, a bit having an error...
But yeah, it was the number that was humbling.
There are definitely areas where 4 nines are good enough, but there are just as many areas where they aren't.
Sometimes you need engineers that have some other type of education, maybe accounting, maybe finance, maybe biology. I always thought that the most important part of my career was understanding every industry I built for, deeply, and knowing experts in those areas so you could ask the really important questions. That is problem solving and engineering. The rest is programming/coding.
I’ve spent the majority of my career in tech with a finance angle to it, mostly sales and use tax compliance.
What I never fully appreciated was how much those accountants, controllers, and lawyers were rubbing off on me.
I was recently advising a pretty old startup on their ledgering system and was beyond appalled at what it looks like when a bunch of engineers with no finance or accounting background build an accounting system.
We don’t have to find magical accountant engineers either, it will wholly suffice if we sit actual accountants in with our engineering team during the design process.
After my design of their complete overhaul I had a friend who is a CPA completely review my work, we found a few holes in certain scenarios, but by and large we were good.
Money is a difficult engineering problem because with money comes all the human goofery that surrounds it.
Modern hiring practices are too rigid and filter-friendly for you to likely appear as a strong candidate based on the fact you have good accounting experience on top of your growing software skills.
What will really help you though, is having friends who work at a bank in the software departments. It's almost always who you know in this world. You need to network, network, network!
I don't say this to blow my own trumpet, only to say that the non-engineering leadership at the company in question were very invested in the product details and making sure I understood basic accounting principles.
That said, I went from that role to working in a billing system that was originally built by a team where leadership weren't invested in the details. The result was a lot of issues and frustration from Finance leadership. Storing monetary values in float was not the biggest issue, either.
That being said, maybe branch out of just looking at accounting/bookkeeping and market yourself as "I know how money works in software systems." Your skills are extremely transferrable and knowing the Finance expectations will make it easier to make good design choices further upstream.
It's the PM's job to work with engineering to ensure that the requirements are correct, and that the built product meets those requirements. And in an agile setting, those conversations and verifications are happening every single sprint, so something can't go for too long without being caught.
If you don't have a PM, then sure I guess your engineering team had better have deep domain knowledge. But otherwise, no -- it's not engineering's responsibility. It's products's responsibility.
Exactly, this what we try to advance our tech people to, someone with the combination of domain+systems knowledge.
It results in more independent+correct action which is much more powerful than having to specify significant details to less knowledgeable people.
By accident and not knowing any better as a young dev, I ended up building the billing logic from day one, and for better and worse building it in two places in the system (on a consumer-facing billing webpage, and in a separate backend process that generated invoices and charged credit cards).
It turned out to be remarkably hard to keep them in sync. We were constantly iterating trying to get traction as we burned down our capital, releasing new products and services, new ways of discounting and pricing (per use, per month, first X free, etc), features like masterpayer/subaccounts for corporate accounts, user-assignable cost centers, tax allocation to those cost centers with penny allocation, etc such that new wrinkles and corner cases would keep popping up causing the numbers on my two screens/methods not to match.
Being personally responsible for the billing, I would go over all the invoices by hand for a couple of days each month to ensure they matched before we charged the credit cards and mailed out printed invoices, as a final check to prevent mistakes. There was always/often some new problem I'd find affecting one or a small handful of customers, for which I would then fix the code before we billed. I never felt good letting go and not doublechecking everything by hand.
I thought about refactoring the billing logic to occur in one place to eliminate these mismatches and my manual crosschecking, but after a lot of thought I realized I wasn't comfortable with a single codebase and liked having two codebases, as it helped me catch my own errors. I then just made it easier and easier to automate and run crosschecks between the two. The billing code was a little too gnarly to be proud of, but I was very proud of the outcome: how accurate our billing was, the lack of complaints, and the many near misses we avoided over many years. I do feel twinges of guilt for the complexity I left my successors, but I still don't really regret it.
After that experience, the motivation for double entry bookkeeping has always made a lot of sense to me. I had sort of reinvented it in my own hacky way with double logic billing code to prevent my mistakes from causing problems for my customers...
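The crosscheck idea, sketched (all names and pricing logic made up): two independently coded calculators must agree, invoice by invoice, before anything gets charged.

    def web_total(account):        # logic behind the consumer-facing billing page
        return account["usage"] * 5

    def backend_total(account):    # logic in the invoice/charging batch process
        return account["usage"] * 5

    accounts = [{"id": 1, "usage": 3}, {"id": 2, "usage": 7}]
    mismatched = [a["id"] for a in accounts
                  if web_total(a) != backend_total(a)]
    assert not mismatched, f"do not bill yet: accounts {mismatched} disagree"

Like double entry, the value comes from the redundancy: a bug has to occur identically in both implementations to slip through.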
The data team I ended up leading at a previous company had an unfortunate habit of "losing" money - it wasn't real money being lost in transit elsewhere, but records of something we should charge a customer.
Or if the team wasn't losing revenue, it was double charging, etc. etc.
Took us 3 years of hard work to regain the trust of the business leaders.
Software is full of these systems.
There's an infinitely smaller financial incentive to making sure it works well and functions securely. Thus it rarely happens...
Who builds a financial system like that and considers it normal? The compensation is one thing, but you'd flee a service like that with all possible haste.
Especially important when explicitly saying you’ve done these things.
Sure, maybe that points you to the bugs, but so would writing basic tests.
This type of business can also have fun dealing with accruals. One can easily do a bunch of transactions, have them settle, and then get an invoice for the associated fees at variable time in the future.
A ledger is where every transaction balances to 0. It can involve multiple accounts, but the sum of all transfers between all accounts in a single transaction must sum to 0. This is the property of double entry that actually matters.
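That invariant fits in a few lines; a sketch with a made-up three-account transaction:

    from decimal import Decimal

    def post(ledger: dict, transaction: list) -> None:
        # Reject any transaction whose transfers don't sum to exactly zero.
        total = sum(amount for _, amount in transaction)
        if total != 0:
            raise ValueError(f"unbalanced transaction, off by {total}")
        for account, amount in transaction:
            ledger[account] = ledger.get(account, Decimal("0")) + amount

    ledger = {}
    # One transaction, three accounts, still sums to zero:
    post(ledger, [("customer_cash", Decimal("-112.50")),
                  ("revenue",       Decimal("100.00")),
                  ("vat_payable",   Decimal("12.50"))])
    assert sum(ledger.values()) == 0   # holds after every successful post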
to clarify: "make it right" is the second step, and until you make things work correctly ("right"), that's where the work stops, you have to make the system work soundly. the "make it fast", as in, optimize, comes in only after you have got it right, that all correctness and soundness issues are resolved perfectly. then you can start optimizing it (making it run fast).
it has nothing to do with delivery speed. it has nothing to do with working quickly. it's about optimizing only as a last step.
perhaps the author is lamenting the fact that it is possible for something to sort of "work", but to be so far from being "right" that you can't go back and make it right retroactively, that it has to be "right" from the inception, even before it starts barely working?
This is my opinion as well, and I've been involved in the audit of a fintech system where auditors had to download EVERYTHING into Excel spreadsheets and make the numbers balance before they would sign off on the books. That took a lot of time and money; I'm guessing it made a difference of at least .1 unicorns in the liquidity event that took place 3 years later.
What happens in fast paced startups is that you ship what essentially is a MVP as soon as possible (i.e. you stop at the "make it work" step) because you need to build your customer base, finances, etc.
A better mantra would've been Facebook's "move fast and break things". But, that only works if you can fix the things later. You wouldn't do it if you're building an aircraft for example.
Given the context in which he used it, I think the misunderstanding you suggest in the first sentence is most likely. Immediately afterward he talks about the time pressure startups face.
Sure, single-entry bookkeeping might be easier and more normalized, but sometimes it is a good idea to just stick with the systems and abstractions that have been developed over centuries.
Just use double-entry bookkeeping unless you definitely need something else. Sure, it might be icky for the programmer inside you, but I think you'll be thankful if you ever need to get actual accountants involved to sort out a mismatch.
On a related note: does anybody know of any good resources for programmers in payments and adjacent fields? Something like an "Accounting for Programmers"?
https://martin.kleppmann.com/2011/03/07/accounting-for-compu...
https://www.moderntreasury.com/journal/accounting-for-develo...
[1] https://www.moderntreasury.com/journal/how-to-scale-a-ledger...
Worth reading once a year imo
[0]: https://www.winstoncooke.com/blog/a-basic-introduction-to-ac...
Get a grip on the accounting basics first. I built my own bookkeeping system with double entries and realized the design and the programming was the easy part.
You have to think about a sentence like a stack and read it slowly and carefully and take note of the push and pop operations.
Germans are like RPN where everyone else is a regular calculator.
> one and 100th of the data was missing
No idea what that means
> an advertisement of a person’s or a group’s promotion
What in god's name is an "advertisement of a group's promotion"?
Edit: ah, you probably spoke "one one hundredth" and got a transcription error.
>No idea what that means
1/100th of a dollar is a cent - goes towards the "missing cents" glossed over by calling that "dancing cents" in the blog post.
https://en.wikipedia.org/wiki/British_Post_Office_scandal
Anything that moves money should be treated with utmost seriousness, and be aware of as many historical mistakes as possible.
The four-part miniseries with Toby Jones (mentioned above in §Media:Dramatisation) was really good and goes over things pretty well:
I’m also surprised that this whole article starts by discussing stock trading but has no mention of how to represent stock trades. I assume they are “Sagas” consisting of money moving from the customer to the clearinghouse (or prime broker or PFOF provider or whatever) and shares moving from that provider to the account at which the shares are held. And maybe other associated entries representing fees? It seems to me that this is multi-entry accounting, which is quite common, and that entries don’t actually come in pairs as the article would like us to think.
In fact, we could call these values yin and yang, for all it mattered.
Also, I'm not able to really follow what he means by "money = assets in the future".
Money is money, but if you wanted to track the intermediate state until the customer gets receipt, you would use an In Transit account (Good In Transit / Service In Transit etc.)
Yet, it doesn't change the fundamental definition of the value in the accounting system. I think the author confuses an engineering concept (sagas, or thunks, or delayed but introspectable/cancellable actions in general) with accounting.
I’m guessing it’s one of two things:
1. A transaction might fail. If you enter a transaction into your bank’s website or your credit card company’s website, you should probably record it in your ledger right away. But the transaction might get canceled for any number of reasons. And the money will not actually move instantly, at least in the US with some of the slower money moving mechanisms.
2. In stocks and other markets, settlement is not immediate. A trade is actually a promise by the parties to deliver the assets being traded at a specific time or range of times in the future. One probably could model this with “in transit” accounts, but that sounds quite unpleasant.
FWIW, I’ve never really been happy with any way that I’ve seen accounting systems model accruals and things in transit. I’ve seen actual professional accountants thoroughly lose track of balance sheet assets that are worth an exactly known amount of cash but are a little bit intangible in the sense that they’re not in a bank account with a nice monthly statement.
I think the author is just wrong on that point, but the rest is sound. (Source: I've built bookkeeping software)
They're complicated to balance, so it's not commonly done in physical ledgers for sure, but in digital ledgers it's completely fine. You just have to make sure they do balance.
Orders are not relevant for ledgers. The system I describe is relevant for individual transactions -- for example, a single bank payment that pays for two outstanding invoices at once absolutely SHOULD create a single transaction with three legs: One out of the bank account, two to payables.
Anyway, this is just a simple example, but an invoice with VAT on it is IMO the most common example. Or, another one my software supports: a bank transaction with embedded banking fees. Some banks separate out the fees, but not all. Currency conversion fees are another example.
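To make that concrete, here's a minimal sketch of such a three-legged transaction (Python; the account names, invoice numbers, and amounts are invented):

    # One bank payment settling two outstanding invoices: one leg out of
    # the bank, two legs to payables. Illustrative data only.
    from decimal import Decimal

    transaction = {
        "id": "tx-001",
        "description": "bank payment settling invoices 42 and 43",
        "legs": [
            {"account": "bank",             "amount": Decimal("-150.00")},
            {"account": "accounts_payable", "amount": Decimal("100.00")},  # invoice 42
            {"account": "accounts_payable", "amount": Decimal("50.00")},   # invoice 43
        ],
    }

    # The one invariant double-entry demands: all legs sum to zero.
    assert sum(leg["amount"] for leg in transaction["legs"]) == 0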
1. Merchant takes out a loan for $5,000 and receives $5,000 in cash.
  • Assets (Cash) increase by $5,000 (Debit).
  • Liabilities (Loan Payable) increase by $5,000 (Credit).
  • Equity remains unchanged.
2. Merchant buys inventory for $1,000 cash.
  • Assets (Cash) decrease by $1,000 (Credit).
  • Assets (Inventory) increase by $1,000 (Debit).
  • Total assets remain unchanged, and liabilities and equity are unaffected.
3. Merchant sells all inventory for $1,500 cash.
  • Assets (Cash) increase by $1,500 (Debit).
  • Assets (Inventory) decrease by $1,000 (Credit) (recording cost of goods sold).
  • Equity (Retained Earnings) increases by $500 (Credit), representing the profit ($1,500 sales - $1,000 cost).
4. Customer1 deposits $500 in cash for future delivery of goods.
  • Assets (Cash) increase by $500 (Debit).
  • Liabilities (Unearned Revenue) increase by $500 (Credit).
  • Equity remains unchanged.
5. Customer1 transfers half of the future delivery of goods to Customer2.
  • No changes to assets, liabilities, or equity occur at this point. The merchant's obligation to deliver goods (reflected as Unearned Revenue) is still $500 but now split between two customers (Customer1 and Customer2). Internal tracking of this obligation may be updated, but the total financial liability remains the same.
We are all spoiled by thinking of debit/credit as equal to decrease/increase respectively, because that's how we interpret our bank accounts. That understanding totally collides with formal accounting, where debit/credit DON'T mean decrease/increase. I think this is the root cause of all the confusion about double-entry accounting. I may be wrong about this, happy to be corrected, but that is the bit my brain grinds against when trying to make sense of things.
E.g. I replaced every instance of debit with "Left" and credit with "Right" in your example:
1. Merchant takes out a loan for $5,000 and receives $5,000 in cash.
  • Assets (Cash) increase by $5,000 (Left).
  • Liabilities (Loan Payable) increase by $5,000 (Right).
  • Equity remains unchanged.
2. Merchant buys inventory for $1,000 cash.
  • Assets (Cash) decrease by $1,000 (Right).
  • Assets (Inventory) increase by $1,000 (Left).
  • Total assets remain unchanged, and liabilities and equity are unaffected.
3. Merchant sells all inventory for $1,500 cash.
  • Assets (Cash) increase by $1,500 (Left).
  • Assets (Inventory) decrease by $1,000 (Right) (recording cost of goods sold).
  • Equity (Retained Earnings) increases by $500 (Right), representing the profit ($1,500 sales - $1,000 cost).
4. Customer1 deposits $500 in cash for future delivery of goods.
  • Assets (Cash) increase by $500 (Left).
  • Liabilities (Unearned Revenue) increase by $500 (Right).
  • Equity remains unchanged.
5. Customer1 transfers half of the future delivery of goods to Customer2.
  • No changes to assets, liabilities, or equity occur at this point. The merchant's obligation to deliver goods (reflected as Unearned Revenue) is still $500 but now split between two customers (Customer1 and Customer2). Internal tracking of this obligation may be updated, but the total financial liability remains the same.
I find this much easier to reason with.

Any self-respecting accounting system should be able to produce a balance sheet that matches the conventions you're describing. I don't think it follows that the actual numbers in the database that get summed to produce the total liabilities should be positive.
And the answer is that "0" first entered Europe around the time they invented double-entry bookkeeping there. Negative numbers reached Europe centuries after that.
I showed the internals of a number-line-based accounting systems to an accountant once, and he was so confused by the negative incomes.
https://en.wikipedia.org/wiki/Negative_number#History
https://en.wikipedia.org/wiki/Double-entry_bookkeeping#Histo...
In scenario four, which I presented earlier, I believe it is intuitive to think of unearned revenue (a liability) as a positive number. When the customer picks up the order, the unearned revenue will be transferred to equity.
You basically used different labels for positive or negative amount in the example.
Destination is the key. You can't just arbitrarily change an account balance using a transaction in a ledger. There should be a destination, and that is the second record.
This is what GAAP, IFRS, and even all the Basels for banks describe in strict detail. But every accounting system and practice is based on double entry, which not only keeps the balance sheet consistent but also adds meaning to every transaction using predefined types of accounts.
At least that's how it's been explained to me.
I also wrote a super short post on how to model such a system on postgres https://blog.nxos.io/A-simple-double-entry-ledger-with-sourc...
Blockchain actually kinda nails it, that's in essence a source/destination ledger, no 'postings' or similar needed, and from a balance calculation POV has been working pretty well
One reason this model isn't applied in accounting, in my personal view :), is simply historical and the fact that the number 0 didn't exist when accounting principles were created.
Wrote another post on how to model debit/credits on a source/destination ledger here: https://blog.nxos.io/Debit-and-Credits-on-a-Source-Destinati...
It's very straightforward; you just have to accept that asset accounts have negative balances, and present the absolute amount instead of a negative amount in a view.
The page contains a comment from Matheus Portela who pointed to a blogpost of his about "Double-Entry Bookkeeping as a Directed Graph" [0].
"I've also had the same problems you described here and double-entry bookkeeping is the way to go for financial accuracy. As a programmer, things clicked when I realized this system is an extended directed graph.". It turned out that: "Hi Matheus! Would you believe me if I told you that I read your post in preparation for this article?"
[0] https://matheusportela.com/double-entry-bookkeeping-as-a-dir...
In a bank ledger, when a loan appears on a checking account, both increase. Loan on the left, checking on the right. DT Loan → CT Checking. The loan is an asset for the bank; the client's money is a liability.
On a client's balance sheet everything is mirrored. Checking is an asset, Loan is a liability.
Queries are quite simple. Volumes are problematic. In a big institution you would find several ledgers included in the general ledger by totals. You just don't need all the details in one place.
You talk more about how to make db data simultaneously a representation of final reports. I believe it's not related to this thread.
It is definitely possible to make each side of a transaction a different record, but you have to link them, so there would be another ID for the transaction to group by later. It is always there in any case, but you are either joining while getting a balance or grouping while reconstructing transactions for reports. So it depends on the primary load: lots of transactions to register, or lots of reports to provide. Both are used.
You have a single USD (or other) value - so in the simplest form it just looks like this:
    From: Alice
    To: Bob
    Amount: 10
    Currency: USD
And the balances are simply sum(transactions where account is receiver) - sum(transactions where account is sender)
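In code, that balance rule is just a signed sum. A sketch with made-up transfers (Python's decimal stands in for whatever exact type you use):

    from decimal import Decimal

    transfers = [
        {"from": "Alice", "to": "Bob",   "amount": Decimal("10"), "currency": "USD"},
        {"from": "Bob",   "to": "Carol", "amount": Decimal("4"),  "currency": "USD"},
    ]

    def balance(account: str) -> Decimal:
        received = sum((t["amount"] for t in transfers if t["to"] == account), Decimal(0))
        sent = sum((t["amount"] for t in transfers if t["from"] == account), Decimal(0))
        return received - sent

    assert balance("Bob") == Decimal("6")
    # Money is conserved: every row credits one account and debits another,
    # so the balances across all accounts sum to zero.
    assert sum(balance(a) for a in {"Alice", "Bob", "Carol"}) == 0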
You can have multiple transactions. One to pay tax, one to pay fees and one to pay the actual thing.
You bundle these things in another abstraction, eg. An invoice.
In single-entry, it is the tuple (amount, account).
Every “double entry” accounting package I’ve ever used can easily handle transactions that are awkward in this schema and transactions that don’t fit at all.
Moving $1 from one current account to another? I guess you declare that the $1 needs to be a positive amount, but your two accounts have the same normal balance, and calling one a “debit account” is a bit awkward.
Adding an accounts payable entry that is split between two expense accounts? Not so easy.
I assumed it has to do with the fact that it was invented for bookkeeping by hand. Accountants assure me that it's even more important with computers, but I've never managed to figure out how that works.
A single record can (and will) be lost. Network issue, db issue, etc., some transactions will not make it to the db.
With double entry you at least have a chance of reconciling the lost transaction.
That's the purpose. If you have a system with no redundancy, it's equally true that something will always eventually be wrong. But in that case, you'll have no way of knowing what's wrong.
With the redundancy, you can detect problems and often determine what happened.
And double-entry bookkeeping should be both easy to explain (there are countless articles on it, precisely because it is a pretty easy concept) and easy to understand, if you have ever tried to keep a ledger of transactions around and wanted to audit it for errors.
- Assets minus Liabilities = Equity (net worth)
- Your bank account or cash balance increases on the debit side
From this you can figure out that if you borrowed money, the debt increases on the credit side and the cash influx debits your bank account. The same goes for an income.
Sum of entries of assets/liabilities accounts = Equity. Moreover assets and liabilities become one type.
Clearly not. But this is why I let an accountant do it.
That's probably not the way I would have designed it. I'd probably have designed it from the point of view of the account, so that we'd all agree on what addition and subtraction mean. But that's my programmery point of view. I imagine that they're more concerned with the flows -- not just the numbers, but especially the actual materials being bought and sold.
Your bank account is really two accounts: an asset on your books, and a liability on the bank’s books.
When you talk about accounting for physical inventory, that’s a whole new can of worms.
The most popular way I see is this:
- you keep track of goods by their cost to you (not their value once sold)
- every time you buy item xyz, you increase an asset account (perhaps called “stock” and the transaction labeled “xyz”). You also keep track of the number of xyz. Say you buy 10 for $10 each, then another 10 for $20 each. Now you have 20 and your xyz is valued at $300. Average cost: $15
- every time you sell or lose some xyz, you adjust the number of xyz, and reduce the asset account by the average value of those items at the time of the transaction, or $15 in this example. The other account would be cost_of_goods_sold or stock_shrinkage.
Many other approaches also work.
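A rough sketch of that average-cost flow, with the same made-up numbers (the class and account names are illustrative, not any standard API):

    from decimal import Decimal

    class InventoryItem:
        def __init__(self):
            self.quantity = 0
            self.value = Decimal("0")  # total cost of stock on hand

        def buy(self, qty: int, unit_cost: Decimal):
            # Debit the "stock" asset account and track the count.
            self.quantity += qty
            self.value += qty * unit_cost

        def sell(self, qty: int) -> Decimal:
            # Reduce the asset by average cost; the counter-entry goes to
            # cost_of_goods_sold (or stock_shrinkage for losses).
            avg = self.value / self.quantity
            cost = (avg * qty).quantize(Decimal("0.01"))
            self.quantity -= qty
            self.value -= cost
            return cost

    xyz = InventoryItem()
    xyz.buy(10, Decimal("10"))  # 10 @ $10
    xyz.buy(10, Decimal("20"))  # 10 @ $20 -> 20 units valued at $300, avg $15
    assert xyz.sell(2) == Decimal("30.00")  # 2 units leave at $15 each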
Think about how you're going to do that with your concept. You will likely end up with something extremely close to what double entry accounting is after a few iterations
The accounting equation is: Assets = Equity + Liabilities.
For a transaction to be valid it needs to keep that equation in balance. Let's say we have two asset accounts A1, A2 and two Liability accounts L1, L2.
A1 + A2 = Equity + L1 + L2
And any of these sorts of transactions would keep it balanced:
(A1 + X) + (A2 - X) = Equity + L1 + L2 [0]
(A1 + X) + A2 = Equity + (L1 + X) + L2 [1]
(A1 - X) + A2 = Equity + (L1 - X) + L2 [2]
A1 + A2 = Equity + (L1 + X) + (L2 - X) [3]
Now, here is the key insight: "Debit" and "Credit" are defined so that a valid transaction consists of the pairing of a debit and credit regardless of whether the halves of the transaction are on the same side of the equation or not. It does this by having them change sign when moved to the other side.
More concretely, debit is positive for assets, credit is positive for liabilities. And then the four transaction examples above are:
[0]: debit X to A1; credit X to A2
[1]: debit X to A1; credit X to L1
[2]: credit X to A1; debit X to L1
[3]: credit X to L1; debit X to L2
You can debit and credit to any arbitrary accounts, and so long as the convention is followed and debits and credits are equal, the accounting equation will remain balanced.
Another way of looking at this is with parity. A transaction consists of an even-parity part, "debit", and an odd-parity part, "credit". Moving to the other side of the equation is an odd-parity operation, so a credit on the RHS has double odd parity, which means it adds to those accounts (and a debit, with odd parity, subtracts).
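A small sketch of that convention with hypothetical accounts and balances: debits add on asset accounts and subtract elsewhere, so any equal debit/credit pair keeps the equation balanced.

    balances = {"A1": 100, "A2": 50, "L1": 80, "L2": 40, "Equity": 30}
    ASSETS = {"A1", "A2"}

    def post(account: str, debit: int = 0, credit: int = 0):
        # On asset accounts debits add; on the other side they subtract.
        delta = debit - credit
        balances[account] += delta if account in ASSETS else -delta

    def balanced() -> bool:
        assets = sum(balances[a] for a in ASSETS)
        others = sum(v for k, v in balances.items() if k not in ASSETS)
        return assets == others

    post("A1", debit=25); post("L1", credit=25)   # transaction [1] above
    assert balanced()
    post("A1", credit=10); post("A2", debit=10)   # transaction [0] above
    assert balanced()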
Consistency is a property of the backend; if that is wrong, there is no hope for anything later.
> Your ledger entries always sum to zero for each transaction, your income account has a negative balance, and you display it in a sensible manner.
'Sensible manner' is the problem here. The data and the money will diverge over time, and without proper storage of the data it will be impossible to figure out why.

The problem here is NOT storing 'a transaction'; that works fine with an RDBMS. It is storing the FLOW of MANY transactions and the divergent ways things work.

Like, your bank is telling you it has $100 and your system says $120. Your system sums correctly, but the bank rules.
Or when you do a return and cents are lost in the interchanges and chargebacks.
Or, just wrong data entry, sync, import/export, etc.
---
The way to see this is that `double entry` is a variation of 'immutable data that never mutates and always tracks its own flow', which is golden for business apps.
I've seen the following major phases of this work:

1) Build the ledger (correctly), and it will work well for a while.
2) Add convenience code for the callers, assist finance in doing reports/journaling, fix some minor bugs, take care of the operational bits (keep the database up).
3) Reach the scaling limits of your initial approach, but there are some obvious (not trivial) things to do: re-implement the transaction creation directly in the database (10x perf gain), maybe sharding, maybe putting old tx into colder storage, etc.
This is spread out over a while, so I haven't seen it be a full-time job, even at real startup-level (+10% MoM) growth. Even if it was, that's one person, not a whole team. I understand engineers that instead are pulled towards projects where they are in higher demand.
In another comment somebody said ledger systems are trivial when done right and super hard when done wrong - so if you did a good job it kinda looks like you just created 3 tables and some code. That seems thankless, and job searching as this type of specialist is harder than just being a generalist.
There's a reason Stripe is as successful as it is. And then there's a world where a company outgrows Stripe.
There are worse career choices ("prompt engineer" LOL) than financial engineering.
stripe on bookkeeping: https://stripe.com/guides/atlas/bookkeeping-and-accounting
recently I discovered that in a medical billing context the system is way, way weirder than I had seen before. I shipped the three tables, but getting everything into it seems like it might be an endless battle
Here's an example of a core banking deposit transaction schema that has been extensively battle tested in many small & mid-size US institutions:
https://jackhenry.dev/open-enterprise-api-docs/operational-d...
You may note fields like "Effective Date" & "Affects Balance/Interest", which imply doing this correctly may involve exploring some interesting edge cases. Wouldn't it be cool if you could just cheat and start with an approach that already considers them?
It was like reading an engineering team saying their attempt to design a new lightweight mountain bike failed. It turned out saving weight by omitting brakes wasn't a good idea, and their attempt to fix it by riding beside each bike and manually slowing it down wasn't too popular either. Then they have the hubris to follow that up with, "you can avoid the same mistakes by reading what we have to say on bicycle design".
The lessons they can take away have very little to do with engineering or FinTech. I'd file it under doing a bit of research before committing to any major task. Basic accounting principles are centuries old now. They could have learnt about them by buying a high school textbook, reading it cover to cover, and doing the exercises. It would have taken them less than a week.
Admittedly that only gives you the "how", not the "why". You realise much, much later that the engineering equivalent of double entry accounting is two systems design by different teams that are constantly comparing their outputs. By the time you've figured that out, it's caught so many errors you realise "holy shit, these systems are really easy to screw up, and are under constant attack by malicious actors".
There is a hidden trap at every step: floating point being imprecise, currency rounding not being reversible, tax calculations being one-way, ACID being a hard requirement. I've seen this mob screw up tax calculations. Floating point throwing out 1 in 100 million transactions was one of the joys they never got to experience.
The lesson for me here is that there is no substitute for knowing or caring about what you're doing. Dancing cents don't happen when engineers care about correctness in the first place. I feel like I just read a college freshman explaining how there's no possible way they could have passed their first exam. Better yet, he's here to sell us on his thought leadership without ever explaining why he failed the first exam.
Worse, I've worked somewhere the transaction reference ID from the vendor was not recorded; it's lost, so the past data cannot be reconciled!
in the UK, as an engineer, if I'd built this I would expect the regulator to come after me personally for not ensuring the system had adequate controls to protect clients money/investments
with a potentially unlimited fine + prison time
[1]: technically those performing a Certified Function
Here in the US, programmers like to call themselves Engineers, forcing everyone else to use the term "Professional Engineer" or "Licensed Engineer" or some other modifier before their title. I hate it, I wish they would stop, but it's not going to happen.
Software here is a wild, wild, West. The motto most live by is "move fast and break things"... even when those things are people's lives.
Railroad locomotive operators, ship engine operators.
The name precedes the creation of licensed tertiary education level engineers.
A lot of people seem to ignore the fact that licensed professions requiring an accredited diploma from a tertiary-level education program are a relatively recent feature of our societies.
The low level guys were just doing their jobs, and each individual transaction was mostly legal. A few weren't but it's hard to sort out which ones. Maybe the management should be responsible for ensuring that nobody ever did anything illegal, but they can't really supervise everything all the time, can they?
Poof. Guilt is just a rounding error that all rounds down to zero. The government passes some new regulations to prevent that particular scenario from happening again, and the same people set about finding a new scam.
Once you get to the top, the act of pulling up the ladders behind you is "just" self preservation.
there's a separate certification function for managing certification function employees, and they're jointly liable for anything they do
For certain regulated professions there is, if a building falls down due to a bad design the professional engineer (PE) that signed and sealed the plans can be held personally liable.
Overconfidence can quite literally be fatal.
Deliberately implementing a financial system that ignores established (and probably legally required) accounting practices? That's kind of like a structural engineer willfully disregarding the building code because that's what management asked for.
Also you say regulations in the UK have been changed recently.
I'm not aware of regulations that apply to software engineers.
But it’s entirely possible for someone who calls themselves an engineer to not actually be a certified engineer. So the activity wouldn’t be regulated because the person isn’t part of a professional body that regulates members.
In that case, lack of competence would be a civil issue unless it resulted in something criminal.
It is still possible in the UK and I assume EU (chartered engineer and the EU-alternative).
So the reason it isn’t a PE-discipline is uptake, not the work itself.
not your job title, or piece of paper that you have that says you're X, Y or Z
In practice you have virtual accounts like "cloud expenses" and "customer subscription" that only go up/down over time, to be the counter-party for transactions in/out of your company. So it's not impossible to mess up, but it eliminates a class of mistakes.
Entry2: Account B: sends 1000 to Account A
And from GP:
> A consequence of that is you can check that the sum of all accounts is always 0.
Entry1 + Entry2 = 1000 + -1000 = 0
Amusingly, when I made my own personal money-tracking program over a decade ago for my bank accounts, this was how I implemented transfers between them, just because it was simpler to do. Years later when I heard this name, I also had trouble understanding it because I assumed I had done it the bad way and banks somehow did something more in-depth.
The legal entity or entities involved - if any - would be described in the linked commercial document that explains the transaction, and the chart of accounts that describes the meaning of each account code.
There is no requirement for a transaction to have exactly two entries. The term "double-entry" is slightly misleading; it is only trying to express that both sides of the accounting equation are equal. When recording a sale, for example, it is more likely that three or more entries are journaled, due to the additional coded entries for sales tax, and those produced for separate line items for each SKU, shipping etc.
A better phrase is "two-sided accounting", which is also used, but less commonly than "double-entry".
The total amount of the credits would equal the cash debit.
"For every debit there must be a credit"
or
"Every transaction has two sides"
In the case of moving money between regular bank accounts in the same institution, you regard that as a movement between two asset accounts, whilst the bank regards that as a movement between two liability accounts.
So their entries would have the same magnitude as yours, but inverted signs.
Which means you aren't even thinking of the "bad" single-entry version, which is what a lot of people here are stumbling over because apparently it's more natural: a "transfers" table with the columns FromAccount, ToAccount, Amount, where a single row represents both of the "Entry" rows from mine above.
In practice most systems allow for more than two entries in a single transaction (ex: take from bank account, give to salary, give to taxes) but always at least two, and always adding up to 0.
> Ledgers are conceptually a data model, represented by three entities: Accounts, Entries and Transactions.
> Most people think of money in terms of what’s theirs, and Accounts are the representation of that point of view. They are the reason why engineers naturally gravitate towards the “simplicity” of single-entry systems. Accounts are both buckets of value, and a particular point of view of how its value changes over time.
> Entries represent the flow of funds between Accounts. Crucially, they are always an exchange of value. Therefore, they always come in pairs: an Entry represents one leg of the exchange.
> The way we ensure that Entries are paired correctly is with Transactions. Ledgers shouldn’t interact with Entries directly, but through the Transaction entity.
Edit: no offense but sibling comment is an example :P
Single-entry: each row of your DB stores {account, delta}.
With double-entry you are guaranteed via your schema that sum(debit delta) = sum(credit delta), that's it. Money is "conserved".
It's easy to denormalize the double-entry into a single-entry view.
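For illustration, a sketch of both the per-transaction invariant and the denormalized view, with invented rows:

    from collections import defaultdict
    from decimal import Decimal

    entries = [
        {"tx": 1, "account": "cash",    "delta": Decimal("500")},
        {"tx": 1, "account": "revenue", "delta": Decimal("-500")},
        {"tx": 2, "account": "cash",    "delta": Decimal("-120")},
        {"tx": 2, "account": "rent",    "delta": Decimal("120")},
    ]

    # Schema guarantee: per transaction, the deltas sum to zero.
    by_tx = defaultdict(Decimal)
    for e in entries:
        by_tx[e["tx"]] += e["delta"]
    assert all(total == 0 for total in by_tx.values())

    # The single-entry view is a projection, not a second source of truth.
    view = defaultdict(Decimal)
    for e in entries:
        view[e["account"]] += e["delta"]
    # view: cash 380, revenue -500, rent 120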
That Kleppmann article talking about movements as edges of a DAG is the only place this is ever talked about clearly.
Once you pair this with another entity or company doing the same, it becomes very hard for money or goods to vanish without the possibility to track it down. Either your books are consistent (sum of "stock" + sum of "ingress" - sum of "egress" - sum of "waste" makes sense), or something is weird. Either your incoming or outgoing goods match the other entities incoming or outgoing records, or something is amiss in between.
This is more about anomaly detection, because paying a clerk can be cheaper than losing millions of dollars of material by people unloading it off of a truck en-route.
Double entry accounting is useful because it enables local reasoning. Let me explain why! If you've remembered those other explanations, you hopefully remember the fundamental equation of accounting:
assets - liabilities = shareholder equity
Well, you can also define this a bit more broadly as assets + Δassets - liabilities - Δliabilities = equity
In other words, changes in assets or liabilities must also balance. I sometimes think of these as "income" and "expense," probably because GNUcash has accounts of that type. If you rearrange that expanded equation you get (assets - liabilities) + (Δassets - Δliabilities) = equity
If the grouping on the left represents the state before a transaction, and the grouping on the right represents a single transaction, then we get a super useful auditing tool: as long as the book is balanced before any given transaction, the book will remain balanced after the transaction, as long as the transaction itself is balanced. You can now reason about a narrow set of things instead of the whole book!

In practice, what this means is that if your system is losing cents on every transaction, as the OP article states, each transaction should be flagged as imbalanced. From there it should be super easy to see why, since you can examine a single transaction.
To achieve this you need all entries / actions to have a unique transaction ID to group by, and a notion of which accounts are liabilities and which are assets, etc. As OP article mentions, there's a ton of implementation nuance.
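A sketch of that audit with hypothetical data: the offending transaction fails its own local check, so you never have to diff the whole book.

    from decimal import Decimal

    transactions = {
        "tx-1": [("cash", Decimal("19.99")), ("revenue", Decimal("-19.99"))],
        "tx-2": [("cash", Decimal("10.00")), ("revenue", Decimal("-9.99"))],  # leaks a cent
    }

    imbalanced = {
        tx_id: sum(delta for _, delta in legs)
        for tx_id, legs in transactions.items()
        if sum(delta for _, delta in legs) != 0
    }
    # imbalanced == {"tx-2": Decimal("0.01")}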
There's an entire section on double-entry accounting in the article. The tl;dr is that if you take money out of an account, you need to place money in another account and vice versa. So, you have a table called "accounts receivable" which track money the company is owed. If you actually get paid, you remove money from "accounts receivable" and add money to the "cash" account, instead of just increasing the amount of cash you have.
It makes it much more difficult to lose track of money or have it stolen. In a single-entry system, I could receive money from a customer for services owed and just keep it for myself.
https://github.com/adamcharnock/django-hordak
I created and maintain this, along with a couple of others. It is built for Django (so great if you're using Django), but extracting the schema wouldn't be too hard. It also has MySQL support, but the integrity checks are more limited.
(Side note: I'm a freelancer and available!)
I gave a boorish lecture to a junior accountant recently...
When you make things up yourself it is just you and Excel. When you use double entry you have 100s of years of history to fall back on. https://en.m.wikipedia.org/wiki/Double-entry_bookkeeping
More experienced founders would take a look at it and immediately nope out of it, and so not achieve success (after a grueling amount of work).
There is no in-between.
Martin Fowler wrote quite a bit on the subject, and it's a good match for event sourcing.
Relatedly, some months ago I asked how to correctly record food in a double-entry bookkeeping system [1] and it triggered a 1,300-word, 8-level-deep discussion. We should remember more often that accounting is a degree by itself.
I kept digging and digging on a "sell some lemonade for $5" example, and ended up at:
- $5 debit to cash (asset => debit means +5)
- $5 credit to revenue (equity => credit means + 5)
- $X debit to cost of goods sold (expense => debit means + X)
- $X credit to inventory (asset => credits mean - X)
A double-entry for the money, and a double-entry for the inventory, for a total of 4 entries.

It's too complicated for me. I'd model it as a Sale{lemonade:1,price:$5} and be done with it. Nothing sums to zero and there's no "Equity + Income + Liabilities = Assets + Expenses" in my version.
But this is HN, and I think a lot of people would call my way of doing things "double" because it has both the lemonade and the money in it. So when I say I'm not sold on doing actual double-entry [https://news.ycombinator.com/item?id=42270721] I get sweet down-votes.
but every startup these days wants to become a marketplace where you are facilitating multiple third-party lemonade vendors and taking a cut for letting them use your platform. In that case, the flow of money quickly gets too hard to understand unless you have double-entry
Accounting predates computers by hundreds of years. There are better ways to do certain things, of course, but we must follow convention here because that's the norm and everyone understands it.
But that is an interpretation made by the viewer. A customer typically is an asset account, whose balances are in the debit column. But if we somehow owe them money because let's say they paid us an advance, then their balance should be in the credit column. The accounting system need not bother with what the "right" place for each account is.
It is quite practical to have only a simple amount column rather than separate debit/credit columns in a database for journal entries. As long as we follow a consistent pattern in mapping user input (debit = positive, credit = negative) into the underlying tables, and the same when rendering accounting statements back, it would remain consistent and correct.
Another benefit of the credit/debit sides in double-entry bookkeeping is that you need to balance both sides within a single transaction. Say user account 2003201 is on the credit side and it gets an addition of 1000 in value; the same value needs to be added on the debit side. If it's (1) a cash top-up, then 1000 needs to be added to the cash account (let's say 1001001) on the debit side. Otherwise, if it's (2) a transfer from another user account 203235, then that account needs to be debited 1000 as well.
It's Assets = Liabilities + Equity, where the left side of the equation is the debit side (which increases in value when a debit transaction happens) and the right side is the credit side (which increases when a credit transaction happens). In case (1), the cash account increases since it's a debit account, while in case (2), the user account decreases because it's a debit transaction on a credit account.
I think it is useful to think about double-entry book-keeping in two layers. One is the base primitive of the journal - where each transaction has a set of debits and credits to different accounts, which all total to 0.
Then above that there is the chart of accounts, and how real-world transactions are modelled. For an engineer, to build the base primitive, we only need a simple schema for accounts and transactions. You can use either amount (+/-ve), or debit/credit for each line item.
Then if you're building the application layer which creates entries, like your top-up example, then you also need to know how to _structure_ those entries. If you have a transfer between two customer accounts, then you debit the one who's receiving the money (because assets are marked on the debit side) and credit the other (because liabilities are on the credit side). If you receive payment, then cash is debited (due to assets), and the income account is credited (because income balances are on the credit side).
However, all of this has nothing to do with how we structure the fundamental primitive of the journalling system. It is just a list of accounts, and then a list of transactions, where each transaction has a set of accounts that get either debited/credit, with the sum of the entire transaction coming to 0. That's it -- that constraint is all there is to double-entry book-keeping from a schema point.
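A minimal sketch of that primitive (Python; the names are invented, and the only rule lives in the transaction constructor):

    from dataclasses import dataclass
    from decimal import Decimal

    @dataclass(frozen=True)
    class Entry:
        account: str
        amount: Decimal  # positive for debit, negative for credit, consistently

    @dataclass(frozen=True)
    class Transaction:
        description: str
        entries: tuple[Entry, ...]

        def __post_init__(self):
            if sum(e.amount for e in self.entries) != 0:
                raise ValueError(f"unbalanced transaction: {self.description}")

    # A top-up: cash (asset) is debited, the customer account (liability) credited.
    Transaction("top-up", (
        Entry("cash:1001001", Decimal("1000")),
        Entry("customer:2003201", Decimal("-1000")),
    ))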
One that's not referenced in this article and compile all of them is: https://github.com/kdeldycke/awesome-billing#readme
https://en.wikipedia.org/wiki/Resources%2C_Events%2C_Agents
You can see that this model has all of the features discussed in the article, and then some, and REA events map naturally to something like event sourcing. You can project a REA dataset into a double entry ledger, but you often can't go the other way around.
“At the heart of each REA model there is usually a pair of events, linked by an exchange relationship, typically referred to as the "duality" relation. One of these events usually represents a resource being given away or lost, while the other represents a resource being received or gained.”
This is what I see as the main difference between single book accounting and double book accounting, with REA having some OO things added to the model to more accurately represent business objects when a computer is used. What am I missing about REA that makes it better than double book as implemented in the way this post was talking about implementing it?
You could argue that when representing both DEB and REA in an entity-relational model, they might have some similar looking tuples and relations, but that does not entail that they have the same data model. As I said in my initial post, REA is a richer data model that captures more information. You can reproduce the ledgers of DEB from a REA system, but you cannot go the other way in all cases.
DEB is a fictitious abstract model. "Accounts" and "ledgers" aren't real, they are fictions, artifacts of a model we use to indirectly track events. DEB doesn't even have a notion of economic actors that take part in an exchange. As such, it breaks down with multiparty transactions, for instance. DEB can of course be extended to handle such notions, but it's no longer just DEB and starts encoding something more like REA, just within a less meaningful foundation, eg. there is no such thing as a "normal balance" because this need results from a fictitious accounting model.
The article also mixes concerns that are not actually part of accounting but of different ontologies, eg. pending->discarded|posted is recording what may AND did happen, accounting is only supposed to record what actually happened. Which isn't to say that tracking what may happen isn't necessary, but mixing it into your accounting records is dubious, and simply muddies what's supposed to be a simple and reliable system that records what actually happened.
Just look at the sample REA pattern involving a cashier, customer and sales person. The information that this exchange involves 3 parties is not even captured in DEB. This is why I said you can reconstruct DEB from REA because REA is richer, but not the other way around.
The tech founder should know their shit prior to building it so that time and runway and deadlines are no excuse. Really if you are doing fintech you should have employment experience and understand all the operations well.
Otherwise they are no better than, say, home builders who mismanage and go bankrupt. Or, less generously, they are con men.
I didn't understand this part, can someone give examples of good and bad approaches for both credit and debit accounts?
Perhaps popular culture should reconsider the use of nosql databases, and the tremendous effort then spent trying to turn them into relational databases, given that nosql databases can be set up to be only eventually consistent.
Money, and accurate accounting don't work too well when the inputs to a calculation of a balance are eventually consistent.
At times it seems so much work is done to avoid learning SQL that it can rival, or exceed, the work saved, once NoSQL leads you down the path of inevitable relational needs.
In addition to this, maybe it's time to build honeypot ledgers with a prize and reward for anyone who can hack or undermine them, similar to vulnerability bounties. Run for long enough, it would at least reduce that side of security mistakes in startups. Run sprints with prizes, let them play out like daily fantasy contests, and watch things harden.
transactions: [
(txID, timestamp, [
(accountID, delta, otherInfo),
...
], reason),
...
]
accounts: [
(accountID, routingNumber, accountNumber, ownerID),
...
]
Crucially, notice that "accounts" don't track balances here, and so I'm asking: why would I need TWO entries per transaction, and why would I need to do my own tracking of the balance of every account, when I can just keep a single entry per transaction in a proper ACID database and then build any view I need on top (such as running balances) with a battle-tested projection mechanism (like a SQL View) so that I still track a single source of truth?

Calling it "double" might be misleading - in the ledger systems I've worked on, a transaction that has only two entries - a single source and a single destination - is actually rare. In practice, a payment tends to have a source, the intended destination, a secondary destination for platform fees, possibly another destination for taxes.
And much more complicated situations are possible - I currently work in medical billing where there are multiple virtual internal accounts involved in each transaction for different kinds of revenue (including, for example, money we billed for but don’t expect will actually get paid)
so a transaction becomes a bundle of several debits or credits that happen simultaneously and sum to zero. If you have that, you have double-entry bookkeeping, even if your schema puts it all in one table.
It's funny how many commentators here confuse debit/credit with double-entry.
Double entry is irrelevant, here.
I love the "use crypto" to fix the problem suggestion. LOL!
It's kind of poignant since crypto has given me a fantastic club to beat executives over the head with whenever they want to do stupid handling of money. Since crypto has so many decimal places, you break dumbass floating point handling of money immediately and irretrievably for way more than just "a couple cents". Your CFO starts jumping up and down really excitedly about handling money properly when he has to worry about a rounding error forcing him to compensate a Bitcoin price jump/crash.
At a fintech startup I was working with, we built a "shadow ledger" because we couldn't trust a 3rd party PP to give us the correct information, which would otherwise allow double spending on accounts.
We tried 3 different (major!) PPs - they ALL had similar flaws.
It's the same with computer systems. The charts show something, but until enough people decide to all withdraw their money or sell their stock at the same time, nobody has any idea that the money or asset simply isn't there or nobody knows just how frothy the valuation is.
Social media and search algorithms are highly optimized to ensure that people don't sell or withdraw stuff at the same time. Modern media directs massive attention towards certain topics as a way to draw attention away from other topics which could collapse the economy.
Also, imagine a bank has a serious bug which causes millions or billions of dollars to disappear every year or creates extra illegitimate dollars. Imagine they only discover this bug after a few years of operation... How likely is it that they will report it to an authority? They didn't notice it for years, why not pretend they didn't notice it for a few MORE years... The incentive to delay the reckoning is an extremely powerful one.
I literally built the same system and asked about it on Ask HN, but no one answered. I was already using double-entry and a ledger, based on some research and AI advice, lol.

I implemented all the checks mentioned in your page and am quite satisfied using Postgres constraints and trigger functions to detect abnormalities before a bad row is inserted.

But the problem now is populating the database with fake data: because of how rigorous my constraints and rules are, I need to simulate real-world use cases and duplicate them on a development machine, and this is hard.

If you ever read this, can you give some advice, please? Simply looping that function is not going to work because of the double-entry and ledger constraints.
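One way around the fake-data problem, sketched below, is to generate transactions that satisfy the zero-sum constraint by construction, so the triggers never fire (a minimal sketch assuming a simple two-leg model; account names invented):

    import random
    from decimal import Decimal

    ACCOUNTS = ["cash", "cust:1", "cust:2", "fees", "revenue"]

    def fake_transaction(rng: random.Random) -> list[tuple[str, Decimal]]:
        src, dst = rng.sample(ACCOUNTS, 2)
        amount = Decimal(rng.randint(1, 100_000)) / 100  # random cents
        # Emit both legs together; they can never disagree.
        return [(src, -amount), (dst, amount)]

    rng = random.Random(42)  # seeded, so failures are reproducible
    for _ in range(1000):
        legs = fake_transaction(rng)
        assert sum(d for _, d in legs) == 0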
Sometimes to 2 digits. Sometimes 3. Or 4. How many significant digits are there even in fractional SPY holdings? You just resort to looking in all the places (statements, transaction history, overview dashboard, …) and going with the one that shows the most digits, and assume that's all of them.
Big and small companies do this. And when I've reported it, they don't care. They don't see the problem.
I did not like finance much, but building a fintech system really teaches a lot, not only from a technology perspective but also in managing stakeholders, processes, compliance, and dealing with all kinds of finance-specific issues.
One should always work in fintech at some point in their career.
1) That being able to escape characters in CSV files is kind of important. (Some customers noticed that a single quote in a transaction description would make the importer silently fail to import the rest of the transactions... :S ), and
2) Why it wasn't good to store all dollar values as doubles. I'm reasonably sure I quoted Superman 3 at some point during this discussion.
I pushed for offsetting entries instead, but was overruled. It was a _nightmare_.
Thankfully, it is not $previousjob.
I want to see if I have time to finish this article before I have to leave. And now I have to waste time copy-pasting the contents into a text file, just to see where I am.
Edit: Actually, it's invisible only in Firefox. Chrome shows the scrollbar still. So this may be a bug in the author's CSS or something.
Double-Entry vs. Single-Entry Systems: The article emphasises the superiority of double-entry accounting over single-entry systems. Double-entry ensures that all financial transactions are recorded with both a source and destination, providing clearer insights into the flow of money, reducing errors, and making debugging easier.
Data Model Structure: A well-designed ledger system should treat money tracking as a separate data model, consisting of accounts, entries, and transactions. This structure enables easier reporting and a better understanding of what drives financial changes.
Need for Context: Building ledgers is not just about coding; it requires a solid understanding of accounting principles and the specific context in which a business operates. The author advocates for engineers to grasp the nuances of financial systems to avoid pitfalls in their implementations.
These insights serve as a valuable guide for engineers and startups venturing into fintech, highlighting how to approach ledger design and the critical importance of maintaining financial accuracy.
It appears the startup in question just never even did step one. A financial system that is imprecise does not work.
Additionally that mantra is typically applied at the story/issue level in my experience as in:
1. Get the tests passing.
2. Get the code clean.
3. Identify scaling bottlenecks and remedy if necessary.
Entry(account, direction, non-negative amount), direction is debit or credit.
vs
Entry(account, signed amount), + is debit, - is credit (for example).
It's a two-way mapping and should be equivalent, unless a debit or credit amount could be negative. But as I understand it, that's a big NO-NO in accounting.
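A sketch of that two-way mapping (illustrative names); it is only lossless because amounts are required to be strictly positive:

    from decimal import Decimal

    def to_signed(direction: str, amount: Decimal) -> Decimal:
        assert amount > 0, "negative debits/credits are the big NO-NO"
        return amount if direction == "debit" else -amount

    def from_signed(signed: Decimal) -> tuple[str, Decimal]:
        return ("debit", signed) if signed > 0 else ("credit", -signed)

    assert from_signed(to_signed("credit", Decimal("25"))) == ("credit", Decimal("25"))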
haha, if only.
move fast, break everything if it gives profit
The last fintech I worked for had a joke about how you weren't really an employee until your code had lost some money.
Sure there was. A financial services company wanted to replace their repayment plan generator that ran on an aging AS/400 with something running in .NET on Windows.
I dug in and learned all about time value of money, numerical formats and precision, rounding strategies, day-counting strategies for incomplete periods (did you know there are many dozens of those?), etc.
I made everything as configurable as I could, so we had the option to offer it to other financial-services clients by just setting up a different profile.
Since I had no interaction with the client before the first presentation meeting, I put in what felt to me like the most plausible config (I had managed projects in the domain before, so I was not completely clueless about the mindset).
We had the meeting. I showed a few calculated plans which they compared on the spot to the AS/400 output. I deployed on a test VM for them so they could do more extensive testing. Code was accepted with 0 change requests and put into production shortly thereafter. Don't think they ever changed from the default settings.
We routinely re-discover stuff that probably was already solved by a quiet lady writing a CICS transaction on a System/360 in 1969.
Yeah, dealing with money, and especially other people's money, without double-entry bookkeeping is bad practice.
But this particular problem is the consequence of the choice of using floating point binary math to deal with monetary quantities.
Given that most modern languages have very awkward support for arbitrary-precision decimal numbers, the most sensible way to deal with money usually boils down to storing it as an integer.
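A sketch of that integer-minor-units approach with illustrative helpers (not any particular library's API; assumes non-negative USD amounts):

    def parse_usd(text: str) -> int:
        """'12.34' -> 1234 cents, with no float in sight."""
        dollars, _, cents = text.partition(".")
        return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])

    def format_usd(cents: int) -> str:
        return f"{cents // 100}.{cents % 100:02d}"

    assert parse_usd("12.34") == 1234
    assert format_usd(1234 + 66) == "13.00"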
Or are we talking about random scammers as 'fintech' now rather than banks (the fintech of which might indeed be old enough to be still running on COBOL-like systems) ?
It starts in education and perpetuates itself via hiring, the blogosphere, and programmer celebrities.
PG's rounding can be summed up as: we round to the nearest digit you specify, but how we get there is mostly undefined.
Every bank and organization you do business with (especially if you cross jurisdictions) will likely round pennies differently. PG and the decimal data type (which is actually the numeric data type) are only part of the solution: they won't handle rounding pennies correctly for you, but they will store the result if you round properly yourself.
PG also has a money data type, but it doesn't let you specify the rounding rules either, so you have to be careful. Also, money is tied to lc_monetary, which is handled outside of PG's control, so you have to be very careful with it when moving data across PG instances, or you might be surprised. Also, it doesn't let you specify which currency the money is in, which means you have to store that yourself if you ever have more than one currency to care about.
If you don't care about balancing to the penny with your external organizations(banks, CC processors, etc), you are 100% guaranteed to have a bad time eventually.
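Since the database won't pick your rounding rules for you, spell them out in application code. An illustrative sketch with Python's decimal module, where the difference between modes shows up on exact ties:

    from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

    cent = Decimal("0.01")
    assert Decimal("2.675").quantize(cent, rounding=ROUND_HALF_UP) == Decimal("2.68")
    # Banker's rounding breaks ties toward the even digit:
    assert Decimal("2.665").quantize(cent, rounding=ROUND_HALF_UP) == Decimal("2.67")
    assert Decimal("2.665").quantize(cent, rounding=ROUND_HALF_EVEN) == Decimal("2.66")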
But apart from the technical points, another interesting thing in the article is its introduction and what it says there about money as a concept. Yes, money is debt, both philosophically and technically it is the right way to think about it. And I believe that's something the crypto-assets industry and enthusiasts as a whole fundamentally gets wrong (mostly because of the libertarian political point of view blockchain tech has been designed with).
If a new transaction enters the system now, I could follow the advice and record it as two sources of truth. Or I could just record it once.
If I could turn a single transaction into two ledger entries today, I could do the same later if I needed to.
double entry is actually the simplest projection of any possible billing system because anything that moves money can be described as a bundle of credits and debits that happen simultaneously and sum to zero. so you get accounts for taxes collected and fees charged automatically and don’t have to write custom logic for each thing
This means when something goes wrong, you have a chance to figure out what in the world happened.
That's it. If you want to think of it as a state machine, you are storing all the state that happened with the resource you are tracking(usually money). This way when some part of the resource is missing, you can go figure out what happened.
Think about it another way: always record enough of the transaction, that if you were required to sit in a courtroom in front of a jury and explain what happened, you could do so. If you can't do that, you didn't store enough information.
You can't imagine anything simpler than what you described, because it's single-entry.
Double-entry is twice as complicated, makes sense only to accountants and not to computing people. Your example of 1 transaction would be doubly-kept as some nonsense like
BucketA revenue:$x CR cash:$x CR
BucketB revenue:$x DB cash:$x DB
Rounding in particular is a truly endless source of trouble and has caused me to chase after a lot of cents. Dividing up a large payment into multiple installments is the major cause of rounding in my use case. Life starts to suck the second things fail to be evenly divisible. I created an account to track gains and losses due to rounding, and over time it's adding up to quite the chunk of change.
Hilariously, the payment systems would charge me incorrect rounded up amounts and then they would refund the difference to my credit card at some undefined time in the future. Tracking and correlating all these seemingly random one or two cent transactions has got to be one of the most annoying activities I've ever learned to put up with. Not only do I have to figure out why things aren't quite adding up, I have to patch things up in the future when they fix the mistake.
Why can't these things just charge the correct amounts? For example, imagine splitting up $55.53 into four installments. That's 4x$13.8825. They could charge me 3x$13.88 + 1x$13.89. Instead they round it up to $55.56 and charge me 4x$13.89, then maybe they refund me $0.03 some unknown day in the future. It's like the systems go out of their way to be as annoying as possible. Some systems do this silly single-cent refund dance in my credit card statement even though they print the exact same 3xN + 1x(N+0.01) solution on the receipt. Makes absolutely no sense to me.
It's getting to the point I'm trying to avoid this nonsense by structuring purchases so the final price is evenly divisible by some common factors. I never liked the .99 cents manipulation trick but I seriously hate it now.
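For reference, the split they could do instead is only a few lines (a largest-remainder sketch; the function name is invented):

    from decimal import Decimal

    def split(total: Decimal, n: int) -> list[Decimal]:
        cents = int(total * 100)
        base, remainder = divmod(cents, n)
        # 'remainder' installments get one extra cent; the rest get the base.
        return [Decimal(base + (1 if i < remainder else 0)) / 100 for i in range(n)]

    parts = split(Decimal("55.53"), 4)
    assert parts == [Decimal("13.89"), Decimal("13.88"), Decimal("13.88"), Decimal("13.88")]
    assert sum(parts) == Decimal("55.53")  # nothing leaks, no refund dance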
Now let's say your price is "0.00023" per unit and someone uses "213.34" units. Can you imagine it now?
The main idea behind double ledger accounting is that if you add a long list of the same numbers twice (as totally independent operations), and you have added both lists correctly, the two independent results will be the same. If you made at least one mistake in adding either or both of the lists, it is possible, but unlikely, that the results will match.
It's easy to think that computers don't make mistakes like humans when adding numbers; however, floating point addition over long sequences of numbers is order-dependent (and error-prone if you don't sort the numbers first and start with the small ones).
While double ledger systems won't fix this problem, they will identify if you have a problem with your addition (particularly a non-deterministic one like you would find with floating point addition) when you go to reconcile your books and find that the numbers across the various accounts don't add up.
Double entry is (confusingly) not about recording it twice, it's about using a transaction model where the state of N accounts has to be changed in compensating directions within a transaction for it to be valid, N being >= 2.
So depending on how your transaction schema is defined, a double-entry transaction can be written without ever even repeating the amount, e.g.
{"debit": "cash:main", "credit": "giftcards:1234", "amount": 100}
Making it effectively impossible to represent an invalid state change. Things get trickier when N > 2, as classical double-entry tends to _not_ directly relate tuples of accounts to one another, instead relying on an aggregated balancing of the changes to the N accounts at the transaction level, though YMMV between different ledgering systems.

I'd be quoted a day-rate. That was what I was actually going to get paid, one day. But then I'd be told to bill it as an hourly rate. And then actually to bill it as 7.5 hours.
But I wasn't told what the hourly was - the hourly was whatever my day rate was, divided by 7.5. So this led to the problem that it produced an irrational number as a result.
Technically this should've been fine...except no one I dealt with knew or cared about this concept - they all used Excel. So if I rounded the irrational to nearest upper cent (since that's the smallest unit which could be paid) they complained it didn't add up. If I added a "correction item" to track summing up partial cents, they complained it wasn't part of the hourly.
In the end I just sent python decimal.Decimal at maximum precision, flowed through invoices with like 8 digits of precision on the hourly rate, and this seemed to make Excel happy enough. Of course it was completely useless for tracking purposes - i.e. no one would ever be able to pay or unpay 0.666666666667 cents.
Because what's not in employment contracts that really should be? Any discussion on how numbers are to be rounded in the event of uneven division. You just get to sort of guess what accounting may or may not be doing. In my case of course it didn't matter - no one was ever going to hold me to anything other the day rate, just for some reason they wanted to input tiny fractions of a cent which they actually couldn't track.
And it's not an idle problem either: i.e. in the case of rounding when it comes to wages, should it be against the employee? It's fractions of a cent in practice, but we're not going to define it at all?
Your issue is just people being lazy and forcing a day rate into an hourly employment system.
The only way for this to be true is if your day rate was irrational to begin with.
If the price is "0.00023" per unit and someone uses "213.34" units I feed those strings into a multiplication function that returns the correct string 100% of the time.
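For what it's worth, that multiplication function is a one-liner with exact decimals (a sketch; rounding the result to a payable amount is a separate, explicit step):

    from decimal import Decimal

    def multiply(price: str, units: str) -> str:
        return str(Decimal(price) * Decimal(units))

    assert multiply("0.00023", "213.34") == "0.0490682"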
That much I understand. I don't get how that category of problems is addressed by the solution described.

What I also understand is that you inevitably get to deal with accountants or other finance people. Working in a format they understand is not optional; they will have you transform whatever you have into that anyway.

I've learned not to wonder why, but maybe I should.
But you're not coming up with a valid monetary amount.
> Prerequisites: TigerBeetle makes use of certain fairly new technologies, such as io_uring or advanced CPU instructions for cryptography. As such, it requires a fairly modern kernel (≥ 5.6) and CPU. While at the moment only Linux is supported for production deployments, TigerBeetle also works on Windows and MacOS.
wha— what? why?? they must be solving some kind of scaling problem that I have never seen
I’m trying to get my head around how to build a fairly complex ledger system (for managing the cost schedules in large apartment buildings where everyone might pay a different proportion and groups of apartments contribute towards differing collections of costs) and you’ve just massively accelerated my thinking. And possibly given me an immediate solution.
Have you used tigerbeetle in production?
While I'd generally agree with this, in the case of TigerBeetle I think it's safe to trust their tech based on their test suite and practices - one of the most impressive things I've seen in the past 5 years.
they extend the testing practices of FoundationDB (now spun out into Antithesis[0]), going a layer deeper to test for data integrity when there is disk io corruption. check out from ~20:30 in this demo:
Nope. Double entry bookkeeping means every transaction is recorded in (at least) two accounts.
For example, you receive a sales order for 1000€ of widgets, which cost you 600€. You ship them. You invoice for 1000€.
No money moved (hopefully you do get paid at some point though). However, you need to do some bookkeeping.
On the left side of the ledger (debits): Accounts receivable 1000€. Cost of goods sold 600€.
On the right side of the ledger (credits): Revenue goes up by 1000€. Inventory goes down by 600€.
These completely match. No money has moved, but the books are now up to date and balance.
Any transaction that does not balance should be rejected.
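Expressed as one balanced transaction, that example looks like this (a sketch using the convention debits positive, credits negative; amounts are in EUR):

    from decimal import Decimal

    invoice_tx = [
        ("accounts_receivable", Decimal("1000")),   # debit
        ("cost_of_goods_sold",  Decimal("600")),    # debit
        ("revenue",             Decimal("-1000")),  # credit
        ("inventory",           Decimal("-600")),   # credit
    ]

    # "Any transaction that does not balance should be rejected":
    assert sum(amount for _, amount in invoice_tx) == 0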
Yeah it's a real pain to get started because you need to understand the core concepts first then fight to balance the transactions ("compile errors") before you have anything useful. And when your results are wrong, you know that at least it's not because of the basic stuff.
Every engineer in London is laughing at this right now.
Wow double entry! Ledgers are hard! Wow. So true.
Is this not the literal plot of Office Space? Did you check for a Michael Bolton employee?
See, if you're an engineer (from a web background) you might think that using a regular backend language with a relational DB would be fine. But a relational DB is optimized for concurrency, and while that might sound amazing, what you really want for a ledger is a simple queue. A queue ensures everything is applied in order and makes it impossible to accidentally execute transactions against stale state. And a single-threaded queue can still be executed amazingly fast - we're talking Nasdaq scale. If you use a standard DB you'll end up doing many horrible broken hacks to simulate what would have been trivial engineering had you used the right design from the start.
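A rough sketch of that single-writer design, in Python for brevity (account names and amounts are invented):

    import queue
    import threading

    balances = {"alice": 10_000, "bob": 0}   # integer cents
    q = queue.Queue()

    def writer():
        # The only thread that touches balances, so every transfer is
        # applied in arrival order and always sees the latest state.
        while True:
            src, dst, amount = q.get()
            if balances[src] >= amount:
                balances[src] -= amount
                balances[dst] += amount
            q.task_done()

    threading.Thread(target=writer, daemon=True).start()
    q.put(("alice", "bob", 2_500))
    q.join()
    print(balances)  # {'alice': 7500, 'bob': 2500}

Many concurrent producers can enqueue transfers, but serializing the applies removes the read-modify-write races you'd otherwise have to solve with locking or transaction isolation levels.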
You've got other things to worry about, too. Financial code needs to use integer math for everything; floating point and 'decimals' lead to unexpected results. The biggest complexity in running large-scale blockchain services is protecting 'hot wallets': whenever an exchange or payment system is hacked, the target is always the funds sitting on the server. When the industry began there were already ways to protect the security of these hot wallets; the trouble was that they required effort to implement and the protocols weren't widely known. So you got drive-by exchanges that handled millions of dollars with private keys sitting on servers, ready to be stolen...
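The floating-point half of that claim is easy to demonstrate; the integer alternative shown below uses cents, but the same holds for satoshis or any other minor unit:

    # Binary floats can't represent most decimal fractions exactly.
    print(0.1 + 0.2)          # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)   # False

    # Integer minor units don't have this problem.
    a_cents, b_cents = 10, 20
    print(a_cents + b_cents == 30)  # True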
Today there are many improvements in security: new cryptographic constructs like threshold ECDSA, hardware wallets, hardware key management (like enclaves), smart contract systems (decentralized secret sharing... and even multi-sig can go a long way), and designs that are decentralized from the start and remove the need for a centralized deposit system (you still need some level of centralization when trading to a stablecoin, but it's better than nothing).
I would say, though, that the era where people 'build' ledgers is kind of over. To me it appears we're organizing around a super-node structure where all the large 'apps' handle their own changes off-chain (or alternatively on regular trust). The bottom layer will still support payments, but it will be used less often, with more transaction activity happening on the layers above. I think it's still important to scale the chain and make it as secure as possible. Bitcoin has a unique focus on security above everything else, making it ideal for long-term hedges. For everyday stuff I can't think of any chains with more credible R&D than Ethereum - they even seem to have been doing research on the P2P layer now, which traditionally no one has cared about.
Decimals are integers. There's no difference between integer math and decimal math; the only difference is how the bit pattern is interpreted afterwards.
No, most decimal types are decimal floating-point. (So, sure, "floating point and decimals" is technically redundant, but a lot of people say "floating point" to mean IEEE 754 binary floats specifically, even if that's technically incorrect.)
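Python's decimal.Decimal makes both comments concrete: it is a floating-point format (sign, digits, exponent), but its significand is an integer, so the arithmetic is exact integer arithmetic at a decimal scale:

    from decimal import Decimal

    print(Decimal("66.67").as_tuple())
    # DecimalTuple(sign=0, digits=(6, 6, 6, 7), exponent=-2)
    # i.e. the integer 6667 scaled by 10**-2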
Doing it the right way doesn't take any longer than doing it the shitty way. Successful startups focus on extreme minimalism, not on doing the worst possible job.
Frankly, I'm not even convinced that double-entry is the sole right answer in this space. There are things you need to be able to represent, but the original reason for doing double-entry (catching arithmetic errors made when subtracting figures by hand) no longer applies.
(I've worked at investment banks and fintech startups)
I think this tends to get misrepresented, though, by people trying to literally keep two entries as if we were still doing pen-and-paper bookkeeping.
Because if I have a simple list of transactions, I can easily use a SQL query to recreate a view of double-entry bookkeeping. It's perfectly fine to just record the amount of money and two entity names in a table.
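A sketch of that claim using Python's sqlite3 (table and column names invented): each transfer is stored once, and the double-entry view falls out of a UNION:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE transfers (debit TEXT, credit TEXT, cents INTEGER)")
    db.execute("INSERT INTO transfers VALUES ('accounts_receivable', 'revenue', 100000)")

    # Each stored transfer shows up twice: once as a debit, once as a credit.
    rows = db.execute("""
        SELECT debit AS account, cents AS debit_cents, 0 AS credit_cents FROM transfers
        UNION ALL
        SELECT credit, 0, cents FROM transfers
    """).fetchall()
    print(rows)  # [('accounts_receivable', 100000, 0), ('revenue', 0, 100000)]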
Because the whole point of the system is that you can explain where the money went at any given time, referenced against something which should be independently auditable: e.g. a vendor receipt which in turn says "there's a physical item of X we gave you".
The "double entry" system's actual merit is when there's multiple people keeping the books. i.e. if you're in a business and your department is spending money on goods, then when you're doing that those transactions should be getting reflected in the shipping departments books (owned by a different person) and maybe your accounting department or the like. The point is that there have to be actual independent people involved since it makes fraud and mistakes hard - if you say you bought a bunch of stuff, but no one else in the business received it or the money to pay for it, then hey, you now need to explain where the money actually is (i.e. not in your pocket).
If you're starting a bank or need a ledger these days (and aren't using a core banking provider that has one), then I usually recommend TigerBeetle.