* It's obvious from the schema: if there's a `deleted_at` column, I know how to query the table correctly (vs. assuming rows are never DELETEd, or having to know where to look in another table)
* One way to do things: analytics queries, admin pages, and everything else can look at the same set of data, vs. having separate handling for historical data.
* DELETEs are likely fairly rare by volume for many use cases
* I haven't found soft-deleted rows to be a big performance issue. Intuitively this should be true, since indexed queries should be O(log n) (see the partial-index sketch after this list)
* Undoing is really easy, because all the relationships stay in place, vs. the data already having been moved elsewhere (in practice, I haven't found much need for this kind of undo).
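For reference, a minimal sketch of that `deleted_at` pattern in Postgres, with a partial index so the common "live rows" lookup stays fast even as soft-deleted rows accumulate (table and column names here are made up for illustration):

```sql
-- Hypothetical table using a deleted_at column for soft deletes.
CREATE TABLE orders (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_id bigint      NOT NULL,
    total       numeric     NOT NULL,
    deleted_at  timestamptz             -- NULL means the row is live
);

-- Partial index: only live rows are indexed, so the common lookup
-- stays O(log n) over live data even if many rows are soft-deleted.
CREATE INDEX orders_live_customer_idx
    ON orders (customer_id)
    WHERE deleted_at IS NULL;

-- Every "normal" query filters on deleted_at.
SELECT *
FROM orders
WHERE customer_id = 42
  AND deleted_at IS NULL;
```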
In most cases, I've really enjoyed going even further and making rows fully immutable, using a new row to handle updates. This makes it really easy to reference historical data.
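A rough sketch of what that immutable-rows approach can look like (a hypothetical `product_versions` table: an "update" becomes an INSERT of the next version, and a view exposes the current state):

```sql
-- Every change inserts a new version; rows are never updated in place.
CREATE TABLE product_versions (
    product_id bigint      NOT NULL,
    version    integer     NOT NULL,
    name       text        NOT NULL,
    price      numeric     NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (product_id, version)
);

-- "Update" the price of product 7: copy the latest version with a new price.
INSERT INTO product_versions (product_id, version, name, price)
SELECT product_id, version + 1, name, 12.99
FROM product_versions
WHERE product_id = 7
ORDER BY version DESC
LIMIT 1;

-- Current state = the highest version per product.
CREATE VIEW products_current AS
SELECT DISTINCT ON (product_id) *
FROM product_versions
ORDER BY product_id, version DESC;
```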
If I were doing the logging approach described in the article, I'd use database triggers that keep a copy of every INSERTed/UPDATEd/DELETEd row in a duplicate table. This way it all stays in the same database, easy to query and replicate elsewhere.
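Something along these lines, roughly following the audit-trigger example in the PostgreSQL docs (the `widgets` table and column names are placeholders):

```sql
-- Shadow table with the same columns as the source, plus audit metadata.
CREATE TABLE widgets_history (
    audit_op   text        NOT NULL,
    audit_time timestamptz NOT NULL,
    LIKE widgets
);

CREATE OR REPLACE FUNCTION widgets_audit() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO widgets_history SELECT TG_OP, now(), OLD.*;
    ELSE
        INSERT INTO widgets_history SELECT TG_OP, now(), NEW.*;
    END IF;
    RETURN NULL;  -- AFTER trigger: the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER widgets_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON widgets
    FOR EACH ROW EXECUTE FUNCTION widgets_audit();
```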
All your other points make sense, given this assumption.
I've seen tables where 50-70% of the rows were soft-deleted, and it did affect performance noticeably.
> Undoing is really easy
Depends on whether undoing even happens, and whether the act of deletion and undeletion require audit records anyway.
In short, there are cases when soft-deletion works well, and is a good approach. In other cases it does not, and is not. Analysis is needed before adopting it.
That said, we've had soft deletes, and during discussions about keeping them one argument was that they're really only a half-assed measure (data lost to updates rather than deletes isn't saved at all)
I think we largely need support for "soft deletes" to be baked into SQL or its dialects directly and treated as something transparent: selecting soft-deleted rows is the special case, regular SELECTs skip those rows, and there's support for turning regular DELETE statements into soft deletes under the hood (see the sketch below for one way to approximate this today).
https://news.ycombinator.com/item?id=43781109
https://news.ycombinator.com/item?id=41272903
And then make dynamically sharding data by deleted/not deleted really easy to configure.
You soft deleted a few rows? They get moved to another DB instance, an archive/bin of sorts. Normal queries wouldn't even consider it; it would only be reached when you explicitly select soft-deleted rows.
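Until something like that is built in, one way to approximate the "transparent" part in today's Postgres is a view plus an INSTEAD OF trigger, so regular SELECTs and DELETEs against the view behave as if soft deletes were native (a hypothetical `documents` table; a separate job could then sweep rows with `deleted_at` set into an archive table or instance, like the bin described above):

```sql
-- The base table keeps everything; the application only talks to the view.
ALTER TABLE documents ADD COLUMN deleted_at timestamptz;

CREATE VIEW documents_live AS
SELECT * FROM documents
WHERE deleted_at IS NULL;

-- Turn a plain DELETE against the view into a soft delete under the hood.
CREATE OR REPLACE FUNCTION documents_soft_delete() RETURNS trigger AS $$
BEGIN
    UPDATE documents SET deleted_at = now() WHERE id = OLD.id;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER documents_live_delete
    INSTEAD OF DELETE ON documents_live
    FOR EACH ROW EXECUTE FUNCTION documents_soft_delete();

-- Regular queries never see soft-deleted rows:
DELETE FROM documents_live WHERE id = 123;   -- actually sets deleted_at
SELECT * FROM documents_live WHERE id = 123; -- returns nothing
```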
Depending on your use case, having soft deletes doesn't mean you can't clean out old deleted data anyway. You may want a process that grabs all data soft-deleted X years ago and just hard-deletes it.
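For example, a scheduled retention job along these lines, assuming the usual `deleted_at` column (the two-year window is arbitrary):

```sql
-- Purge anything soft-deleted more than 2 years ago.
DELETE FROM documents
WHERE deleted_at IS NOT NULL
  AND deleted_at < now() - interval '2 years';
```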
> Depends on whether undoing even happens, and whether the act of deletion and undeletion require audit records anyway.
Yes, but this is no more complex than the current situation, where you always have to create the audit records.
At that point you should probably investigate partitioning or data warehousing.
I've seen a number of apps that require audit histories work on the basis that data is archived at a particular time; that's when the deletes occur and the indexes are fully rebuilt. This is typically scheduled during the least busy time of the year, as it's rather I/O intensive.
FWIW, no "indexes fully rebuilt" upon "actual deletion" or anything like that. The regular tables were always just "current" tables. History was kept in archive tables that were always up-to-date via triggers. Essentially, current tables never suffered any performance issues and history was available whenever needed. If history access was needed for extensive querying, read replicas were able to provide this without any cost to the main database but if something required "up to the second" consistency, the historic tables were available on the main database of course with good performance (as you can tell from the timelines, this was pre-SSDs, so multi-path I/O over fibre was what they had at the time I worked with it with automatic hot-spare failover between database hosts - no clouds of any kind in sight). Replication was done through replicating the actual SQL queries modifying the data on each replica (multiple read replicas across the world) vs. replicating the data itself. Much speedier, so that the application itself was able to use read replicas around the globe, without requiring multi-master for consistency. Weekends used to "diff" in order to ensure there were no inconsistencies for whatever reason (as applying the modifying SQL queries to each replica does of course have the potential to have the data go out of sync - theoretically).
Gee, I'm old, lol!
(Again, a lot is O(log n) right?)
I think one of our problems is getting users to delete stuff they don’t need anymore.
I'm pretty sure it is possible, and it might even yield some performance improvements.
That way you wouldn't have to worry about deleted items impacting performance too much.
But HOT updates are a thing, too.
Memory >>>>> Disk in importance.
I usually tell people to stop treating databases like Firebase, waxing on/waxing off records and fields willy-nilly. You need to treat the database as the store of your business process. And your business processes demand retention of all requests. You need to keep the request to soft delete a record. You need to keep the request to undelete a record.
Too much crap in the database? Then you need to create a field saying this record will be archived off by this date. On that date, you move that record off into another table or file that is only accessible to admins. And yes, you need to keep a record of that archival as well. Too much gunk in your request logs? Well then, you need to create an archive process for that as well.
These principles are nothing new. They are in line with “Generally Accepted Record Keeping Principles” which are US oriented. Other countries have similar standards.
We have a soft_deleted boolean that excludes data from all queries, and a last_updated column that a particular query can use if it needs to.
If over 50% of your data is soft deleted then it's more like historical data for archiving purposes and yes, you need to move it somewhere else. But then maybe you shouldn't use soft delete for it but a separate "archive" procedure?
One reason is that you might want to know when it was last updated before it was deleted.
There are many legitimate reasons to delete data. The decision to retain data forever should not be taken lightly.
The problems are mostly the same as with soft delete: valid_to is more or less the same as deleted_at, which we probably need anyway to mark a record as deleted rather than simply updated. Furthermore, there are far more records in the db. And what about the primary key? Maybe those extra records go to a history table to keep the current table slim, with a unique primary key that isn't augmented by some artificial extra key. There are a number of possible designs.
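One of those possible designs, sketched very roughly: keep the current table slim with its natural primary key, and push superseded or deleted versions into a history table where the validity range becomes part of the key (names invented for illustration):

```sql
-- Live data only; the primary key stays simple and unique.
CREATE TABLE employees (
    id     bigint  PRIMARY KEY,
    name   text    NOT NULL,
    salary numeric NOT NULL
);

-- Superseded and deleted versions; id alone is no longer unique,
-- so the end of the validity range joins the key.
CREATE TABLE employees_history (
    id         bigint      NOT NULL,
    name       text        NOT NULL,
    salary     numeric     NOT NULL,
    valid_from timestamptz NOT NULL,
    valid_to   timestamptz NOT NULL,              -- when this version stopped being current
    deleted    boolean     NOT NULL DEFAULT false, -- distinguishes a delete from an update
    PRIMARY KEY (id, valid_to)
);
```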
Soft-delete is a common enough ask that it's probably worth putting the best CS/database minds to developing some OOTB feature.
In my experience, usually along the lines of "what was the state of the world?" (valid-time as-of query) instead of "what was the state of the database?" (system-time as-of query).
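For anyone unfamiliar with the distinction, a hedged sketch against a hypothetical bitemporal `prices` table, where `valid_from`/`valid_to` track when a price was true in the world and `recorded_at`/`superseded_at` track when the database believed it (open-ended rows store 'infinity'):

```sql
-- "What was the state of the world on 2024-01-01?"  (valid-time as-of)
SELECT * FROM prices
WHERE valid_from <= timestamptz '2024-01-01'
  AND valid_to   >  timestamptz '2024-01-01';

-- "What did the database say on 2024-01-01?"  (system-time as-of)
SELECT * FROM prices
WHERE recorded_at   <= timestamptz '2024-01-01'
  AND superseded_at >  timestamptz '2024-01-01';
```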
Storage is cheap. Never delete data.
We've switched to CDC (change data capture) in Postgres, which emits into another (non-replicated) write-optimized table. The replication connection maintains a 'subject' variable to provide audit context for each INSERT/UPDATE/DELETE. So far, CDC has worked very well for us in this manner (Elixir / Postgrex).
I do think soft-deletes have their place in this world, maybe for user-facing "restore deleted" features. I don't think compliance or audit trails are the right place for them however.
Aside: another idea that I've kicked around for event-driven databases is to just use a database like SQLite and copy/wipe the whole thing as necessary after an event, or after the work related to that database. For example, all validation/chain-of-custody info for ballot signatures... there's not much point in having it all online or active, or even mixed in with other ballot initiatives, and the schema can change with the app as needed for new events. Just copy that file and you have your archive. Compress the file, even, and have it hard-archived and backed up if needed.
The data archive serialized the schema of the deleted object, representing the schema at that point in time.
But fast-forward through some schema changes, and now your system has to migrate the archived objects to the current schema?
Of course, as always, it depends on the system and how the archive is used. That's just my experience. I can imagine that if there are more tools or features built around the archive, the situation might be different.
I think maintaining schema changes and migrations on archived objects can be tricky in its own ways, even kept in the live tables with an 'archived_at' column, especially when objects span multiple tables with relationships. I've worked on migrations where really old archived objects just didn't make sense anymore in the new data model, and figuring out a safe migration became a difficult, error-prone project.
Some more rules to keep it under control:
Partition table has to be append-only. Duh.
Recovering from a delete needs to be done in the application layer. The archive is meant to be a historical record, not an operational data store. Also by the time you need to recover something, the world may have changed. The application can validate that restoring this data still makes sense.
If you need to handle updates, treat them as soft deletes on the source table. The trigger captures the old state (before the update) and the operation continues normally. Your application can then reconstruct the timeline by ordering archive records by timestamp.
Needless to say, make sure your trigger fires BEFORE the operation, not AFTER. You want to capture the row state before it's gone. And keep the trigger logic dead simple as any complexity there will bite you during high-traffic periods.
For the partition strategy, I've found monthly partitions work well for most use cases. Yearly if your volume is low, daily if you're in write-heavy territory. The key is making sure your common queries (usually "show me history for entity X" or "what changed between dates Y and Z") align with your partition boundaries.
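Putting those rules together, a sketch of what the archive side might look like in Postgres (names are placeholders; the monthly partitions would be created ahead of time by a scheduled job):

```sql
-- Append-only archive, range-partitioned by month of the archive timestamp.
CREATE TABLE orders_archive (
    op          text        NOT NULL,
    archived_at timestamptz NOT NULL DEFAULT now(),
    payload     jsonb       NOT NULL              -- full old row, schema-agnostic
) PARTITION BY RANGE (archived_at);

CREATE TABLE orders_archive_2024_06 PARTITION OF orders_archive
    FOR VALUES FROM ('2024-06-01') TO ('2024-07-01');

-- Dead-simple BEFORE trigger: capture the old row, then let the operation proceed.
CREATE OR REPLACE FUNCTION orders_capture() RETURNS trigger AS $$
BEGIN
    INSERT INTO orders_archive (op, payload) VALUES (TG_OP, to_jsonb(OLD));
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;   -- any non-NULL value lets the DELETE go ahead
    END IF;
    RETURN NEW;       -- for UPDATE, keep the new values
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_capture_trg
    BEFORE UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE FUNCTION orders_capture();
```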
“Delete” “archive” “hide” are the type of actions a user typically wants, each with their own semantics specific to the product. A flag on the row, a separate table, deleting a row, these are all implementation options that should be led by the product.
Users generally don’t even know what a database record is. There is no reason that engineers should limit their discussions of implementation details to terms a user might use.
> “Delete” “archive” “hide” are the type of actions a user typically wants, each with their own semantics specific to the product.
Users might say they want “delete”, but then also “undo”, and suddenly we’re talking about soft delete semantics.
> A flag on the row, a separate table, deleting a row, these are all implementation options that should be led by the product.
None of these are terms an end user would use.
I've worked for a company where some users managed very personal information on behalf of other users, like, sometimes, very intimate data, and I always fought product on soft deletion.
Users are adults, and when part of their job is being careful with the data _they_ manage and _they_ are legally responsible for, I don't feel like the software owes them anything other than clear information about what is going to happen when they click "CONFIRM DELETION".
"Archive" is a good pattern for those use cases. It's what have been used for decades for OS "Recycle Bin". Why not call it Delete if you really want to but in this case, bring a user facing "Recycle Bin" interface and be clear than anything x days old will be permanently deleted.
"Undo" may work as shorthand for "whatever the best reversing actions happen to be", but as any system grows it stops being simple.
I read a blog about a technical topic aimed at engineers, not customers.
It's fairly common in some industries to get support requests to recover lost data.
And perf problems are only speculative until you actually have them. Premature optimization and all that.
This works especially well in cases where you don’t want to waste CPU/memory scanning soft-deleted records every time you do a lookup.
And avoids situations where app/backend logic forgets to apply the “deleted: false” filter.
Strange. I've only ever heard of legal requirements preventing deletion of things you'd expect could be fully deleted (in case they're needed as evidence at trial or something).
The data subject to the regulation had a high potential for abuse. Automated anti-retention limits the risk and potential damage.
If you leave a comment on a forum, and then delete it, it may be marked as soft-deleted so that it doesn't appear publicly in the thread anymore, but admins can still read what you wrote for moderation/auditing purposes.
On the other hand, if you send a privacy deletion request to the forum, they would be required to actually fully delete or anonymize your data, so even admins can no longer tie comments that you wrote back to you.
Most social media sites probably have to implement both of these processes/systems.
Unless there’s a regulatory requirement (which there currently isn’t in any jurisdiction I’ve heard of), that’s a perfectly acceptable response.
You keep 99% and soft-delete 1%? Use some sort of deleted flag (while I have not tried it, whalesalad's suggestion of a view sounds excellent). You delete 99% and keep 1%? Move it!
Now instead of chasing down different systems and backups, you can simply ensure your archival process runs regularly and you should be good.
However after 15 years I prefer to just back up regularly, have point in time restores and then just delete normally.
The amount of times I have “undeleted” something are few and far between.
> However after 15 years I prefer to just back up regularly, have point in time restores and then just delete normally.
> The amount of times I have “undeleted” something are few and far between.
Similar take from me. Soft deletes sorta make sense if you have a very simple schema, but the biggest problem I have is that a soft delete leads to broken-ness: some other table now has a reference to a record in the target table that is not supposed to be visible. IOW, DB referential integrity is out the window, because we can now have references to records that should not exist!
My preferred way (for now, anyway) is to copy the record to a new audit table and nuke it in the target table in a single transaction (sketched below). If the delete fails, we can at least log the fact that some FK is preventing the deletion.
With soft deletes, all sorts of logic rules and constraints are broken.
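A sketch of that copy-then-delete in a single statement, using a data-modifying CTE so it's atomic (hypothetical `widgets` and `widgets_audit` tables, where the audit table has the same columns plus a `deleted_at` timestamp):

```sql
-- DELETE ... RETURNING feeds the audit copy; the whole statement is atomic.
-- If a foreign key still references the row, the delete fails and nothing is copied.
WITH gone AS (
    DELETE FROM widgets
    WHERE id = 42
    RETURNING *
)
INSERT INTO widgets_audit
SELECT g.*, now() AS deleted_at
FROM gone AS g;
```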
But this is such a huge PITA, because you constantly have to mind whether any given object has this setup or not, and what if related objects have different start/end dates? And something like a scheduled raise to $22/hour next year can get funny if I then try to insert that just for July it will be $24/hour: this would take my single record for next year and split it into two, and then you've got to figure out which gets the original ID and which is the new row.
Another alternative to this is a pattern where you store the current state and separately you store mutations. So you have a compensation table and a compensation_mutations table which says how to evolve a specific row in a compensation table and when. The mutations for anything in the future can be deleted but the past ones cannot which lets you reconstruct who did what, when, and why. But this also has drawbacks. One of them is that you can’t query historical data the same way as current data. You also have to somehow apply these mutations (cron job? DB trigger?)
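A rough sketch of that pattern, with column names guessed for illustration (the apply step here is a periodic job run inside one transaction, and it assumes at most one pending mutation per row is due at a time):

```sql
-- Current state.
CREATE TABLE compensation (
    employee_id bigint  PRIMARY KEY,
    hourly_rate numeric NOT NULL
);

-- Append-only log of how and when each row should change, and by whom.
CREATE TABLE compensation_mutations (
    id           bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    employee_id  bigint      NOT NULL REFERENCES compensation (employee_id),
    new_rate     numeric     NOT NULL,
    effective_at timestamptz NOT NULL,   -- future mutations may be deleted, past ones may not
    created_by   text        NOT NULL,
    reason       text,
    applied_at   timestamptz             -- set once the periodic job has applied it
);

-- Periodic apply step (cron job or similar), run in one transaction:
UPDATE compensation AS c
SET hourly_rate = m.new_rate
FROM compensation_mutations AS m
WHERE m.employee_id = c.employee_id
  AND m.effective_at <= now()
  AND m.applied_at IS NULL;

UPDATE compensation_mutations
SET applied_at = now()
WHERE effective_at <= now()
  AND applied_at IS NULL;
```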
And of course there are database extensions that allow soft deletes but I have never tried them for vague portability reasons (as if anyone ever moved off Postgres).
We have an offline-first infrastructure that replicates the state to possibly offline clients. Hard deletes were causing a lot of fun issues with conflicts, where a client could "resurrect" a deleted object. Or deletion might succeed locally but fail later because somebody added a dependent object. There are ways around that, of course, but why bother?
Soft deletes can be handled just like any regular update. Then we just periodically run a garbage collector to hard-delete objects after some time.