I am on the receiving end: I have to parse CSV generated by various (very expensive, very complicated) eCAD software packages. And it's often garbage. Those expensive software packages trip on things like escaping quotes. There is no way to recover a CSV line that has an unescaped double quote.
I can't point to a strict spec and say "you are doing this wrong", because there is no strict spec.
Then there are the TSV and semicolon-separated variants.
Did I mention that field quoting was optional?
And then there are banks, which take this to another level. My bank (mBank), which is known for levels of programmer incompetence never seen before (just try the mobile app), generates CSVs that are supposed to "look" like paper documents. So, the first 10 or so rows will be a "letterhead", with addresses and stuff in various random columns. Then there will be your data, but they will format currency values as prettified strings, for example "34 593,12 USD", instead of producing one column with a number and another with the currency.
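For what it's worth, turning those prettified strings back into usable values is doable; here's a rough Python sketch for that specific "34 593,12 USD" style (the function name and regex are my own, not anything the bank provides):

import re
from decimal import Decimal

def parse_pretty_amount(s):
    # Split "34 593,12 USD" into an amount and a currency code; the
    # thousands separator may be a regular or non-breaking space.
    m = re.fullmatch(r'([\d\s\u00a0,]+?)\s*([A-Z]{3})', s.strip())
    if not m:
        raise ValueError(f"unrecognized amount: {s!r}")
    amount, currency = m.groups()
    amount = amount.replace('\u00a0', '').replace(' ', '').replace(',', '.')
    return Decimal(amount), currency

print(parse_pretty_amount("34 593,12 USD"))  # (Decimal('34593.12'), 'USD')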
Free database like Postgres? Perfect every time.
Big complex 6-figure e-discovery system? Apparently written by someone who has never heard of quoting, escaping or the difference between \n and \r and who thinks it's clever to use 0xFF as a delimiter, because in the Windows-1252 code page it looks like a weird rune and therefore "it won't be in the data".
Any time we were allowed to actually test one of the "enterprise" platforms, we'd break it in a few minutes. And I don't mean by being pathologically abusive, I mean stuff like "let's see if it can correctly handle a UTF-8 BOM...oh no, it can't".
It's because above a certain size, system projects are captured by the large consultancy shops, who eat the majority of the price in profit and management overhead...
... and then send the coding work to a lowest-cost someone who has never heard of quoting, etc.
And it's a vicious cycle, because the developers in those shops that do learn and mature quickly leave for better pay and management.
(Yes, there's usually a shit hot tiger team somewhere in these orgs, but they spend all their time bailing out dumpster fires or landing T10 customers. The average customer isn't getting them.)
So I don't buy the consultancy company sentiment, it always boils down to engineers and incentives.
In my experience, smaller ones tend to align incentives better.
Once they grow past a certain size though, it's a labor arbitrage game. Bill client X, staff with resources costing Y (and over-represented), profit = X-Y, minimize Y to maximize profit.
PwC / IBM Global Services wasn't offering the best and brightest. (Outside of aforementioned tiger teams)
And the client was very, very happy with the quality, and that we didn't fight for promotions and could maintain very important but promotion-poor projects. Up to the point that the client trusted us enough to hand a couple of projects over to my shop completely. When you don't need to fight for promotions, code quality also improves.
There would have been many better separators... but good idea to bring formatting into it as well...
That of course means that you have to translate these scripts depending on the locale set in your office suite, otherwise they are full of syntax errors...
Many European programming languages use '.' to end statements but ';' to separate arguments; Prolog (from France), for example.
To be fair, that's not a problem with CSV but with the provider's lack of data literacy.
Option a worked fine so long as none of the names in the spreadsheet had any non-ASCII characters.
Option d was by some measures the worst (and was definitely the largest file size), but it did seem to consistently work in Excel and Libre Office. In fact, they all worked without any issue in Libre Office.
Have you tried RFC 4180?
-different delimiters (comma, semi-colon, tab, pipe etc)
-different encodings (UTF8, UTF16 etc)
-different line endings (CR, LF, CR+LF)
-ragged rows
-splitting and merging columns
And much more besides.
However, if you have either:
-line feeds and/or carriage returns in data values, but no quoting
or
-quoting, but quotes in data values aren't properly handled
Then you are totally screwed and you have my sympathies!
I use the industry standard that everyone understands: ECMA-376, ISO/IEC 29500 aka .xlsx.
Nobody has any problems producing or ingesting .xlsx files. The only real problem is the confusion between numbers and numeric text that happens when people use excel manually. For machine to machine communication .xlsx has never failed me.
https://learn.microsoft.com/en-us/office/troubleshoot/excel/...
Now you might argue that ECMA-376 accounts for this, because it has a `date1904` flag, which has to be 0 for 1900-based dates and 1 for 1904-based dates. But what does that really accomplish if you can’t be sure that vendors understand subtleties like that if they produce or consume it? Last time I checked (maybe 8 years ago), spreadsheets created on Windows and opened on Mac still shifted dates by four years, and the bug was already over twenty years old at that time.
And the year-1904 issue is just the one example that I happen to know.
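For the record, here's a sketch of what that flag changes, assuming you already know the workbook's date1904 setting (the helper name is mine); the two epochs are 1,462 days apart, which is exactly the four-year shift people see:

from datetime import date, timedelta

def excel_serial_to_date(serial: int, date1904: bool) -> date:
    if date1904:
        # 1904 system: serial 0 is 1904-01-01.
        return date(1904, 1, 1) + timedelta(days=serial)
    # 1900 system: serial 1 is 1900-01-01, and Excel's phantom
    # 1900-02-29 (serial 60) means later serials are off by one.
    if serial >= 60:
        serial -= 1
    return date(1899, 12, 31) + timedelta(days=serial)

print(excel_serial_to_date(45292, date1904=False))  # 2024-01-01
print(excel_serial_to_date(43830, date1904=True))   # 2024-01-01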
I have absolutely zero confidence in anything that has touched, or might have touched, MS Excel with anything short of a ten-foot pole.
* Don't mind me extending 'normal' here to include human-written numbers with thousand separators.
For binary files, yeah but you don't see CSV used there anyway.
But putting that aside, if the control chars are not text, then you sacrifice human-readability and human-writability. In which case, you may as well just use a binary format.
It isn't good enough to say "but people don't/won't/shouldn't do that", because it will just happen regardless. I've seen nested CSV in real-life data.
Compare to the zero-terminated strings used by C, one legacy of which is that PostgreSQL doesn't quite support UTF-8 properly, because it can't handle a 0 byte in a string, because 0 is "special" in C.
This is a different thing altogether from using "disallowed" control characters, which is an attempt to avoid escaping altogether - an attempt which I was arguing is doomed to fail.
[1]: yes, I know about RFC 4180. But csvs in the wild often don't follow it.
That footprint seems to amount to dozens of variations that you have to find a library for?
CSV are universal though, text kind of like markdown, and that is my intended main point.
But CSV under controlled circumstances is very simple.
And speaking of Wintendo, the bonus is often that you can go straight from CSV to Excel presentation for the middle management.
Unlike XML or JSON, there isn't a document defining the grammar of well-formed or valid CSV files, and there are many flavours that are incompatible with each other in the sense that a reader for one flavour would not be suitable for reading the other and vice versa. Quoting, escaping, and UTF-8 support are particular problem areas, but also you cannot tell programmatically whether line 1 contains column header names or already data (you will have to make an educated guess, but there are ambiguities in it that cannot be resolved by a machine).
Having worked extensively with SGML for linguistic corpora, with XML for Web development and recently with JSON, I would say that programmatically JSON is the most convenient to use regarding client code, but its lack of types also makes it less broadly useful than SGML, which is rightly used by e.g. airlines for technical documentation and by digital humanities researchers to encode/annotate historic documents, for which it is very suitable, but which programmatically puts more burden on developers. You can't have it all...
XML is simpler than SGML, has perhaps the broadest scope and a good software support stack (mostly FOSS), but it has been abused a lot (nod to Java coders: Eclipse, Apache UIMA). Then again, a format is not responsible for how people use or abuse it. As usual, the best developers know the pros and cons and make good-taste judgments about what to use each time, but some people go ideological.
(Waiting for someone to write a love letter to the infamous Windows INI file format...)
jq supports several pseudo-JSON formats that are quite useful like record separator separated JSON, newline separated JSON. These are obviously out of spec, but useful enough that I've used them and sometimes piped them into a .json file for storage.
Also, encoding things like IEEE NaN/Infinity, and raw byte arrays has to be in proprietary ways.
The industry is so chaotic now we keep giving the same patterns different names, adding to the chaos.
That’s unambiguously allowed by the JSON spec, because it’s just a grammar. The semantics are up to the implementation.
From ECMA-404[1] in section 6:
> The JSON syntax does not impose any restrictions on the strings used as names, does not require that name strings be unique, and does not assign any significance to the ordering of name/value pairs.
That IS unambiguous.
And for more justification:
> Meaningful data interchange requires agreement between a producer and consumer on the semantics attached to a particular use of the JSON syntax. What JSON does provide is the syntactic framework to which such semantics can be attached
> JSON is agnostic about the semantics of numbers. In any programming language, there can be a variety of number types of various capacities and complements, fixed or floating, binary or decimal.
> It is expected that other standards will refer to this one, strictly adhering to the JSON syntax, while imposing semantics interpretation and restrictions on various encoding details. Such standards may require specific behaviours. JSON itself specifies no behaviour.
It all makes sense when you understand JSON is just a specification for a grammar, not for behaviours.
[1]: https://ecma-international.org/wp-content/uploads/ECMA-404_2...
I think this is outdated? I believe that the order is preserved when parsing into a JavaScript Object. (Yes, Objects have a well-defined key order. Please don't actually rely on this...)
If I'm not mistaken, this is the primary point:
> Valid JSON text is a subset of the ECMAScript PrimaryExpression syntax. Step 2 verifies that jsonString conforms to that subset, and step 10 asserts that that parsing and evaluation returns a value of an appropriate type.
And in the algorithm
c. Else,
i. Let keys be ? EnumerableOwnProperties(val, KEY).
ii. For each String P of keys, do
1. Let newElement be ? InternalizeJSONProperty(val, P, reviver).
2. If newElement is undefined, then
a. Perform ? val.[[Delete]](P).
3. Else,
a. Perform ? CreateDataProperty(val, P, newElement).
If you theoretically (not practically) parse a JSON file into a normal JS AST then loop over it this way, because JS preserves key order, it seems like this would also wind up preserving key order. And because it would add those keys to the final JS object in that same order, the order would be preserved in the output.

> (Yes, Objects have a well-defined key order. Please don't actually rely on this...)
JS added this in 2009 (ES5) because browsers already did it and loads of code depended on it (accidentally or not).
There is theoretically a performance hit to using ordered hashtables. That doesn't seem like such a big deal with hidden classes except that `{a:1, b:2}` is a different inline cache entry than `{b:2, a:1}` which makes it easier to accidentally make your function polymorphic.
In any case, you are paying for it, you might as well use it if (IMO) it makes things easier. For example, `let copy = {...obj, updatedKey: 123}` is relying on the insertion order of `obj` to keep the same hidden class.
I-JSON (short for "Internet JSON") is a restricted profile of JSON designed to maximize interoperability and increase confidence that software can process it successfully with predictable results.
So it's not JSON, but a restricted version of it.
I wonder if use of these restrictions is popular. I had never heard of I-JSON.
This is also an issue due to the way the order of keys works in JavaScript, too.
> record separator separated JSON, newline separated JSON.
There is also JSON with no separators, although that will not work very well if any of the top-level values are numbers.
> Also, encoding things like IEEE NaN/Infinity, and raw byte arrays has to be in proprietary ways.
Yes, as well as non-Unicode text (including (but not limited to) file names on some systems), and (depending on the implementation) 64-bit integers and big integers. Possibly also date/time.
I think DER avoids these problems. You can specify whether or not the order matters, you can store Unicode and non-Unicode text, NaN and Infinity, raw byte arrays, big integers, and date/time. (It avoids some other problems as well, including canonization (DER is already in canonical form) and other issues. Although, I have a variant of DER that avoids some of the excessive date/time types and adds a few additional types, but this does not affect the framing, which can still be parsed in the same way.)
A variant called "Multi-DER" could be made up, which is simply concatenating any number of DER files together. Converting Multi-DER to BER is easy just by adding a constant prefix and suffix. Converting Multi-DER to DER is almost as easy; you will need the length (in bytes) of the Multi-DER file and then add a prefix to specify the length. (In none of these cases does it require parsing or inspecting or modifying the data at all. However, converting the JSON variants into ordinary JSON does require inspecting the data in order to figure out where to add the commas.)
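If it helps to see it, here's a rough Python sketch of those two conversions, under the assumption that the prefix in question is an ASN.1 SEQUENCE header (tag 0x30); BER permits an indefinite length, so a constant prefix and suffix suffice, while DER needs the definite length encoded:

def multi_der_to_ber(blob: bytes) -> bytes:
    # SEQUENCE with indefinite length: constant prefix plus end-of-contents suffix.
    return b'\x30\x80' + blob + b'\x00\x00'

def der_length(n: int) -> bytes:
    # Definite-length encoding: short form below 128, long form otherwise.
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, 'big')
    return bytes([0x80 | len(body)]) + body

def multi_der_to_der(blob: bytes) -> bytes:
    return b'\x30' + der_length(len(blob)) + blob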
integer = -? (digit | onenine digit+)

https://json.org/

// MAX_SAFE_INTEGER is actually 9007199254740991, which is 16 digits.
// You can instead check if it's exactly 16 digits and compare the string one digit at a time if absolute precision is desired.
const bigIntReviver = (key, value, context) =>
  typeof value === 'number' && Math.floor(value) === value && context.source.length > 15
    ? BigInt(context.source)
    : value

const jsonWithBigInt = x => JSON.parse(x, bigIntReviver)

Generally, I'd rather throw if a number is unexpectedly too big, otherwise you will mess up the types throughout the system (the field may not be monomorphic) and will outright fail if you try to use math functions not available to BigInts.

https://caniuse.com/mdn-javascript_builtins_json_parse_reviv...
Absent in Safari though
Naturally not that hard to write a custom JSON parser but the need itself is a bad thing.
https://github.com/zloirock/core-js#jsonparse-source-text-ac...
What you are probably thinking of is the context parameter of the reviver callback. That is relatively recent and mostly a qol improvement
The issue it solves is a big one though, since without it the JSON.parse functionality cannot parse numbers that are larger than 64bit float numbers (f.ex. bigints).
“No one owns CSV. It has no real specification (yes, I know about the controversial ex-post RFC 4180), just a set of rules everyone kinda agrees to respect implicitly. It is, and will forever remain, an open and free collective idea.”
I actually miss that. It was nice when settings were stored right alongside your software, instead of being left behind all over a bloated registry. And the format was elegant, if crude.
I wrote my own library for encoding/writing/reading various datatypes and structure into ini's, in a couple different languages, and it served me well for years.
Really? I think the idea of a central, generic, key-value pair database for all the settings on a system is probably the most elegant reasonable implementation there could be.
The initial implementation of the Windows Registry wasn't good. It was overly simplistic and pretty slow. Though the "bloat" (whatever that means) of the registry hasn't been an actual issue in over 20 years. The only people invested in convincing you "it's an issue" are the CCleaner type of software that promise to "speed up your computer" if you just pay $6.99.
How many rows do you need in a sqlite database for it to be "bloated"?
It doesn't feel that way to me: it's neither simple to read nor to write. I suppose that that's a builtin problem due to tree representation, which is something that INI files were never expected to represent.
TBH, I actually prefer the various tree representation workarounds used by INI files: using whitespace to indicate child nodes stops being readable once you have more than a screenful of children in a node.
YAML simple? Its spec is larger than XML's... Parsing of numbers and strings is ambiguous; values with leading zeros are not strings but octal (implicit conversion...). Lists as keys? Oh ffs, and you said readable. And do not get me started about "Yes" being a boolean; it reminds me of the MS Access localizations which had other decimal values for true and [local variant of true] (1 vs -1).
Writable? Even worse. I think I have never been able to write a YAML file without errors. But that might just be me, XML is fine though while unreadable.
things:
  - title: "A thing"
    item_id: "deadbeef-feb1-4e8c-b61c-dd9a7a9fffff"
    is_active: true
    favorite_formats:
      - yml
      - ini
  - title: "Another thing"
    item_id: "deadbeef-feb1-3333-4444-dd9a7a9fffff"
    is_active: false
    favorite_formats:
      - mp3
      - wav
Just because you can use it to create a monstrosity doesn't prevent it from being useful for simple configuration. Basically, it's just prettier JSON.

I've been looking in the specs and I can't find the link to https://yaml.org/type/
https://github.com/medialab/xan/blob/master/docs/LOVE_LETTER...
Also these people have only ever had to deal with the American Excel localization.
So yeah, with the caveat of "only ever use Excel and only ever the American edition" CSV is pretty nice.
That said, I don't really know of any alternative that won't be handled even worse by my colleagues...
> Unlike XML or JSON, there isn't a document defining the grammar of well-formed or valid CSV files,
There is, actually, RFC 4180 IIRC.
> there are many flavours that are incompatible with each other in the sense that a reader for one flavour would not be suitable for reading the other and vice versa.
"There are many flavours that deviate from the spec" is a JSON problem too.
> you cannot tell programmatically whether line 1 contains column header names or already data (you will have to make an educated guess but there ambiguities in it that cannot be resolved by machine).
Also a problem in JSON
> Quoting, escaping, UTF-8 support are particular problem areas,
Sure, but they are no more and no less of a problem in JSON as well.
There aren't vast numbers of different JSON formats. There's practically one and realistically maybe two.
Headers are in each line, utf8 has never been an issue for me and quoting and escaping are well defined and obeyed.
This is because for datasets, almost exclusively, the file is machine written and rarely messed with.
CSV files have all kinds of separators and quote characters; some parsers don't accept multi-line fields and some do; people sort files, which mostly works until there's a multi-line field. All kinds of line endings, encodings, and mixed encodings where people have combined files.
I tried using ASCII record separators after dealing with so many issues with commas, semicolons, pipes, tabs etc and still data in the wild had these jammed into random fields.
Lots of these things don't break when you hit the issue either, the parsers happily churn on with garbage data, leading to further broken datasets.
Also they're broken for clients if the first character is a capital I.
> Headers are in each line
This might be my old “space and network cost savings” reflex, which is a lot less necessary these days, kicking in, but that feels inefficient. It also gives rise to not knowing the whole schema until you read the whole dataset (which might be multiple files), unless some form of external schema definition is provided.
Having said that, I accept that JSON has advantages over CSV, even if all that is done is translating a data-table into an array of objects representing one row each.
> utf8 has never been an issue for me
The main problem with UTF8 isn't with CSV generally, it is usually, much like the “first column is called ID” issue, due to Excel. Unfortunately a lot of people interact with CSVs primarily with Excel, so it gets tarred with that brush by association. Unless Excel sees the BOM sequence at the start of a CSV file, which the Unicode standards recommend against for UTF8, it assumes its characters are using the Win1252 encoding (almost, but not quite, ISO-8859-1).
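When you're the one producing the file, that particular Excel quirk is easy to dodge; a minimal Python sketch (file name and rows are made up):

import csv

# "utf-8-sig" writes the UTF-8 BOM, which is what makes Excel pick UTF-8
# instead of assuming Windows-1252.
with open('report.csv', 'w', newline='', encoding='utf-8-sig') as f:
    csv.writer(f).writerows([['name', 'city'], ['Zoë', 'Århus']])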
> Csv files have all kinds of separators
I've taken to calling them Character Separated Value files, rather than Comma, for this reason.
JSONL is handy, JSON that's in the form {data: [...hundred megs of lines]} is annoying for various parsers.
I'm quite a fan of parquet, but never expect to receive that from a client (alas).
One reason this became common was a simple protection against json hijacking: https://haacked.com/archive/2009/06/25/json-hijacking.aspx/
It's what I'd pick for tabular data exchange.
A recent problem I solved with it and duckdb allowed me to query and share a 3M record dataset. The size? 50M. And my queries all ran subsecond. You just aren't going to get that sort of compression and query-ability with a csv.
I've dealt with incorrect CSVs numerous times, never with incorrect JSON, but, of the times I know what was happening on the other system, each time the CSV was from some in house (or similar) implementation of dumping a SQL output (or similar) into a text file as an MVP. JSON was always using some library.
If so, that's all the more reason to love CSV as it stands guard for JSON. If CSV didn't exist, we would instead have broken JSON implementations. (JSON and XML would likely then share a similar relationship.)
One project I worked on involved a vendor promising to send us data dumps in "CSV format". When we finally received their "CSV" we had to figure out how to deal with (a) global fields being defined in special rows above the header row, and (b) a two-level hierarchy of semicolon-delimited values nested within comma-delimited columns. We had to write a custom parser to complete the import.
Yes, we chose ARFF format, which is idiosyncratic yet well-defined back in the old data mining days.
I mean, right now, the data interchange format between multiple working systems is CSV; think payment systems, inter-bank data interchange, ERP systems, CRM systems, billing systems ... the list goes on.
I just recently had a coffee with a buddy who's a salesman for some enterprise system: of the most common enterprise systems we recently worked with (SAP type things, but on smaller scales), every single one of them had CSV as the standard way to get data between themselves and other systems.
And yet, they work.
The number of people uploading excel files to be processed or downloading excel files for local visualistation and processing would floor you. It's done multiple times a day, on multiple systems, in multiple companies.
And yet, they work.
I get your argument though - a JSON array of arrays can represent everything that CSV can, and is preferable to CSV, and is what I would choose when given the choice, but the issues with using that are not going to be fewer than issues with CSV using RFC 4180.
That is not my experience at all. I've been processing CSV files from financial institutions for many years. The likelihood of brokenness must be around 40%. It's unbelievable.
The main reason for this is not necessarily the CSV format as such. I believe the reason is that it is often the least experienced developers who are tasked with writing export code. And many inexperienced developers seem to think that they can generate CSV without using a library because the format is supposedly so simple.
JSON is better but it doesn't help with things like getting dates right. XML can help with that but it has complexities that people get wrong all the time (such as entities), so I think JSON is the best compromise.
Can't they?
def excel_csv_of(rows):
    for row in rows:
        for i, field in enumerate(row):
            if i:
                yield ','
            yield '"'
            for c in field:
                yield '""' if c == '"' else c
            yield '"'
        yield '\n'
I haven't tested this, even to see if the code parses. What did I screw up?

If my experience reflects a relevant sample then the answer is that most can, but a very significant minority fails at the job (under the given working conditions).
Whether or not _you_ can is a separate question. I don't see anything wrong with your code. It does of course assume that whatever is contained in rows is correct. It also assumes that the result is correctly written to a file without making any encoding mistakes or forgetting to flush the stream.
Not using name value pairs makes CSV more prone to mistakes such as incorrect ordering or number of values in some rows, a header row that doesn't correspond with the data rows, etc. Some export files are merged from multiple sources or go through many iterations over many years, which makes such mistakes far more likely.
I have also seen files that end abruptly somewhere in the middle. This isn't specific to CSV but it is specific to not using libraries and not using libraries appears to be more prevalent when people generate CSV.
You'd be surprised how many CSV files are out there where the developer tried to guess incorrectly whether or not a column would ever have to be escaped. Maybe they were right initially and it didn't have to be escaped but then years later something causes a change in number formats (internationalisation) and bang, silent data corruption.
Prioritising correctness and robustness over efficiency as you have done is the best choice in most situations. Using a well tested library is another option to get the same result.
Most people expect something like `12,,213,3` instead of `"12","213","3"` which yours might give.
https://en.wikipedia.org/wiki/Comma-separated_values#Basic_r...
With respect to "the same fields in the same order", no, although you may or may not feed the CSV to an application that has such an expectation. But if you apply it to data like [("Points",),(),("x","y"),("3","4"),("6","8","10")] it will successfully preserve that wonky structure in a file Excel can ingest reliably. (As reliably as Excel can ingest anything, anyway, since Excel has its own Norway problem.)
It's true that it's possible to produce more optimized output, but I didn't claim that the output was optimal, just correct.
Using generators is necessary to be able to correctly output individual fields that are many times larger than physical memory.
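For what it's worth, a minimal way to drive the generator above, assuming rows is an iterable of iterables of strings (the newline='' and encoding choices are my own, not part of the original snippet):

def write_excel_csv(path, rows):
    # newline='' so Python doesn't translate the '\n' characters we emit ourselves.
    with open(path, 'w', newline='', encoding='utf-8') as f:
        f.writelines(excel_csv_of(rows))

write_excel_csv('points.csv', [("Points",), (), ("x", "y"), ("3", "4"), ("6", "8", "10")])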
It's massively used, but the lack of adherence to a proper spec causes huge issues. If you have two systems that happen to talk properly to each other, great, but if you are as I was an entrypoint for all kinds of user generated files it's a nightmare.
CSV is the standard, sure, but it's easy to write code that produces it that looks right at first glance but breaks with some edge case. Or someone has just chosen a different separator, or quote, so you need to try and detect those before parsing (I had a list that I'd go through, then look for the most commonly appearing non-letter character).
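The standard library actually ships a version of that guessing heuristic; a rough Python sketch (the candidate list and the fallback are my own choices):

import csv

def guess_dialect(sample: str):
    try:
        # Sniffer looks at quoting and at which candidate character
        # appears most consistently across rows.
        return csv.Sniffer().sniff(sample, delimiters=',;\t|')
    except csv.Error:
        return csv.excel  # give up and assume plain comma-separated

print(guess_dialect('a;b;c\n1;2;3\n').delimiter)  # ';'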
The big problem is that the resulting semantically broken csv files often look pretty OK to someone scanning them and permissive parsers. So one system reads it in, splits something on lines and assumes missing columns are blank and suddenly you have the wrong number of rows, then it exports it. Worse if it's been sorted before the export.
Of course then there's also the issues around a lack of types, so numbers and strings are not distinguishable automatically leading to broken issues where you do want leading zeros. Again often not identified until later. Or auto type detection in a system breaking because it sees a lot of number-like things and assumes it's a number column. Without types there's no verification either.
So even properly formatted CSV files need a second place for metadata about what types there are in the file.
JSON has some of these problems too, it lacks dates, but far fewer.
> but the issues with using that are not going to be fewer than issues with CSV using RFC 4180.
My only disagreement here is that I've had to deal with many ingest endpoints that don't properly support that.
Fundamentally I think nobody uses CSV files because they're a good format. They're big, slow to parse, lack proper typing, lack columnar reading, lack fast jumping to a particular place, etc.
They are ubiquitous, just not good, and they're very easy to screw up in hard to identify or fix ways.
Finally, lots of this comes up because RFC4180 is only from *2005*.
Oh, and if I'm reading the spec correctly, RFC4180 doesn't support UTF8. There was a proposed update maybe in 2022 but I can't see it being accepted as an RFC.
And there are constant issues arising from that. You basically need a small team to deal with them in every institution that is processing them.
> I just recently had a coffee with a buddy who's a salesman for some enterprise system: of the most common enterprise systems we recently worked with (SAP type things, but on smaller scales), every single one of them had CSV as the standard way to get data between themselves and other systems.
Salesmen of enterprise systems do not care about the issues programmers and clients have. They care about what they can sell to other businessmen. That teams on both sides then waste time and money on troubleshooting is of no concern to the salesman. And I am saying that as someone who worked on an enterprise system that consumed a lot of CSV. It does not work, and the process of handling them literally sometimes involved phone calls to admins of other systems. More often than would be sane.
> The number of people uploading excel files to be processed or downloading excel files for local visualistation and processing would floor you.
That is perfectly fine as long as it is a manager downloading data so that he can manually analyze them. It is pretty horrible when those files are then uploaded to other systems.
SAP has been by far the worst. I never managed to get data out of it that were not completely garbage and needed hand crafted parsers.
Through a lot of often-painful manual intervention. I've seen it first-hand.
If an organization really needs something to work, it's going to work somehow—or the organization wouldn't be around any more—but that is a low bar.
In a past role, I switched some internal systems from using CSV/TSV to using Parquet and the difference was amazing both in performance and stability. But hey, the CSV version worked too! It just wasted a ton of people's time and attention. The Parquet version was far better operationally, even given the fact that you had to use parquet-tools instead of just opening files in a text editor.
Independent variations I have seen:
* Trailing commas allowed or not
* Comments allowed or not
* Multiple kinds of date serialization conventions
* Divergent conventions about distinguishing floating point types from integers
* Duplicated key names tolerated or not
* Different string escaping policies, such as, but not limited to "\n" vs "\x0a"
There are bazillions of JSON variations.
> Trailing commas allowed or not

The JSON spec does not allow trailing commas, although there are JSON supersets that do.
> Comments allowed or not
The JSON spec does not allow comments, although there are JSON supersets that do.
> Multiple kinds of date serialization conventions
Json spec doesn't say anything about dates. That is dependent on your application schema.
> Divergent conventions about distinguishing floating point types from integers
This is largely due to divergent ways different programming languages handle numbers. I won't say JSON handles this the best, but any file format used across multiple languages will run into problems with differences in how numbers are represented. At least there is a well defined difference between a number and a string, unlike CSV.
> Duplicated key names tolerated or not
According to the spec, they are tolerated, although the semantics of such keys is implementation defined.
> Different string escaping policies, such as, but not limited to "\n" vs "\x0a"
Both of those are interpreted as the same thing, at least according to the spec. That is an implementation detail of the serializer, not a different language.
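To make that concrete in Python (the example document is made up): by default the last duplicate silently wins, and object_pairs_hook lets you detect duplicates yourself.

import json

doc = '{"a": 1, "a": 2}'
print(json.loads(doc))  # {'a': 2} -- the last duplicate wins silently

def reject_duplicates(pairs):
    # pairs is the list of (key, value) tuples in document order.
    keys = [k for k, _ in pairs]
    if len(keys) != len(set(keys)):
        raise ValueError(f"duplicate keys: {pairs}")
    return dict(pairs)

try:
    json.loads(doc, object_pairs_hook=reject_duplicates)
except ValueError as e:
    print(e)  # duplicate keys: [('a', 1), ('a', 2)]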
RFC 4180 is not an official standard. The text of the RFC itself states:
> This memo provides information for the Internet community. It does not specify an Internet standard of any kind.
CSVs existed long before that RFC was written, and it is more a description of CSVs that are somewhat portable, not a definitive specification.
It is, and accepts it is, codifing best practices rather than defining an authoritative standard.
Typically the big difference is that there are different parsers that are less tolerant of in-spec values. ClickHouse had a more restrictive parser, and recently I've dealt with matrix.
Maybe I've been lucky for json and unlucky for csv.
Basically, Excel uses the equivalent of ‘file’ (https://man7.org/linux/man-pages/man1/file.1.html), sees the magic “ID”, and decides it's a SYLK file, even though .csv files starting with “ID” have outnumbered .SYLK files by millions for decades.
Because there is no standard to which Excel conforms (it predates standards), and there would be an outcry if Excel started rejecting files that had worked for years.
Does any software fully follow that spec (https://www.rfc-editor.org/rfc/rfc4180)? Some requirements that I doubt are commonly followed:
- “Each record is located on a separate line, delimited by a line break (CRLF)” ⇒ editing .csv files using the typical Unix text editor is complicated.
- “Spaces are considered part of a field and should not be ignored”
- “Fields containing line breaks (CRLF), double quotes, and commas should be enclosed in double-quotes” ⇒ fields containing lone carriage returns or new lines need not be enclosed in double quotes.
There is such a document: RFC 4180. It may not be a good document, but it does exist.
Haven’t been a full time Python dev in sometime though, it seems TOML has supplanted that, but I remember thinking how interesting it was that Python had a built in INI parser and serializer
When a JSON API turns out to have optional fields it usually shows through trial and error, and unlike CSV it's typically not considered a bug you can expect the API owner to fix. In CSV 'missing data' is an empty string rather than nulls or their cousins because missing fields aren't allowed, which is nice.
I also like that I can write my own ad hoc CSV encoder in most programming languages that can do string concatenation, and probably also a suitable decoder. It helps a lot in some ETL tasks and debugging. Decent CSV also maps straight to RDBMS tables, if the database for some reason fails at immediate import (e.g. too strict expectations) into a newly created table it's almost trivial to write an importer that does it.
Even though JSON may also be interpreted differently by different tools, it is a good default choice for communicating between programs.
Well, that sure beats OpenAPI lack of compatibility between stacks in the same programming language.
I think the fact one can't randomly concatenate strings and call it "valid XML" a huge bonus over the very common "join strings with comma and \r\n", non-rfc4180 compliant (therefore mostly unparseable without human/LLM interaction) garbage people often pretend is CSV.
You would think that e.g. an XML-defined WSDL with an XSD schema is well battle proven. Two years ago I encountered (and am still dealing with) a WSDL from a major banking vendor that is technically valid, but no open source library in Java (of all languages) was able to parse it successfully or generate binding classes out of the box.
Heck, flat files can end up with extreme cases, just work enough with legacy banking or regulatory systems and you will see some proper shit.
The thing is, any sort of critical integration needs to be battle tested and continuously maintained, otherwise it will eventually go bad, even a decade after implementation and regular use without issues.
Honestly, it’s fine. TOML is better if you can use it, but otherwise for simple applications, it’s fine. PgBouncer still uses INI, though that in particular makes me twitch a bit, due to discovering that if it fails to parse its config, it logs the failed line (reasonable), which can include passwords if it’s a DSN string.
1. It's tricky to parallelise processing of CSV.
2. A small amount of data corruption can have a big impact on the readability of a file (one missing or extra quote can bugger the whole thing up).
So these days for serialisation of simple tabular data I prefer plain escaping, e.g. comma, newline and \ are all \-escaped. It's as easy to serialise and deserialise as CSV but without the above drawbacks.
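A minimal sketch of that plain-escaping scheme in Python, with made-up function names; only comma, newline and backslash are special:

def encode_record(fields):
    def esc(s):
        # Escape backslash first so the other escapes aren't double-processed.
        return s.replace('\\', '\\\\').replace(',', '\\,').replace('\n', '\\n')
    return ','.join(esc(f) for f in fields) + '\n'

def decode_record(line):
    fields, cur, it = [], [], iter(line.rstrip('\n'))
    for c in it:
        if c == '\\':
            nxt = next(it)
            cur.append('\n' if nxt == 'n' else nxt)
        elif c == ',':
            fields.append(''.join(cur)); cur = []
        else:
            cur.append(c)
    fields.append(''.join(cur))
    return fields

assert decode_record(encode_record(['a,b', 'line\nbreak', 'back\\slash'])) == ['a,b', 'line\nbreak', 'back\\slash']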
If you want CSV-ish, enforce an array of strings for each record. Or go further with actual objects and non-string types.
You can even jump to an arbitrary point and then seek till you see an actual new line as it’s always a record boundary.
It’s not that CSV is an invalid format. It’s that libraries and tools to parse CSV tend to suck. Whereas JSON is the lingua franca of data.
This isn't the case. An incredible amount of effort and ingenuity has gone into CSV parsing because of its ubiquity. Despite the lack of any sort of specification, it's easily the most widely supported data format in existence in terms of tools and language support.
Yea and it's still a partially-parseable shit show with guessed values. But we can and could have and should have done better by simply defining a format to use.
So, that means that Excel in those locales uses semicolons as separators rather than the more-frequently-used-in-data commas. Probably not the decision I'd make in retrospect, but not completely stupid.
Microsoft Excel -> application/vnd.ms-excel
CSV is a text format, xls[x], json, and (mostly) xml are not.
printf "alice\007london\007uk\nbob\007paris\007france\n" > data.bsv
I'm hoping no reasonable person would ever use BEL as punctuation or decimal separator.

By the way, RS is decimal 30 (not octal '\030'). In octal, RS is '\036'. For example:
$ printf '\036' | xxd -p
1e
$ printf '\x1e' | xxd -p
1e
See also https://en.cppreference.com/w/cpp/language/ascii for confirmation.

0x1C - File Separator
0x1D - Group Separator
0x1E - Record Separator
0x1F - Unit Separator
So I guess 1F would be the "comma" and 1E would be the "newline."
I am pretty sure you shifted the meaning: the decimal separator is part of the atomic data, so it does not need a control character.
You would use 1F instead of the comma/semicolon/tab and 1E to split lines (record means line just like in SQL).
You could then use 1D to store multiple CSV tables in a single file.
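A quick sketch of that layout in Python, with the caveat already raised in the thread that the data itself must not contain these bytes:

US, RS, GS = '\x1f', '\x1e', '\x1d'   # unit, record, group separators

def encode_tables(tables):
    # tables: a list of tables; each table is a list of rows of strings.
    return GS.join(RS.join(US.join(row) for row in table) for table in tables)

def decode_tables(blob):
    return [[row.split(US) for row in table.split(RS)]
            for table in blob.split(GS)]

blob = encode_tables([[['id', 'name'], ['1', 'alice']], [['x'], ['7']]])
assert decode_tables(blob) == [[['id', 'name'], ['1', 'alice']], [['x'], ['7']]]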
This would confuse most users of CSVs; they are not programmers, and at most they use text editors and Excel.
ding! ding! ding! winner winner, chicken dinner!
Although BEL would drive me up the wall if I broke out any of my old TTY hardware.
Oh wait.
lol
I went looking at some of the more niche languages like Prolog, COBOL, RPG, APL, Eiffel, Maple, MATLAB, tcl, and a few others. All of these and more had JSON libraries (most had one baked into the standard library).
The exceptions I found (though I didn't look too far) were: Bash (use jq with it), J (an APL variant), Scratch (not exposed to users, but scratch code itself is encoded in JSON), and Forth (I could find implementations, but it's very hard to pin down forth dialects).
I do not miss dealing with csv files in the slightest.
You get this backwards. Tabular structured data to store are ubiquitous. Text as a file format is also ubiquitous because it is accessible. The only actual decisions are about whether to encode your variables as rows or columns, what the delimiter is, and other rules such as escaping etc. Vars as columns makes sense because it makes appending easier. There is a bunch of stuff that can be used for delimiters, commas being the most common; none is perfect. But from this point onwards, decisions do not really matter, and "CSV" basically covers everything from now on.

"CSV" is basically what comes naturally when you have tabular datasets and want to store them in text. CSV tooling is developed because there is a need for this way of formatting data. Whether CSV is "good" or "ugly" or whatever is irrelevant; handling data is complicated as much as the world itself is. The alternatives are either not structuring/storing the data in a tabular manner, or non-text (eg binary) formats. These alternatives exist and are useful in their own right, but don't solve the same problems.
There are at least 3 knobs to turn every time you want to parse a CSV file. There’s reasonably good tooling around this (for example, Python’s CSV module has 8 parser parameters that let you select stuff), but the fact that you have to worry about these details is itself a problem.
You said “handling data is complicated as much as the world itself is”, and I 100% agree. But the really hard part is understanding what the data means, what it describes. Every second spent on figuring out which CSV parsing option I have to change could be better spent actually thinking about the data.
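For concreteness, a sketch of those knobs as they appear in Python's csv module (the sample line and the chosen values are made up):

import csv, io

# The eight Dialect parameters the comment refers to, all made explicit.
rows = list(csv.reader(
    io.StringIO('a; "b;1";c\n'),
    delimiter=';', quotechar='"', escapechar=None,
    doublequote=True, skipinitialspace=True,
    lineterminator='\r\n', quoting=csv.QUOTE_MINIMAL, strict=False,
))
print(rows)  # [['a', 'b;1', 'c']]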
Whether there is a header or not should be specified up front, and one should not parse some unknown file blindly, because that will always end up in failure.
If you have your own serialization and your own parsing working yeah this will simply work.
But then not pushing some errors back to the user and trying to deal with everything is going to be frustrating, because the number of edge cases is almost infinite.
Handling random data is hard, saying it is a CSV and trying to support everything that comes with it is hard.
Apple macOS has had to invest enormous amounts...
Pick your distro of Linux has had to invest enormous amounts...
None of them is perfect, and any number of valid complaints can be made about any of them. None of the complaints make any of the things useless. Everyone has workarounds.
Hell, JSON has had to invest enormous amounts of effort...
Want to do the same with csv? Good luck. Delimiter? Configurable. Encoding? Configurable. Misplaced comma? No parse in JSON, in csv: might still parse, but is now semantically incorrect and you possibly won’t know until it’s too late, depending on your parser. The list goes on.
The table of contents points to a single Json object that is 20ish gb compressed
https://www.anthem.com/machine-readable-file/search/
All stock libs will fail
Scientists rename genes because Microsoft Excel reads them as dates (2020)
https://www.reddit.com/r/programming/comments/i57czq/scienti...
Before you dismiss it as 'not a language', people have argued that it is. And you can definitely program stuff in it, and so that surely makes it a language.
People keep saying this but RFC 4180 exists.
Even better, the majority of the time I write/read CSV these days I don't need to use a library or tools at all. It'd be overkill. CSV libraries are best saved for when you're dealing with random CSV files (especially from multiple sources) since the library will handle the minor differences/issues that can pop up in the wild.
It's just that people tend to use specialized tools for encoding and decoding it instead of like ",".join(row) and row.split(",")
I have seen people try to build up JSON strings like that too, and then you have all the same problems.
So there is no problem with CSV except that maybe it's too deceptively simple. We also see people trying to build things like URLs and query strings without using a proper library.
If that sounds like a lot of edge-case work keep in mind that people have been doing this for more than half a century. Lots of examples and notes you can steal.
Do you have any examples of Python, Java, or any of the other Tiobe top 40 languages breaking the JSON spec in their standard library?
In contrast, for the few of those that have CSV libraries, how many of those libraries will simply fail to parse a large number of the .csv variations out there?
Sometimes commas to delimiter, sometimes semicolons, floating point values might have dots or commas to separate fraction digits.
Not to mention text encodings, Ascii, western european character sets, or maybe utf-8 or whatever...
It's a bloody mess.
EDIT: The readme's results are from 2016, but there's more recent results (last updated 5 years ago). Of the 54 parsers /versions tested, 7 gave always the expected result per the spec (disregarding cases where the spec does not define a result).
There's some interesting examples of ambiguities here: https://seriot.ch/projects/parsing_json.html
You really super can't just split on commas for csv. You need to handle the string encodings since records can have commas occur in a string, and you need to handle quoting since you need to know when a string ends and that string may have internal quote characters. For either format unless you know your data super well you need to use a library.
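A two-line illustration of that point (the sample line is made up):

import csv, io

line = 'a,"b,c",d'
print(line.split(','))                      # ['a', '"b', 'c"', 'd']  -- wrong
print(next(csv.reader(io.StringIO(line))))  # ['a', 'b,c', 'd']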
A couple of standards that I know of do this, primarily intended for logging:
Really easy to work with in my experience.
Sure some space is usually wasted on keys but compression takes care of that.
["foo","bar",123]

That's as tabular as CSV but you now have optional types. You can even have lists of lists. Lists of objects. Lists of lists of objects…

["id", "species", "nickname"]
[1, "Chicken", "Chunky cheesecakes"]
[2, "Dog", "Wagging wonders"]
[3, "Bunny", "Hopping heroes"]
[4, "Bat", "Soaring shadows"]
With JSON those new lines are \n characters which are much easier to work with.
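A tiny sketch of that round trip in Python, using rows like the ones above (JSON-lines style: one JSON array per line):

import json

rows = [[1, "Chicken", "Chunky cheesecakes"], [2, "Dog", "Wagging wonders"]]

encoded = "\n".join(json.dumps(r) for r in rows)
decoded = [json.loads(line) for line in encoded.splitlines()]
assert decoded == rows  # newlines inside strings stay escaped as \n, so splitting on lines is safe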
CSV seemed so simple but after numerous issues, a cell with both newline and " made me realize I should keep the little hair I had left and put in the work to parse the XML.
It's not great either, with all its weird tags, but at least it's possible to parse reliably.
Just like csv you don't actually need the header row either, as long as there's convention about field ordering. Similar to proto bufs, where the field names are not included in the file itself.
[
["header1","header2"],
["1.1", ""],
[7.4, "2022-01-04"]
]
I see an array of arrays. The first and second arrays have two strings each, the last one has a float and a string. All those types are concrete.
Let's say those "1.1" and 7.4 values are supposed to be version strings. If your code is only sometimes putting quotes around the version string, the bug is in your code. You're outputting a float sometimes, but a string in others. Fix your shit. It's not your serialization format that's the problem.
If you have "7.4" as a string, and your serialization library is saying "Huh, that looks like a float, I'm going to make it a float", then get a new library, because it has a bug.
JSON is lists of lists of any length and groups of key/value pairs (basically lisp S-expressions with lots of unnecessary syntax). This makes it a superset of CSV's capabilities.
JSON fundamentally IS made to represent tabular data, but it's made to represent key-value groups too.
Why make it able to represent tabular data if that's not an intended use?
The "top-level" structure of JSON is usually an object, but it can be a list.
> JSON fundamentally IS made to represent tabular data
No, it's really not. It's made to represent objects consisting of a few primitive types and exactly two aggregate types: lists and objects. It's a textual representation of the JavaScript data model and even has "Object" in the name.
> Why make it able to represent tabular data if that's not an intended use?
It's mostly a question of specialization and ergonomics, which was my original point. You can represent tabular data using JSON (as you can in JavaScript), but it was not made for it. Anything that can represent """data""" and at least 2 nesting levels of arbitrary-length sequences can represent tabular data, which is basically every data format ever regardless of how awkward actually working with it may be.
In the same way, there are hierarchically structured datasets that can be represented by both json in hierarchical form and csv in tabular form by repeating certain variables, but if using csv would require repeating them too many times, it would be a bad idea to choose that instead of json. The fact that you can do sth does not always make it a good idea to do it. The question imo is about which way would be more natural, easy or efficient.
The reverse is true as well: being more specialized is a description of goals, not advantages.
The big advantage of JSON is that it's standardized and you can reuse the JSON infrastructure for more than just tabular data.
I'd definitely put that in my list of falsehoods programmers believe about CSV files.
> This header will contain names corresponding to the fields in the file and should contain the same number of fields as the records in the rest of the file
But of course, CSV is the wild west and there's no guarantee that any two encoders will do the same thing (sometimes, there's not even a guarantee that the same encoder will do the same thing with two different inputs).
Headers should line up with as many of the columns that contain data items as possible, and data items in a row should have a header for their respective column, but real CSV files should be assumed to have incomplete or variable-length lines.
It could use backslash escapes to denote control characters and Unicode points.
Everyone would agree exactly on what the format is, in contrast to the zoo of CSV variants.
It wouldn't have pitfalls in it, like spaces that defeat quotes
  RFC CSV      JSON strings
  a,"b c"      "a", "b c"
  a, "b c"     "a", " \" b c\""

oops; add an innocuous-looking space, and the quotes are now literal.

When these syntaxes are parsed into objects, either the type info has to be retained, or some kind of attribute tag, so they can be output back to the same form.
> make it so any consumer can parse it by splitting on newline and then ...
There is something like that called JSON-lines. It has a .org domain 'n' everything:
Remember that this does not allow arbitrary representation of serialized JSON data. But it allows for any and all JSON data as you can always roundtrip valid JSON to a compact one line representation without extra whitespace.
That is[0], if a string s of length n is valid JSON, then there is no prefix s[0..i] for i < n that is valid JSON.
So you could just consume as many bytes you need to produce a json and then start a new one when that one is complete. To handle malformed data you just need to throw out the partial data on syntax error and start from the following byte (and likely throw away data a few more times if the error was in the middle of a document)
That is [][]""[][]""[] is unambiguos to parse[1]
[0] again assuming that we restrict ourselves to string, null, boolean, array and objects at the root
[1] still this is not a good format as a single missing " can destroy the entire document.
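Python's json module can already do that consumption loop via raw_decode; a sketch (assuming, as above, no bare numbers at the root):

import json

def iter_concatenated_json(text):
    dec, i = json.JSONDecoder(), 0
    while i < len(text):
        # raw_decode returns the value plus the index where it stopped reading.
        obj, i = dec.raw_decode(text, i)
        yield obj

print(list(iter_concatenated_json('[][]""[][]""[]')))
# [[], [], '', [], [], '', []]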
utf-8 is also similarly self-correcting and so is html and many media formats.
My point was that in my made-up concatenated json format
[]"""[][][][][][][][][][][]"""[]
and
[]""[][][][][][][][][][][]""[]
are both valid and differ by only 2 bytes but have entirely different structures.
Also it is a made-up format nobody uses (if somebody were to want this they would likely disallow strings at the root level).
How do you do this simply? You read each line, and if there's an odd number of ", then you have an incomplete record, and you keep accumulating lines until the quotes balance out again. After you have the full string, parsing the fields correctly is harder, but you can do it with regex or PEGs or a disgusting state machine.
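A rough Python sketch of that line-accumulation trick (doubled "" escapes add two quotes at a time, so counting quotes still balances):

def records(lines):
    buf, quotes = [], 0
    for line in lines:
        buf.append(line)
        quotes += line.count('"')
        if quotes % 2 == 0:
            # Quotes are balanced, so this is a complete record.
            yield ''.join(buf)
            buf, quotes = [], 0
    if buf:
        yield ''.join(buf)  # trailing unbalanced record, if any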
comment: https://news.ycombinator.com/item?id=26305052
comment: https://news.ycombinator.com/item?id=39679662
"ASCII Delimited Text – Not CSV or Tab Delimited Text" post [2014]: https://news.ycombinator.com/item?id=7474600
same post [2024]: https://news.ycombinator.com/item?id=42100499
comment: https://news.ycombinator.com/item?id=15440801
(...and many more.) "This comes up every single time someone mentions CSV. Without fail." - top reply from burntsushi in that last link, and it remains as true today as in 2017 :D
You're not wrong though, we just need some major text editor to get the ball rolling and start making some attempts to understand these characters, and the rest will follow suit. We're kinda stuck at a local optimum which is clearly not ideal but also not troublesome enough to easily drum up wide support for ADSV (ASCII Delimiter Separated Values).
Many text editors offer extensions APIs, including Vim, Emacs, Notepad++. But the ideal behavior would be to auto-align record separators and treat unit separators as a special kind of newline. That would allow the file to actually look like a table within the text editor. Input record separator as shift+space and unit separator as shift+enter.
1. the field separator to be shown as a special character
2. the row separator to be (optionally) be interpreted as a linefeed
IIRC 1) is true for Notepad++, but not 2).
The characters you mention could be used in a custom delimiter variant of the format, but at that point it's back to a binary machine format.
This, of course, assumes that your input doesn’t include tabs or newlines… because then you’re still stuck with the same problem, just with a different delimiter.
Except the usages of that character will be rare and so potentially way more scary. At least with quotes and commas, the breakages are everywhere so you confront them sooner rather than later.
Perhaps a more salient example might be CSV nested in CSV. This happens all the time with XML (hello junit) and even JSON— when you plug a USB drive into my LG TV, it creates a metadata file on it that contains {"INFO":"{ \"thing\": true, <etc> }"}
/s in case :)
You need 4 new keyboard shortcuts. Use ctrl+, ctrl+. ctrl+[ ctrl+] You need 4 new character symbols. You need a bit of new formatting rules. Pretty much page breaks decorated with the new symbols. It's really not that hard.
But, like many problems in tech, the popular advice is "Everyone recognizes the problem and the solution. But, the problematic way is already widely used and the solution is not. Therefore everyone doing anything new should invest in continuing to support the problem forever."
There's also the argument of "Now you have two byte values that cannot be allowed to appear in a record under any circumstances. (E.g., incoming data from uncontrolled sources MUST be sanitized to reject or replace those bytes.)" Unless you add an escaping mechanism, in which case the argument shifts to "Why switch from CSV/TSV if the alternative still needs an escaping mechanism?"
This means that records must not contain either of those two bytes, or else the format of the table will be corrupted. And unless you're producing the data yourself, this means you have to sanitize the data before adding it, and have a policy for how to respond to invalid data. But maintaining a proper sanitization layer has historically been finicky: just look at all the XSS vulnerabilities out there.
If you're creating a binary format, you can easily design it to hold arbitrary data without escaping. But just taking a text format and swapping out the delimiters does not achieve this goal.
If you do need these values in your data, then don't use them as delimiters.
Something the industry has stopped doing, but maybe should do again, is restricting characters that can appear in data. "The first name must not contain a record separator" is a quite reasonable restriction. Even Elon Musk's next kid won't be able to violate that restriction.
Having keyboard shortcuts doesn't necessarily solve why people don't want to use that format, either.
That is not specific to EDIT.COM; they are the PC characters with the same codes as the corresponding control characters, so they appear as graphic characters. (They can be used in any program that can use PC character set.)
However, in EDIT.COM and QBASIC you can also prefix a control character with CTRL+P in order to enter it directly into the file (and they appear as graphic characters, since I think the only control characters they will handle as control characters are tabs and line breaks).
Suits are PC characters 3 to 6; these are PC characters 28 to 31 which are other shapes.
> But, like many problems in tech, the popular advice is "Everyone recognizes the problem and the solution. But, the problematic way is already widely used and the solution is not
This is unfortunately common. However, what else happens too, is disagreement about what is the problem and the solution.
Path dependence is a thing. Things that experience network effects don't get changed unless the alternative is far superior, and ASCII Delimited Text is not that superior.
Ignoring that and pushing for it anyway will at most achieve an xkcd 927.
But when looking for a picture to back up my (likely flawed) memory, Google helpfully told me that you can get a record separator character by hitting Ctrl-^ (caret). Who knew?
...There would be datasets that include those characters, and so they wouldn't be as useful for record separators. Look into your heart and know it to be true.
It requires bespoke tools for editing, and while CSV is absolute garbage it can be ingested and produced by most spreadsheet software, as well as databases.
CSV has the same failure mode. As does HTML. (But not XML)
And fully fair that you can have similar issues in other formats. I think the complaint here was that it was a bit harder, specifically because it did not trip up any of the loading code. With a big lesson learned that configs should probably either go pascal string style, where they have an expected number of items as the first part of the data, or xml style, where they have a closing tag.
Really, it is always amusing to find how many of the annoying parts of XML turned out to be somewhat more well thought out than people want to admit.
Still, fair point. And is part of why I said it is a flaw, not the flaw. Plenty of other reasons to not like YAML, to me. :D
Still, a fair point.
It seems many possible designs would've avoided the quoting chaos and made parsing sort of trivial.
In some alternate world where this "binary" format caught on, it would be a very minor issue that it isn't human readable, because everyone would have a tool that is better at reading it than humans are. (See the above mentioned non-local property of quotes where you may think you are reading rows but are actually inside a single cell.)
Makes me also wonder whether, if something like CBOR had caught on early enough, we would just be used to using something like `jq` to read it.
If you say those are contrived examples that don't matter any more then you have missed the point and will probably never acknowledge the point and there is no purpose in continuing to try to communicate.
One can only ever remember and type out just so many examples, and one can always contrive some response to any single or finite number of examples, but they are actually infinite, open-ended.
Having a least common denominator that is extremely low that works in all the infinite situations you never even thought of, vs just pretty low and pretty easy to meet in most common situations, is all the difference in the world.
I have in the past done data extractions from systems which really can't serialise properly, where the only option is to concat all the fields with some "unlikely" string like @#~!$ as a separator, then pick it apart later. Ugh.
It's not doing just this, you pick something that's likely not in the data, and then escape things properly. When writing strings you can write a double quote within double quotes with \", and if you mean to type the designated escape character you just write it twice, \\.
The only reason you go for something likely not in the data is to keep things short and readable, but it's not impossible to deal with.
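A tiny sketch of that escaping idea in Python (the choice of backslash and comma here is arbitrary, just to illustrate the rule):

def escape_field(value, delim=","):
    # Escape the escape character first, then the delimiter and newlines.
    return (value.replace("\\", "\\\\")
                 .replace(delim, "\\" + delim)
                 .replace("\n", "\\n"))

def unescape_field(value):
    out, i = [], 0
    while i < len(value):
        if value[i] == "\\" and i + 1 < len(value):
            nxt = value[i + 1]
            out.append("\n" if nxt == "n" else nxt)
            i += 2
        else:
            out.append(value[i])
            i += 1
    return "".join(out)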
But some people (plenty in this thread) really do think "pick a delimiter that won't be in the data" - and then forget quoting and/or escaping - is a viable solution.
Even if you wanted them, we use backslashes to escape strings in most common programming languages just fine. The problem with CSV is that commas aren't easy to recognize, because they might be within a single- or double-quoted string, or might just be a separator.
Can strings in CSV have newlines? I bet parsers disagree since there's no spec really.
For automated serialization, plain text formats won out, because it is easy to implement a minimal working solution (both import and export) and, more importantly, almost all systems agree on what plain text is.
We don't really have Apple formatted text, that will show up as binary on windows. Especially if you are just transferring id's and numbers, those will fall within ascii and that will work even if you are expecting unicode.
I did always use TSV and I think the original use of CSV could have used that.
But TSV would still have many issues.
Based on that experience I have come to one key, but maybe, counter-intuitive truth about interchange formats:
- Too much freedom is bad.
Why? Generating interchange data is cheaper than consuming it, because the creator only needs to consider the stuff they want to include, whereas the consumer needs to consider every single possible edge case and/or scenario the format itself can support.
This is why XML is WAY more costly to ingest than CSV, because in XML someone is going to use: attributes, CDATA, namespaces, comments, different declarations, includes, et al. In CSV they're going to use rows, a field separator, and quotes (with or without escaping). That's it. That's all it supports.
Sqlite as an interchange format is a HORRIFYING suggestion, because every single feature Sqlite supports may need to be supported by consumers. Even if you curtailed Sqlite's vast feature set, you've still created something vastly more expensive to consume than XML, which itself is obnoxious.
My favorite interchange formats are, in order:
- CSV, JSON (inc. NDJSON), YAML, XML, BSON (due to type system), MessagePack, Protobuf, [Giant Gap] Sqlite, Excel (xlsx, et al)
More features mean more cost, more edge cases, more failures, more complex consumers. Keep in mind, this is ONLY about interchange formats between two parties, I have wildly different opinions about what I would use for my own application where I am only ever the creator/consumer, I actually love Sqlite for THAT.
What about Parquet and the like?
So I might do things like have every step in a pipeline begin development by reading from and writing to CSV. This helps with parallel dev work and debugging, and is easy to load into any intermediate format.
> do you use some preferred intermediate format?
This is usually dictated by speed vs money calculations, weird context issues, and familiarity. I think it's useful to look at both "why isn't this a file" and "why isn't this all in memory" perspectives.
Orc also looks good, but isn't well supported. I think parquet is optimal for now for most analytical use-cases that don't require human readability.
What you say is fair. CSV is underspecified though; there's no company called CSV that's gonna sue for trademark enforcement, and there's no official CSV standard library that everyone uses. (Some exist, but there are so many naive implementations written from first principles, because how hard could it be? Just output records using a comma and a newline (of which there are three possible options).)
How often do you deal with multiple CSV files to represent multiple tables that are actually what's used by vendors internally, vs one giant flattened CSV with hundreds of columns and lots of empty cells? I don't have your level of experience with CSVs, but I've dealt with them being a mess, where the other side implements whatever they think is reasonable given the name "comma separated values".
With sqlite, we're in the Internet age and so I presume this hypothetical developer would use the sqlite library and not implement their own library from scratch for funsies. This then leads to types, database normalization, multiple tables. I hear you that too many choices can be bad, and XML is a great example of this, but sqlite isn't XML and isn't CSV.
It's hard to have this discussion in the abstract so I'll be forthcoming about where I'm coming from, which is CSV import/export between vendors for stores, think like Doordash to UberEATS. The biggest problem we have is images of the items, and how to deal with that. It's an ongoing issue how to get them, but the failure mode, which does happen, is that when moving vendor, they just have to redo a lot of work that they shouldn't have to.
Ultimately the North Star I want to push towards is moving beyond CSVs, because it would spare the people who currently have to hand-edit the CSV so every row imports properly. Problems would still exist, but they'd look more like what you see with XML files, which has its shortcomings, as you mention, but at least once you understand how a vendor is using it, individual records are generally understandable.
I was moved so I don't deal with import export currently, but it's because sqlite is so nice to work with on personal projects where it's appropriate that I want to push the notion of moving to sqlite over csvs.
I use Sqlite for a static site! Generating those static pages out as individual files would involve millions of files. So instead I serve up a sqlite database over http, and use a sqlite wasm driver [0] to load (database) pages as needed. Good indexing cuts down on the number of pages it grabs, and I can even get full text search!
The only feature I'm missing is compression, which is complicated because popular extensions like sqlite-zstd are written in Rust.
For my own parser, I made everything `\` escaped: outside of a quote- or double-quote-delimited string, any character prefixed with a `\` is read verbatim. There are no special exceptions where `\,` produces a comma but `\a` produces something else; the character after the backslash is always taken literally. This makes it a good rule, because it is only one rule with no exceptions.
Regarding quoting and escaping, there are two options that make sense to me - either use quoting, in which case quotes are self-escaped and that's that; or use escaping, in which case quotes aren't necessary at all.
I don't know why so many people think every solution needs to be a perfect fit for every problem in order to be viable. CSV is good at certain things, so use it for those things! And for anything it's not good at, use something else!
You don't always get to pick the format in which data is provided to you.
CSV can just as easily support escaping as any other format, but there is no agreement for a CSV format.
After all, a missed escape can just as easily destroy a JSON or XML structure. And parallel processing of text is already a little sketchy simply because UTF-8 exists.
I mean, you don't usually parallelize reading a file in the first place, only processing what you've already read and parsed. So read each record in one process and then add it to a multiprocessing queue for multiple processes to handle.
And data corruption is data corruption. If a movie I'm watching has a corrupted bit I don't mind a visual glitch and I want it to keep playing. But with a CSV I want to fix the problem, not ignore a record.
Do you really have a use case where reading itself is the performance bottleneck and you need to parallelize reading by starting at different file offsets? I know that multiple processes can read faster from certain high-end SSD's than just one process, but that's a level of performance optimization that is pretty extraordinary. I'm kind of curious what it is!
Because of the "non-local" effect of quotes, you can't just jump into the middle of a file and start reading it, because you can't tell whether you're inside a quoted section or not. If (big if) you know something about the structure of the data, you might be able to guess. So that's why I said "tricky" instead of "impossible".
Contrast to my escaping-only strategy, where you can jump into the middle of a file and fully understand your context by looking one char on either side.
> Do you really have a use case where reading itself is the performance bottleneck and you need to parallelize reading by starting at different file offsets? I know that multiple processes can read faster from certain high-end SSD's than just one process, but that's a level of performance optimization that is pretty extraordinary. I'm kind of curious what it is!
I used to be a data analyst at a management consultancy. A very common scenario would be that I'm handed a multi-gigabyte CSV and told to "import the data". No spec, no schema, no nothing. Data loss or corruption is totally unacceptable, because we were highly risk-sensitive. So step 1 is to go through the whole thing trying to determine field types by testing them. Does column 3 always parse as a timestamp? Great, we'll call it a timestamp. That kind of thing. In that case, it's great to be able to parallelise reading.
> And data corruption is data corruption
Agreed, but I prefer data corruption which messes up one field, not data corruption which makes my importer sit there for 5 minutes thinking the whole file is a 10GB string value and then throw "EOF in quoted field".
And the fix to allow parallel reading is pretty trivial: escape new lines so that you can just keep reading until the first unescaped new line and start at that record.
It is particularly helpful if you are distributing work across machines, but even in the single machine case, it's simpler to tell a bunch of workers their offset/limit in a file.
It's not unsolvable, but now you have a more complicated system.
A better file format would not have this problem.
The fix is also trivial (escape newlines into \n or similar) and would also make the files easier to view with a text editor.
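A rough sketch of how that chunked reading can look in Python, assuming the format really does escape embedded newlines so that every raw newline is a record boundary (path and worker count are placeholders):

import os

def chunk_offsets(path, workers):
    size = os.path.getsize(path)
    step = max(1, size // workers)
    offsets = [0]
    with open(path, "rb") as f:
        for i in range(1, workers):
            f.seek(i * step)
            f.readline()              # skip the partial record we landed inside
            offsets.append(min(f.tell(), size))
    offsets.append(size)
    return offsets

def read_chunk(path, start, end):
    # Each worker reads only records that start inside [start, end).
    with open(path, "rb") as f:
        f.seek(start)
        while f.tell() < end:
            line = f.readline()
            if not line:
                break
            yield line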
There's a dearth of good resources about this around the web, this is the best I've come across: https://georgemauer.net/2017/10/07/csv-injection.html
"don't trust customer input without stripping or escaping it" feels obvious, but I don't think it stands up to scrutiny. What exactly do you strip or escape when you're trying to prevent an unknown multitude of legacy spreadsheet clients that you don't control from mishandling data in an unknown variety of ways? How do you know you're not disrupting downstream customer data flows with your escaping? The core issue, as I understand it, stems from possible unintended formula execution – which can be prevented by prefixing certain cells with a space or some invisible character (mentioned in the linked post above). This _does_ modify customer data, but hopefully in a way that unobtrusive enough to be acceptable. All in all, it seems to be a problem without a perfect solution.
Definitely agree there's no perfect solution. There's some escaping that seems to work ok, but that's going to break CSV-imports.
An imperfect solution is that applications should be designed with task-driven UIs so that they know the intended purpose of a CSV export and can make the decision to escape/not escape then. Libraries can help drive this by designing their interfaces in a similar manner. Something like `export_csv_for_eventual_import()`, `export_csv_for_spreadsheet_viewing()`.
Another imperfect solution would be to ... ugh...generate exports in Excel format rather than CSV. I know, I know, but it does solve the problem.
Or we could just get everyone in the world to switch to emacs csv-mode as a csv viewer. I'm down with that as well.
The intention-based philosophy of all this makes a lot of sense, was eye opening, and I agree it should be the first approach. Unfortunately after considering our use cases, we quickly realized that we'd have no way of knowing how customers intend to use the csv exports they've requested - we've talked to some of them and it's a mix. We could approach things case by case but we really just want a setup which works well 99% of the time and mitigates known risk. We settled on the prefixing approach and have yet to receive any complaints about it, specifically using a space character with the mind that something unobtrusive (eg. easily strippable) but also visible, would be best - to avoid quirks stemming from something completely hidden.
Thanks again for your writing and thoughts; like I said above, I haven't found much else of quality on the topic.
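For anyone wanting to try the same thing, a minimal sketch of that prefixing approach (the trigger-character list is my assumption, based on the commonly cited =, +, -, @ plus tab and CR):

FORMULA_TRIGGERS = ("=", "+", "-", "@", "\t", "\r")

def defuse_cell(value: str) -> str:
    # Prefix a space so spreadsheet apps treat the cell as text rather than a formula.
    if value.startswith(FORMULA_TRIGGERS):
        return " " + value
    return value

# defuse_cell("=HYPERLINK(...)") -> " =HYPERLINK(...)"
# defuse_cell("plain value")     -> "plain value"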
Or you could just use the ISO standard .xlsx, which is a widely supported format that is not Excel specific but has first class support in Excel.
That's honestly a good solution and the end users prefer it anyway.
Then the user doesn't have Excel as the default program for opening the file and has to jump through a couple of safety hoops
Nah, they just quickly learn to rename it .csv or .xls and excel will open it
I need to interface with a lot of non-technical people who exclusively use Excel. I give them .xlsx files. It's just as easy to export .xlsx as it is to export .CSV and my customers are happy.
If xlsx works for all your use cases that's great, a much better solution that trying to sidestep these issues by lightly modifying the data. It's not an option for us, and (I'd imagine) a large contingent of export tools which can't make assumptions about downstream usage.
I responded that this was Excel's problem, not ours, and that nobody would assign a CVE to our product for such a "vulnerability". How naive I was! They forwarded me several such CVEs assigned to products that create CSVs that are "unsafe" for Excel.
Terrible precedent. Ridiculous security theater.
I agree with the characterization ("security theater") of these bug reports. The problem is that the intentions of these reports don't make the potential risk less real, depending on the setting, and I worry that the "You're just looking for attention" reaction (a very fair one!) leads to a concerning downplaying of this issue across the web.
As a library author, I agree this very well may not be something that needs to be addressed. But as someone working in a company responsible for customers, employees, and their sensitive information, disregarding this issue disregards the reality of the tools these people will invariably use, downstream of software we _are_ responsible for. Aiming to make this downstream activity as safe as possible seems like a worthy goal.
Many of the criticisms of CSV I'm reading here boil down to something like: CSV has no authoritative standard, and everyone implements it differently, which makes it bad as a data interchange format.
I agree with those criticisms when I imagine them from the perspective of a user who is not also a programmer. If this user exports a CSV from one program, and then tries to load the CSV into a different program, but it fails, then what good is CSV to them?
But from the perspective of a programmer, CSV is great. If a client gives me data to load into some app I'm building for them, then I am very happy when it is in a CSV format, because I know I can quickly write a parser, not by reading some spec, but by looking at the actual CSV file.
Parsing CSV is quick and fun if you only care about parsing one specific file. And that's the key: It's so quick and fun, that it enables you to just parse anew each time you have to deal with some CSV file. It just doesn't take very long to look at the file, write a row-processing loop, and debug it against the file.
The beauty of CSV isn't that it's easy to write a General CSV Parser that parses every CSV file in the wild, but rather that it's easy to write specific CSV parsers on the spot.
Going back to our non-programmer user's problem, and revisiting it as a programmer, the situation is now different. If I, a programmer, export a CSV file from one program, and it fails to import into some other program, then as long as I have an example of the CSV format the importing program wants, I can quickly write a translator program to convert between the formats.
There's something so appealing to me about simple-to-parse-by-hand data formats. They are very empowering to a programmer.
This is fine if you can hand-check all the data, or if you are okay if two offsetting errors happen to corrupt a portion of the data without affecting all of it.
Also I find it odd that you call it "easy" to write custom code to parse CSV files and translate between CSV formats. If somebody gives you a JSON file that isn't valid JSON, you tell them it isn't valid, and they say "oh, sorry" and give you a new one. That's the standard for "easy." When there are many and diverse data formats that meet that standard, it seems perverse to use the word "easy" to talk about empirically discovering the quirks in various undocumented dialects and writing custom logic to accommodate them.
Like, I get that a farmer a couple hundred years ago would describe plowing a field with a horse as "easy," but given the emergence of alternatives, you wouldn't use the word in that context anymore.
But it isn't that reliably easy with JSON. Sometimes I have clients give me data that I just have to work with, as-is. Maybe it was invalid JSON spat out by some programmer or tool long ago. Maybe it's just from a different department than my contact, which might delay things for days before the bureaucracy gets me a (hopefully) valid JSON.
I consider CSV's level of "easy" more reliable.
And even valid JSON can be less easy. I've had experiences where writing the high-level parsing for some JSON file, in terms of a JSON library, was less easy and more time-consuming than writing a custom CSV parser.
Subjectively, I think programming a CSV parser from basic programming primitives is just more fun and appealing than programming in terms of a JSON library or XML library. And I find the CSV code is often simpler and quicker to write.
But the premise of CSV is so simple, that there are only four quirks to empirically discover: cell delimiter, row delimiter, quote, escaped-quote.
I think it's "easy" to peek at the file and say, "Oh, they use semicolon cell delimiters."
And it's likewise "easy" to write the "custom logic", which is about as simple as parsing something directly from a text stream gets. I typically have to stop and think a minute about the quoting, but it's not that bad.
If a programmer is practiced at parsing from a text stream (a powerful, general skill that is worth exercising), then I think it is reasonable to think they might find parsing CSV by hand to be easier and quicker than parsing JSON (etc.) with a library.
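To give a sense of scale, here is roughly what such a one-off parser looks like for the common dialect (comma delimiter, fields quoted with double quotes, quotes escaped by doubling); for a real file you'd peek at it first and tweak the delimiter or quoting rules as needed:

def parse_csv(text, delim=","):
    rows, row, field = [], [], []
    i, n, in_quotes = 0, len(text), False
    while i < n:
        c = text[i]
        if in_quotes:
            if c == '"':
                if i + 1 < n and text[i + 1] == '"':
                    field.append('"')  # doubled quote -> literal quote
                    i += 1
                else:
                    in_quotes = False
            else:
                field.append(c)        # commas and newlines inside quotes are data
        elif c == '"':
            in_quotes = True
        elif c == delim:
            row.append("".join(field)); field = []
        elif c == "\n":
            row.append("".join(field)); field = []
            rows.append(row); row = []
        elif c != "\r":
            field.append(c)
        i += 1
    if field or row:
        row.append("".join(field))
        rows.append(row)
    return rows

# parse_csv('a,"b ""x"", y",c\n1,2,3\n')
# -> [['a', 'b "x", y', 'c'], ['1', '2', '3']]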
The worst part about CSV is that anyone can write a parser in about 30 minutes, meaning that it's very easy to get incorrect implementations, incorrect data, and other strange undefined behaviors. But to be clear, JSON and YAML also have issues, with everyone trying to reinvent the wheel constantly. XML is rather ugly, but it seems to be the most resilient.
Namespaces are used to qualify tags and attributes in XML elements. But they're also used by XSD to qualify the names of types defined in the schema. A sequence element's type is indicated by the value of its "type" attribute. The attribute value is a string that is the namespace-qualified name of the type.
So, if you want to change the alias of an XML namespace in an XSD schema, you can't just use your XML library's facilities for namespace management. You also have to go find the "type" attributes (but not all of the "type" attributes), parse their values, and do the corresponding alias change in the type name.
Don't use a string for a thing that is not a string! I guess in XML attributes you have no choice. XAML improved on the situation a bit.
[1]: https://github.com/dgoffredo/stag/tree/master/src/stag/xsd-u...
... What makes the GP's problem so much more amusing. XML was the last place I'd expect to see it.
Rest has this same issue.
I've seen this when trying to integrate with 3rd party apis.
Status Code 200 Body: Sorry bro, no data.
Even then, this is subject to debate. Should a 404 only be used when the endpoint doesn't exist? When we have no data to return? Etc.
At least unless you use application/problem+json or application/problem+xml MIME types but those are still just in draft stage
https://datatracker.ietf.org/doc/html/rfc9457#name-the-probl...
So I cannot trust XML in depth, and depend on using a library that bought the spec and hopefully adheres to it.
They're not without their drawbacks, like no official standards etc, but they do their job well.
I will be bookmarking this like I have the ini critique of toml: https://github.com/madmurphy/libconfini/wiki/An-INI-critique...
I think the first line of the toml critique applies to CSV: it's a federation of dialects.
Until I worked quite a bit with them and realized that there's always schema in the data, otherwise it's just random noise. The question is who maintains the schema, you or a dbms.
Re. formats -- the usefulness comes from features (like format enforcing). E.g. you may skip .ini at all and just go with lines on text files, but somewhere you still need to convert those lines to your data, there's no way around it, the question is who's going to do that (and report sane error messages).
My experience has indicated the exact opposite. CSVs are the only "structured" format nobody can claim to parse 100% (ok probably not true thinking about html etc, just take this as hyperbole.) Just use a well-specified format and save your brain-cells.
Occasionally, we must work with people who can only export to csv. This does not imply csv is a reasonable way to represent data compared to other options.
You don't need to though, since in most cases you just need to support whatever CSV format the tool you're handling produces, unless of course you're trying to write the next Excel/Google Sheets competitor.
I can tell an MBA guy to upload a CSV file and I'll take care of it. Imagine I tell him I need everything in a PARQUET file!!! I'm no longer a team player.
Do the same with a .CSV file and you'll have to teach those people how to use the .CSV importer in Excel and also how to set up the data types for each column etc. It's a non trivial problem that forces you down to a few million people.
.CSV is a niche format for inexperienced software developers.
Thanks
(I'm assuming the technical person can easily write a basic parsing script for the CSV data - which can flag, if not fix, most of the format problems.)
For a dataset of any size, my experience is that most of the time & effort goes into handling records which do not comply with the non-technical person's beliefs about their data. Which data came from (say) an old customer database - and between bugs in the db software, and abuse by frustrated, lazy, or just ill-trained CSR's, there are all sorts of "interesting" things, which need cleaning up.
On top of all the problems people mention here involving the precise definition of the format and quoting, it's outright shocking how long it takes to parse ASCII numbers into floating point. One thing that stuck with me from grad school was that you could do a huge number of FLOPS on a matrix in the time it would take to serialize and deserialize it to ASCII.
Maybe it's circular, but: it always loads correctly into Excel, and if you want to load it into a spreadsheet you can add text formatting and even formulas, checkboxes and stuff, which can be a lot of fun.
I've since started looking at parquet files – which turned out to not be friendly to append-only operations. I've ended up implementing writing events into ipc files which then periodically get "flushed" into the parquet files. It works and it's efficient – but man is it non-trivial to implement properly!
My point here is: for a regular developer – CSV (or jsonl) is still the king.
Did you try setting `PRAGMA synchronous=FULL` on your connection? This forces fsync() after writes.
That should be all that's required if you're using an NVMe SSD.
But I believe most microSD cards do not even respect fsync() calls properly and so there's technically no way to handle power offs safely, regardless of what software you use.
I use SanDisk High Endurance SD cards because I believe (but have not fully tested) that they handle fsync() properly. But I think you have to buy "industrial" SD cards to get real power fail protection.
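For reference, setting that pragma from Python's sqlite3 module looks something like this (path and schema are made up):

import sqlite3

conn = sqlite3.connect("/data/logger.db")
conn.execute("PRAGMA synchronous=FULL")  # ask SQLite to fsync() at every commit
with conn:  # the transaction commits (and syncs) when this block exits
    conn.execute("CREATE TABLE IF NOT EXISTS samples (ts REAL, value REAL)")
    conn.execute("INSERT INTO samples VALUES (?, ?)", (1712345678.0, 21.5))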
SQLite was fine until the realities of that environment hit.
0) I need to save the most data over time and my power budget is unpredictable due to environmentals.
1) When should I commit? SQLite commit per insert slows down, impacts battery life, impacts sample rate. Practically you could get away with batching all data for a small period.
2) SQLite is slow to repair databases. Partially written file would often take longer to repair than we had battery to run.
CSV based format filled that niche. First column was line-column count to support firmware upgrades. Last column is line-checksum. Another column indicating if this line was the last for an event. Parser skips corrupted lines/entries.
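The parent's exact format isn't spelled out, but a hypothetical sketch of that kind of self-checking line (column count up front, a flag for "last line of event", a CRC at the end, corrupted lines skipped on read) could look like this:

import binascii

def write_line(out, fields, last_for_event):
    # Assumes the field values themselves contain no commas (sensor readings, timestamps).
    cols = [str(len(fields) + 3)] + [str(f) for f in fields] + ["1" if last_for_event else "0"]
    body = ",".join(cols)
    out.write(body + ",%08x\n" % binascii.crc32(body.encode()))

def read_lines(inp):
    for line in inp:
        body, _, checksum = line.rstrip("\n").rpartition(",")
        try:
            if int(checksum, 16) != binascii.crc32(body.encode()):
                continue                        # bad checksum: skip the corrupted line
            cols = body.split(",")
            if int(cols[0]) != len(cols) + 1:   # declared count includes the checksum column
                continue                        # truncated or merged line: skip it
        except ValueError:
            continue
        yield cols[1:-1], cols[-1] == "1"       # (fields, last_for_event)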
I think the industry standard for supporting this is something like iceberg or delta, it's not very lightweight, but if you're doing anything non-trivial, it's the next logical move.
For a long time I was very wary of CSV until I learnt Python and started using its excellent csv standard library module.
So I largely stuck to the Python standard library where I could, and most of the operations I had at the time did not require data analysis on a server, that was mostly done in a database. Often the job was validating and transforming the data to then insert it into a database.
As the Python packaging ecosystem matured and I found I could easily use Pandas everywhere it just wasn't my first thing I'd reach to. And occasionally it'd be very helpful to iterate through them with the csv module, only taking a few MBs of memory, vs. loading the entire dataset into memory with Pandas.
Compare this CSV
field1,field2,fieldN
"value (0,0)","value (0,1)","value (0,n)"
"value (1,0)","value (1,1)","value (1,n)"
"value (2,0)","value (2,1)","value (2,n)"
To the directly-equivalent JSON:
[["field1","field2","fieldN"],
["value (0,0)","value (0,1)","value (0,n)"],
["value (1,0)","value (1,1)","value (1,n)"],
["value (2,0)","value (2,1)","value (2,n)"]]
The JSON version is only marginally bigger (just a few brackets), but those brackets represent the ability to be either simple or complex. This matters because you wind up with terrible ad-hoc nesting in CSV ranging from entries using query string syntax to some entirely custom arrangement:
person,val2,val3,valN
fname=john&lname=doe&age=55&children=[jill|jim|joey],v2,v3,vN
And in these cases, JSON's objects are WAY better.
Because CSV is so simple, it's common for people to avoid using a parsing/encoding library. Over the years, I've run into this particular kind of issue a bunch:
["val1", "val2", "unexpected,comma", "valN"].join(',')
//outputs `val1,val2,unexpected,comma,valN` which has one too many items
JSON parsers will not only output the expected values every time, but your language likely uses one of the super-efficient SIMD-based parsers under the surface (probably faster than what you are doing with your custom CSV parser).
Another point is standardization. Does that .csv file use commas, spaces, semicolons, pipes, etc? Does it use CR, LF, or CRLF? Does it allow escaping quotations? Does it allow quotations to escape commas? Is it utf-8, UCS-2, or something different? JSON doesn't have these issues because these are all laid out in the spec.
JSON is typed. Sure, it's not a LOT of types, but 6 types is better than none.
While JSON isn't perfect (I'd love to see an official updated spec with some additional features), it's generally better than CSV in my experience.
[["field1","field2","fieldN"],
["value (0,0)","value (0,1)","value (0,n)"],
["value (1,0)","value (1,1)","value (1,n)"],
["value (2,0)","value (2,1)","value (2,n)"]]
I think it's a big stretch to use that JSON for comparison. In practice, one is much more likely to see this:
[
{
"field1": "value (0,0)",
"field2": "value (0,1)",
"fieldN": "value (0,n)"
},
{
"field1": "value (1,0)",
"field2": "value (1,1)",
"fieldN": "value (1,n)"
},
{
"field1": "value (2,0)",
"field2": "value (2,1)",
"fieldN": "value (2,n)"
}
]
{
headers: ["field1","field2","fieldN"],
values: [["value (0,0)","value (0,1)","value (0,n)"],
["value (1,0)","value (1,1)","value (1,n)"],
["value (2,0)","value (2,1)","value (2,n)"]]
}
A bit unfair to compare CSV without a parser library to JSON with a library.
Part of the problem here is standards. There's a TON of encoding variations all using the same .csv extension. Making a library that can accurately detect exactly which one is correct is a big problem once you leave the handful of most common variants. If you are doing subfield encoding, you are almost certainly on your own with decoding at least part of your system.
JSON has just one standard and everyone adheres to that standard which makes fast libraries possible.
Unless you need appendability, but then you should probably just use NDJSON/JSONL for a lot of cases.
(That is, of course, assuming the file doesn't contain newlines or other tabs inside of fields. The format should use \t \n etc for those. What a missed opportunity.)
And all kinds of other weirdness, right in ascii. Vertical tabs, LOL. Put those in filenames on someone else's computer if you want to fuck with them. Linux and its common file systems are terrifyingly permissive in the character set they allow for file names.
Nobody uses any of that stuff, though.
Which is absolutely awful for interop and does not deserve being hauled as a feature.
"8. Reverse CSV is still valid CSV" is not true if there are header rows for instance.
But really, whether or not CSV is a good format or not comes down to how much control you have over the input you'll be reading. If you have to deal with random CSV from "in the wild", it's pretty rough. If you have some sort of supplier agreement with someone that's providing the data, or you're always parsing data from the same source, it's pretty fine.
As part of code review, the developer of the feature must be able to roundtrip export -> import a realistic test dataset using the same program and workflow that they expect a consumer of the data to use. They have up to one business day to accomplish this task, and are allowed to ask an end user for help. If they don't meet that goal, the PR is sent back to the developer.
What's fascinating about the exercise is that I've bounced as many "clever" hand-rolled CSV exporters (due to edge cases) as other more advanced file formats (due to total incompatibility with every COTS consuming program). All without having to say a word of judgment.
Data export is often a task anchored by humans at one end. Sometimes those humans can work with a better alternative, and it's always worth asking!
Most features listed in the document are also shared by JSONL, which is my favourite format. It compresses really well with gzip or zstd. Compression removes some plain-text advantages, but ripgrep can search compressed files too. Otherwise, you can:
zcat data.jsonl.gz | grep ...
Another advantage of JSONL is that it's easier to chunk into smaller files.
This whole thread is an uninformed rehash of bad ideas.
I think sinking into something like delta-lake or iceberg probably makes sense at scale.
But yeah, I definitely agree that CSV is not great.
JSONL when compressed with zstd, most of "expensive if large" disappears as well.
Generating and consuming JSONL can easily be in the GB/s range.
I am saying JSONL is a lower bound format, if you can use something better you should. Data interchange, archiving, transmission, etc. It shouldn't be repeatedly queried.
Parquet, Arrow, sqlite, etc are all better formats.
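A small sketch of the compressed-JSONL workflow mentioned above, using Python's standard library (gzip here; zstd would need a third-party package):

import gzip, json

records = [{"id": 1, "name": "foo"}, {"id": 2, "name": "bar, with a comma\nand a newline"}]

# Write: one JSON document per line, compressed.
with gzip.open("data.jsonl.gz", "wt", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Read: stream line by line; embedded newlines are already escaped by JSON, so no quoting ambiguity.
with gzip.open("data.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        rec = json.loads(line)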
csvtk: https://bioinf.shenwei.me/csvtk/
gawk: https://www.gnu.org/software/gawk/manual/html_node/Comma-Sep...
awk: https://github.com/onetrueawk/awk?tab=readme-ov-file#csv
For CLI tools, I’m also a big fan of Miller (https://github.com/johnkerl/miller) to filter/modify CSV and other data sources.
csvquote: https://github.com/dbro/csvquote
Especially for use with existing shell text processing tools, eg. cut, sort, wc, etc.
But of course, I am partial ;)
At any medium+ sized company, you’ll find huge amounts of CSVs being passed around, either stitched into ETL pipelines or sent manually between teams/departments.
It’s just so damn adaptable and easy to understand.
Like a rapidly mutating virus, yes.
> and easy to understand.
Gotta disagree there.
For example, one of the CSVs my company shovels around is our Azure billing data. There are several columns that I just have absolutely no idea what the data in them is. There are several columns we discovered are essentially nullable¹ The Hard Way, when we got a bill which included a charge that I guess Azure doesn't know what day it occurred on? (Or almost anything else about it.)
(If this format is documented anywhere, well, I haven't found the docs.)
Values like "1/1/25" in a "date" column. I mean, I did say it was an Azure-generated CSV, so obviously the bar wasn't exactly high, but then it never is, because anyone wanting to build something with some modicum of reliability, or discoverability, is sending data in some higher-level format, like JSON or Protobuf or almost literally anything but CSV.
If I can never see the format "JSON-in-CSV-(but-we-fucked-up-the-CSV)" ever again, that would spark joy.
(¹after parsing, as CSV obviously lacks "null"; usually, "" is a serialized null.)
I see what you did there.
"You have a REST API? Parquet format available? Delivery via S3? Databricks, you say? No thanks, please send us daily files in zipped CSV format on FTP."
Requires a programmer
> Parquet format
Requires a data engineer
> S3
Requires AWS credentials (api access token and secret key? iam user console login? sso?), AWS SDK, manual text file configuration, custom tooling, etc. I guess with Cyberduck it's easier, but still...
> Databricks
I've never used it but I'm gonna say it's just as proprietary as AWS/S3 but worse.
Anybody with Windows XP can download, extract, and view a zipped CSV file over FTP, with just what comes with Windows. It's familiar, user-friendly, simple to use, portable to any system, compatible with any program. As an almost-normal human being, this is what I want out of computers. Yes the data you have is valuable; why does that mean it should be a pain in the ass?
Financial users live in Excel. If you stick to one locale (unfortunately it will have to be US) then you are OKish.
I've just checked and strings are escaped using the same mechanism as JSON, with backslashes. I should've double-checked against RFC 4180, but thankfully that mechanism isn't currently triggered anywhere for CSV (it's used for log exportation and no data for these triggers that code path). I've also checked the code from other teams and it's just handwritten C++ stream statements inside a loop that doesn't even try to escape data. It also happens to be fine for the same reason (log exportation).
I've also written serializers for JSON, BSON and YAML and they actually output spec-compliant documents, because there's only one spec to pay attention to. CSV isn't a specification, it's a bunch of loosely-related formats that look similar at a glance. There's a reason why fleshed-out CSV parsers usually have a ton of knobs to deal with all the dialects out there (and I've almost added my own by accident), that's simply not a thing for properly specified file formats.
The python3 csv module basically does the job.
To me, CSV is one of the best examples of why Postel's Law is scary. Being a liberal recipient means your work never ends because senders will always find fun new ideas for interpreting the format creatively and keeping you on your toes.
Another piece of Microsoft BS: they should default to the English locale in CSV and do a translation in the background. And let the user choose if they want to save with a different separator. Excel in every part of the world should produce the same CSV by default. Bunch of idiots.
Personally I think we missed a trick by not using the ASCII US and RS characters:
Columns separated by \u001F (ASCII unit separator).
Rows separated by \u001E (ASCII record separator).
No escaping needed.
More about this at:
https://successfulsoftware.net/2022/04/30/why-isnt-there-a-d...
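A minimal sketch of reading and writing that kind of ASCII-delimited text in Python; no escaping at all, which also means the data must never contain those two control characters:

UNIT_SEP = "\x1f"    # ASCII unit separator (US)
RECORD_SEP = "\x1e"  # ASCII record separator (RS)

def dumps(rows):
    return RECORD_SEP.join(UNIT_SEP.join(fields) for fields in rows)

def loads(text):
    return [record.split(UNIT_SEP) for record in text.split(RECORD_SEP) if record]

# dumps([["a", "b,c", "line\nbreak"], ["1", "2", "3"]]) round-trips cleanly,
# commas and newlines included -- but an embedded \x1e or \x1f would still corrupt it.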
Pipe can be useful as a field delimiter. But what do you use as the record delimiter?
Unless you control the producer of the data you are stuck trying to infer the character encoding and transcoding to your destination, and there's no foolproof way of doing that.
Code editors may convert tabs to spaces but are you really editing and saving TSV data files in your code editor?
There are some notes I put together about TSV a few years ago that expand on these points: https://github.com/solidsnack/tsv?tab=readme-ov-file#motivat...
But then some folks came along about 15 years ago screaming about spaces, and they won, so now tabs are 2 or 4 spaces.
The law of unintended consequences strikes again!
Note: not meant to denigrate you space supporters out there.
It is. CSV has been character separated vs comma separated for probably decades now. Most tools you'd use to mess with them allow you to specify which separator character is being used.
Because of this in order to read a plain simple TSV (fields separated by tabs, nothing more) with the Python csv module [2] you need to set the quote character to an improbable value, say € (using the euro sign because HN won't let me use U+1F40D), or just parse it by hand, e.g. row.split('\t').
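For what it's worth, the csv module can also be told to treat no character as a quote at all, which may be closer to "plain TSV" than inventing an improbable quote character (this assumes fields genuinely contain no tabs or newlines):

import csv

with open("data.tsv", newline="") as f:
    for row in csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE):
        print(row)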
column -t | less -S
is pretty nice because you can inspect a file or dataset on a server (no X) before downloading, to see if you even want to bother. Or you can pass it along through a series of pipes to just get the rows you want.
There's at least a slight chance that tab separated values will look ok in a text editor (although in general, nope).
https://sharats.me/posts/automating-the-vim-workplace-3/#usi...
which greatly improved my TSV editing experience in vim. Couple that with autocmd TextChangedI and you get realtime recalculation and alignment.
> The column command appeared in 4.3BSD-Reno.
[1]: https://man7.org/linux/man-pages/man1/column.1.html
[2]: https://man.freebsd.org/cgi/man.cgi?query=column&sektion=1
For other use cases I would use newline separated JSON. It has most of the benefits as written in the article, except the uncompressed file size.
It has a downside though: wherever JSON itself is used, it tends to be a few kilobytes at least (from an API response, for example). If you collect those in a JSONL file the lines tend to get verrrry long and difficult to edit. CSV files are more compact.
JSONL files are a lot easier to work with though. Less headaches.
I wonder how much we have been hindered ourselves by reinventing plain text human-readable formats over the years. CSV -> XML -> JSON -> YAML and that's just the top-level lineage, not counting all the branches everywhere out from these. And the unix folks will be able to name plenty of formats predating all of this.
I don't know what they were thinking, using a UI setting for parsing an interchange format.
There's a way around it, IIRC, with the "From text / csv" command, but that loses a lot of the convenience of double-clicking a CSV file in Explorer or whatever to open it in Excel.
That's it, but it's really bad.
From what I have heard, it's still an issue with Excel, although I assume that Windows may handle plain text better these days (I haven't used it in a while)
You need to write a UTF-8 BOM at the beginning (0xEF, 0xBB, 0xBF) if you want to make sure it's recognized as UTF-8.
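In Python this is just an encoding choice: the "utf-8-sig" codec writes the BOM for you (file name and columns are made up):

import csv

with open("export.csv", "w", newline="", encoding="utf-8-sig") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "city"])
    writer.writerow(["Žofie", "Łódź"])  # the BOM helps Excel recognize the file as UTF-8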
I was on the Power Query team when we were improving the encoding sniffing. An app can scan ahead i.e. 64kB, but ultimately the user needs to just say what the encoding is. All the Power Query data import dialogs should let you specify the encoding.
It works better if you click to "import the data" instead of just opening the csv file with it, and if you then choose the right data types. But having to do this every time to make it work is really annoying, especially when you have a lot of columns, plus people can easily get confused with the data types. I have never seen that much confusion e.g. with macOS's Numbers.
You can use CSV for interchange, but a duck db import script with the schema should accompany it.
For one thing, it talks about needing only to quote commas and newlines... quotes are usually fine... until they are on either side of the value. Then you NEED to quote them as well.
Then there is the question about what exactly "text" is; with all the complications around Unicode, BOM markers, and LTR/RTL text.
"Use the CSV on the Web (CSVW) standard to add metadata to describe the contents and structure of comma-separated values (CSV) data files." — UK Government Digital Service[2][3]
[1] https://www.w3.org/TR/tabular-data-primer/
[2] https://www.gov.uk/government/publications/recommended-open-...
That being said, I noticed .parquet as an export format option on Shopify recently, and am hopeful more providers offer the choice.
It's worth noting that I only ran into this limitation when working with huge federal campaign finance datasets [1] and trying to do some compute-intensive querying. For 99% of use cases, datasette is a similarly magical piece of software for quickly exploring some CSV files.
1. https://github.com/manifold-systems/manifold/tree/master/man...
If CSV is indeed so horrible - and I do not deny that there can be an improvement - how about the clever data people spec out a format that
Does not require a bizarre C++ RPC struct definition library _both_ to write and to read
Does not invent a clever number encoding scheme that requires native code to decode at any normal speed
Does not use a fancy compression algorithm (or several!) that you need - again - native libraries to decompress
Does not, basically, require you be using C++, Java or Python to be able to do any meaningful work with it
It is not that hard, really - but CSV is better (even though it's terrible) exactly because it does not have all of these clever dependency requirements for clever features piled onto it. I do understand the utility of RLE, number encoding etc. I do not, and will not, understand the utility of Thrift/Avro, zstandard and brotli and whatnot over standard deflate, and custom integer encoding which requires you download half of Apache Commons and libboost to decode. Yes, those help the 5% to 10% of the use cases where massive savings can be realised. It absolutely ruins the experience for the other 90 to 95.
But they also give Parquet and its ilk a very high barrier of entry.
One of the biggest challenges with CSV files is the lack of data types on the header line that could help determine the schema for the table.
For example a file containing customer data might have a column for a Zip Code. Do you make the column type a number or a string? The first thousand rows might have just 5 digit numbers (e.g. 90210) but suddenly get to rows with the expanded format (e.g. 12345-1234) which can't be stored in an integer column.
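That failure mode is easy to reproduce with any "infer the type from the first N rows" strategy; a toy sketch:

def infer_column_type(values, sample_size=1000):
    # Naive inference: if every sampled value parses as an int, call the column an int.
    try:
        for v in values[:sample_size]:
            int(v)
        return int
    except ValueError:
        return str

zips = ["90210"] * 1000 + ["12345-1234"]
col_type = infer_column_type(zips)   # -> int, decided before row 1001 was ever seen
# [col_type(z) for z in zips]        # would blow up with ValueError on "12345-1234"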
The second worst thing is that the escape character cannot be determined safely from the document itself.
It makes working with CSV/TSV files super simple.
It's as easy as this:
# Import tidyverse after installing it with install.packages("tidyverse")
library(tidyverse)
# Import TSV
dataframe_tsv <- read_tsv("data/FileFullOfDataToBeRead.tsv")
# Import CSV
dataframe_csv <- read_csv("data/FileFullOfDataToBeRead.csv")
# Mangle your data with dplyr, regular expressions, search and replace, drop NA's, you name it.
<code to sanitize all your data>
Multiple libraries exist for R to move data around, change the names of entire columns, change values in every single row with regular expressions, drop any values that have no assigned value, it's the swiss army knife of data. There are also all sorts of things you can do with data in R, from mapping with GPS coordinates to complex scientific graphing with ggplot2 and others.
Here's an example for reading iButton temperature sensor data: https://github.com/hominidae/ibutton_tempsensors/
Notice that in the code you can do the following to skip leading lines by passing it as an argument: skip = 18
cf1h <- read_csv("data/Coldframe_01_High.csv", skip = 18)
Reminds me of how I built a simple query language which does not require quotation marks around strings, this means that you don't need to escape strings in user input anymore and it prevents a whole bunch of security vulnerabilities such as query injections. The only cost was to demand that each token in the query language be separated by a single space. Because if I type 2 spaces after an operator, then the second one will be treated as part of the string; meaning that the string begins with a space. If I see a quotation mark, it's just a normal quotation mark character which is part of the string; no need to escape. If you constrain user input based on its token position within a rigid query structure, you don't need special escape characters. It's amazing how much security has been sacrificed just to have programming languages which collapse space characters between tokens...
It's kind of crazy that we decided that quotation marks are OK to use as special characters within strings, but commas are totally out of bounds... That said, I think Tab Separated Values TSV are even more broadly applicable.
But if you're ingesting data from other organizations, they will, at one time or another, fuck up every single one of those (as well as the ones mentioned in TFA), no matter how clearly you specify them.
For whatever reason, it flawlessly manages to import most CSV data using that functionality. It is the only way I can reliably import data to excel with datestamps / formats.
Just drag/dropping a CSV file onto a spreadsheet, or "open with excel" sucks.
It inserts an extra row at the top for its pivot table, with entries "Column1, Column2, ...".
So if you export to CSV again, you now have 2 header rows.
So Excel can't roundtrip CSVs, and the more often you roundtrip, the more header rows you get.
You need to remember to manually delete the added header row each time, otherwise software you export back to can't read it.
It creates a green/white coloured pivot table -- do you also observe that?
Now if they would just also allow pasting CSV data as “source” it would be great.
This is what keeps me coming back.
In literally any language I can think of, hassle(json) < hassle(CSV), esp. since CSV received is usually "CSV, but I've screwed it up in a specific, annoying way"
But even ergonomically, in python, can read a csv like:
import csv
[row for row in csv.DictReader(f)]
which imo is not less ergonomic than import json
[json.loads(line) for line in f]
This is a double-edged sword. The "you might even event it yourself" simplicity means that in practice lots of different people do end up just inventing their own version rather than standardizing to RFC-4180 or whatever when it comes to "quote values containing commas", values containing quotes, values containing newlines, etc. And the simplicity means these type of non-standard implementations can go completely undetectable until a problematic value happens to be used. Sometimes added complexity that forces paying more attention to standards and quickly surfaces a diversion from those standards is helpful.
Aside from that: Yes, if CSV were an intentional, defined format, most of us would do something different here and there. But it is not; it is more of a convention that came upon us. CSV „happened“, so to say. No need to defend it more passionately than the fact that we walk on two legs. It could have been much worse and it has surprising advantages over other things that were well thought out before we did it.
I also asked W3C on their GitHub if there was any spec for CSV headers and they said there isn't [1]. Kind of defeats the point of the spec in my opinion.
In my experience (perhaps more niche than yours since you mentioned it has been your day job), the lack of fall back options makes for brittle integrations. Failing entire files due to a borked row can be expensive in terms of time.
Having to ingest large CSV files from legacy systems has made me rethink the value of XML, lol. Types and schemas add complexity for sure, but you get options for dealing with variances in structure and content.
In both cases you'd fail the entire file rather than partial recovery.
RFC4180 is a late attempt at CSV standardization, merely codifying a bunch of sane practices. It also provides a nice specification for generating CSV. But anyone taking care to code from a specification might as well use a proper file format.
The real specification for CSV is as follows: "Valid CSV is whatever is designated as CSV by its emitter". I wish I was joking.
There is literally an infinity of ways CSV can be broken. The developer will bump his head on each as he encounters them, and add a specific fix. After a while, his code will be robust against the local strains of CSV... Until the next mutation is encountered after acquiring yet another company with a bunch of ERPs way past their last extended maintenance era, a history of local adaptations and CSV as a message bus.
Way easier to parse
sqlite3 gets it right.
Typical latin fonts divide characters into three heights: short like "e" or "m", tall like "l" or "P" and deep like "j" or "y". As you may notice, letters only use one or two of these three sections.
Pipe is unique in that it uses all three at the same time from the very top to the very bottom. No matter what latin character you put next to it, it remains distinct. This makes the separators relatively easy to spot.
Pipe is a particularly uncommon character in normal text while commas, spaces, semicolons, etc are quite common. This means you don't need to escape it very often. With an escapable pipe, an escapable newline, and unicode escapes ("\|", "\n", and "\uXXXX") you can handle pretty much everything tabular with minimal extra characters or parsing difficulty.
This in turn means that you can theoretically differentiate between different basic types of data stored within each entry without too much difficulty. You could even embed JSON inside it as long as you escape pipes and newlines.
"string"|123|128i8|12.3f64|false|[1,2,3,4]|{key: "val"}|2025-03-26T11:45:46−12:00
Maybe someone should type this up into a .psv file format (maybe it already exists).
Less likely to appear in normal data. Of course you have to escape it but at the very least the data looks less noisy.
Of these 3 options SAS datasets are my preference, but I'll immediately convert to CSV or Excel. CSV is a close 2nd; once you confirm the quoting/separator conventions it's very easy to parse. I understand why someone may find the CSV format disagreeable, but in my experience the alternatives can be so much worse that I don't worry too much about CSV files.
I looked at the RFC. What is controversial about it?
Yeah, you are spot on with this one (cries in Czech, which used to be encoded in several various ways).
Correctly. A good parser should step through the line one column at a time, and shouldn't even consider newlines that are quoted.
If you're naively splitting the entire file via newline, that isn't 4180's fault, that is your fault for not following the standard or industry norms.
I'll happily concede the UNICODE point however; but I don't know if that makes it controversial.
I much prefer that to any sort of "common institutional memory" that is nevertheless only talked about on random forums. People die, other people enter the field... hello subtle incompatibilities.
It’s sort of like, the computing dream when I was growing up.
+1 to ini files. I like you can mess around with them yourself in notepad. Wish there was a general outline / structure to those though.
The first part, converting data to CSV, works fine with the help of an AI coding assistant.
The reverse part, CSV to database, is getting challenging, and even Claude Sonnet 3.7 is not able to escape newlines correctly.
I am now implementing the data format in JSON, which is much simpler.
For numerical data, nothing beats packing floats into blobs.
It's surely simple, but:
- where is the metadata?
- how do you encode binary?
- how do you index?
- where are relationships? different files?
But god help you if you have to accept CSVs from random people/places, or there's even minor corruption. Now you need an ELT pipeline and manual fix-ups. A real standard is way better for working with disparate groups.
I can depend on parquet. The only real disadvantages with parquet are that they aren't human-readable or mutable, but I can live with that since I can easily load and resave them.
My experience with CSVs is mostly limited to personal projects, and I generally find the format very convenient. That said, I occasionally (about once a year) run into issues that are tricky to resolve.
This is a good blog post and Xan is a really neat terminal tool.
No, CSV is dependently typed. Way cooler ;)
I wrote something about this https://github.com/Ericson2314/baccumulation/blob/main/datab...
9. Excel hates CSV
It clearly means CSV must be doing something right.
This is one area where LibreOffice Calc shines in comparison to Excel. Importing CSVs is much more convenient.
You have to replace the "T" separator with a space and also any trailing "Z" UTC suffix (and I think any other timezone/offset as well?) for Excel to be able to parse it as a time/date.
How is Excel like an Incel? Both of them think everything is a date.
They did finally add options to turn off the common offenders, but I have a deeply ingrained distrust at this point.
[0] https://www.theverge.com/2020/8/6/21355674/human-genes-renam...
It's not the fault of CSV that Excel changes which file format it uses based on locale.
I need to uncheck File > Options > Advanced > "Use system separators" and set the decimal separator to a dot to get Excel to generate CSV files with English-style decimal points, though the values are still semicolon-separated. I can't be bothered to find out where Microsoft moved the CSV export dialog again in the latest version of Office to get it to spit out comma-separated fields.
Point is, CSV is a term for a bunch of loosely-related formats that depends among other things on the locale. In other words, it's a mess. Any sane file format either mandates a canonical textual representation for numbers independent of locale (like JSON) or uses binary (like BSON).
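If you're stuck consuming one of those locale-flavoured files anyway, the usual workaround looks something like this (a sketch in Python; the column names are made up):

    import csv, io

    data = "name;amount\nwidget;34593,12\n"

    reader = csv.reader(io.StringIO(data), delimiter=";")
    next(reader)                                   # skip the header row
    for name, amount in reader:
        value = float(amount.replace(",", "."))    # normalise the decimal comma
        print(name, value)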
It's not though, is what I'm saying. It's saving semicolon-separated files, not CSV files. CSV files have commas separating the values. Saying that Excel saves "semicolon-separated CSV files" is nonsensical.
I can save binary data in a .txt file, that doesn't make it a "text file with binary data"; it's a binary file with a stupid name.
This format is not comma separated values. But Excel calls it CSV.
The headaches come if people assume that a csv file must be comma separated.
There are office programs that save CSV with the proper comma delimiter regardless of the locale.
There are people who run non-local locales for various good reasons.
There are technically savvy people who have to deal with CSV shenanigans and can and will send it with the proper comma delimiter.
Then I go do it somewhere else and have to set it up all over again. Drives me nuts. How the default behavior isn't to just put them in the way you'd expect is mind-boggling.
Use a different separator if you need to.
CSV is the Vim of formats. If you get a CSV from 1970 you can still load it.
It was pretty straightforward (although tedious) to write custom CSV data exports in embedded C, with ZERO dependencies.
I know, I know, only old boomers care about removing pip from their code dev process, but, I'm an old boomer, so it was a great feature for me.
Straight out of libc I was able to dump data in real-time, that everyone on the latest malware OSes was able to import and analyze.
CSV is awesome!
The following are all valid CSV, and they should all mean the same thing, depending on your point of view:
1) foo, bar, foobar
2) "foo", "bar", "foobar"
3) "foo", bar, foobar
4) foo; bar; "foobar"
5) foo<TAB>bar<TAB>"foobar"
6) foo<EOL>bar<EOL>foobar<EOL>
Have fun writing that parser!
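One partial answer, at least for the delimiter question, is to let a heuristic guess the dialect; Python's csv.Sniffer is one such heuristic (and it can guess wrong):

    import csv, io

    samples = [
        'foo, bar, foobar\n',
        '"foo", "bar", "foobar"\n',
        'foo; bar; "foobar"\n',
        'foo\tbar\t"foobar"\n',
    ]

    for sample in samples:
        dialect = csv.Sniffer().sniff(sample, delimiters=",;\t")
        print(repr(dialect.delimiter), next(csv.reader(io.StringIO(sample), dialect)))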
Honestly, if there is no comma to separate the values, then it's not CSV. Maybe call it CSV for "character-separated values", or ASV for "anything-separated values". But you're right, this makes it hard when everyone is doing whatever they want. IMV, supporting "" quoting makes supporting anything else redundant.
Most reasonably large CSV files will have issues parsing on another system.
I suspect it's 1) people who worked with legacy systems AND LIKED IT, or 2) people who never worked with legacy systems before and need to rediscover painful old lessons for themselves.
It feels like trying to convince someone, why its a bad idea to store the year as a CHAR(2) in 1999, unsuccessfully.
but at least in my case, I would not like to use those characters because they are cumbersome to work with in a text editor. I like very much to be able to type out CSV columns and rows quickly, when I need to.
Does it though? Seems to be importing from and exporting to CSV just fine? Elaborate maybe.
Never want to use sas7bdat again.
CSV is lovely. It isn't trying to be cool or legendary. It works for the reasons the author proposes, but isn't trying to go further.
I work in a world of VERY low power devices and CSV sometimes is all you need for a good time.
If it doesn't need to be complicated, it shouldn't be. There are always times when I think to myself that CSV fits, and that is what makes it a legend. Are those the times when I want to parallelise or deal with gigs of data in one sitting? Nope. There are more complex formats for that. CSV has a place in my heart too.
Thanks for reminding me of the beauty of this legendary format... :)
> CSV is not a binary format, can be opened with any text editor and does not require any specialized program to be read. This means, by extension, that it can both be read and edited by humans directly, somehow.
This is why you should run screaming when someone says they have to integrate via CSV. It's because they want to do this.
Nobody is "pretending CSV is dead." It'll never die, because some people insist on sending hand-edited, unvalidated data files to your system and not checking for the outcome until mid-morning the next day when they notice that the text selling their product is garbled. Then they will frantically demand that you fix it in the middle of the day, and they will demand that your system be "smarter" about processing their syntactically invalid files.
Seriously. I've worked on systems that took CSV files. I inherited a system in which close to twenty "enhancement requests" had been accepted, implemented, and deployed to production that were requests to ignore and fix up different syntactical errors, because the engineer who owned it was naive enough to take the customer complaints at face value. For one customer, he wrote code that guessed at where to insert a quote to make an invalid line valid. (This turned out to be a popular request, so it was enabled for multiple customers.) For another customer, he added code that ignored quoting on newlines. Seriously, if we encountered a properly quoted newline, we were supposed to ignore the quoting, interpret it as the end of the line, and implicitly append however many commas were required to make the number of fields correct. Since he actually was using a CSV parsing library, he did all of this in code that would pre-process each line, parse the line using the library, look at the error message, attempt to fix up the line, GOTO 10. All of these steps were heavily branched based on the customer id.
The first thing I did when I inherited that work was make it clear to my boss how much time we were spending on CSV parsing bullshit because customers were sending us invalid files and acting like we were responsible, and he started looking at how much revenue we were making from different companies and sending them ultimatums. No surprise, the customers who insisted on sending CSVs were mostly small-time, and the ones who decided to end their contracts rather than get their shit together were the least lucrative of all.
> column-oriented data formats ... are not able to stream files row by row
I'll let this one speak for itself.
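For context, a columnar file can still be read incrementally in batches; a sketch assuming pyarrow and a local file called data.parquet (process() is a stand-in for whatever you do with a row):

    import pyarrow.parquet as pq

    pf = pq.ParquetFile("data.parquet")
    for batch in pf.iter_batches(batch_size=1024):   # stream record batches, not the whole file
        for row in batch.to_pylist():                # each row as a plain dict
            process(row)                             # hypothetical per-row handler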