The article talks about reading and parsing CSV data of unknown variants, but then jumps straight to the solution of using a different format altogether. But you can only switch to a different format if you are producing data, not if you are reading it!
And if you are in control of producing data, just produce strict RFC 4180-compliant CSV data and everybody will be able to read it just fine. There is no need to make your reader’s lives difficult by using yet another non-standard data format.
I just had a look at RFC 4180. This is the grammar they suggest:
> file = [header CRLF] record *(CRLF record) [CRLF]
I find it kind of wild that you have to have at least one record. Suppose I have a program that lists the events that occurred on a given day. How do I represent the fact that the program ran successfully but that there weren't any events on that day?
Event - Logging Started
Additionally, Logging Ended (for the day) allows you to check for intermediate crashes and startup errors, kinda like a pulse / health check
Not necessarily how I'd do it intuitively, but doesn't seem that crazy
SQLite isn't a standard per se, but outputting an SQLite db file, if you're writing the export code, is easy enough, and enough of a standard that I dare say you'd be doing fine to output .db files marked as SQLite.
> And *if you are in control of producing data*, just produce strict RFC 4180-compliant CSV data
The point of the comment was that you likely aren’t in control of producing the data, so the article’s recommendation of using an entirely different format is likely also invalid. I’m not sure what you are arguing against as you seem to actually agree with them.
From a pragmatic viewpoint, the CSVs that I get from finance (usually saved as .xlsx) have the same issues for parsing the data as a CSV. But since the issues are consistent, I can automate conversion from .xlsx to CSV, then process the CSV using awk to eliminate errors in further parsing the CSV (for import, analysis, etc.). Sure, I'm essentially parsing the CSV twice but, because the parsing issues are consistent, I can automate to make the process efficient.
Obviously that wouldn't work for CSVs with different structures, but can be effective in the workplace in certain scenarios.
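A minimal sketch of that first conversion step, assuming pandas (with openpyxl) is available and the quirks really are consistent per source; the function and file-handling details here are made up, not the commenter's actual pipeline:

```python
import csv
import pandas as pd  # needs openpyxl installed for .xlsx files

def xlsx_to_clean_csv(xlsx_path: str, csv_path: str) -> None:
    # Read every cell as text so pandas doesn't reformat dates
    # or strip leading zeros on the way through.
    df = pd.read_excel(xlsx_path, dtype=str)
    # ...source-specific (but consistent) cleanups would go here...
    # Write fully quoted CSV so the later awk/import steps never
    # have to guess about embedded commas or newlines.
    df.to_csv(csv_path, index=False, quoting=csv.QUOTE_ALL)
```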
As long as a human didn't generate the file, all things can be automated.
However, if you ever have the misfortune of dealing with human generated files (particularly Excels) then you will suffer much pain and loss.
I once had to deal with a "CSV" which had not one, not two but 6(!) distinct date formats in the same file. Life as a data scientist kinda sucks sometimes :shrug:.
Before 2010 and UTF-8 everywhere, I regularly had the misfortune of dealing with multi-encoding CSVs. Someone got CSVs from multiple sources and catted them together. One source uses ISO 8859-1, another -15, another UTF-8, sometimes a Greek or Russian or even EBCDIC one was in there. Fun trying to guess where one stopped and the other began. Of course, none of them were consistent CRLF- or escape-wise.
This is some next-level response where Excel is called a “broken tool”. You may not agree with their choices or design or anything else. But calling the entirety of the product broken isn’t making a strong case towards the prior point.
Importing data generated from Excel: don't, or force RFC 4180-compliant CSV data.
Exporting data into Excel: provide RFC 4180-compliant CSV data, or just generate minimal XLSX files.
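For the CSV half of that, Python's stdlib is already close to RFC 4180 out of the box; a minimal sketch (file name and rows made up):

```python
import csv

rows = [
    ["id", "name", "note"],
    ["1", "Smith, Jane", 'said "hello"\non two lines'],
]

# The default "excel" dialect gives comma delimiters, "" quote
# escaping, and CRLF row endings -- i.e. RFC 4180-style output --
# as long as the file is opened with newline="".
with open("export.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)
```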
Most Excel users generally don't export to CSV (or practice any data sanity conventions); they seem to believe XLSX is a perfectly fine data exchange format for automated use. (“Oh the import broke? I just added an empty row above the headers, because it looked sloppy.”) Those that do understand that automated data processing means you need to be stricter in what you do in the sheet you are exporting tend to understand how to export proper CSV's from Excel as well.
>Those that do understand that automated data processing means you need to be stricter in what you do in the sheet you are exporting tend to understand how to export proper CSV's from Excel as well
Do you have some recommendations or references about getting data properly out of Excel? I usually avoid using Excel altogether if possible. But obviously that is not always an option.
As someone who has Excel open all day, every day: I don't think Excel users are "producing" data. We are ingesting CSVs/XLSXs, performing modeling/analysis, and then saving as an XLSX.
I don't know anyone who is saving a CSV unless it is the final model output and another system (e.g. TM1) can only ingest the CSV.
If I accidentally save a spreadsheet as a CSV it is a bad day since I probably lost all my formatting, formulas, and additional tabs.
If you’re deep in the Excel world, chances are extremely high that you also have access to SSMS, which has a really, really good data import tool that makes short work of nasty CSV files. The output of this tool doesn’t even have to be SQL Server, it will use any ODBC driver you’ve got installed; you can send the data to Excel or even a new, properly formatted CSV.
And if you want a repeatable package, there is always SSIS.
Look, I would never recommend anyone jump into the Microsoft ecosystem. But when in Rome, do as the Romans do.
There is a screenshot here [1]. The trick is to choose the generic "import data" task rather than "import flat file". Then you have a wizard that lets you play around with the specifications, change things up, look at the errors, go back to tweak things, go forwards to try again, etc. The only improvement I can think of is if you could save what you did as a standalone SSIS package (or whatever).
If you have to use Excel, you have to produce whatever data Excel produces. It all depends on what (explicit or implicit) contracts you have in place with whoever is to consume the data which you produce.
What's bad is when people keep insisting on coming up with new and amazing CSV dialects.
https://www.ietf.org/rfc/rfc4180.txt is very clear about what CSV files are supposed to look like, and the fact that people keep ignoring this for whatever reason is not the format's problem.
And no, "using another format" is not a solution to this. Because: I can just invent a new DSV dialect. Or a JSON dialect. Or a dialect where the field separator is "0xFF00FF00" and the row separator is the string `DECAFCOFFEE` encoded in EBCDIC, all other characters have to be UTF-32, except for a, b and f, which also need to be EBCDIC encoded.
> For starters, it’s rather unreadable when opened in a text editor. But I bet you don’t really do that with your CSVs all that often anyway!
Wrong. I do that with csv files all the time. In fact I even have an amazing vim plugin just for them [0]. That's pretty much the point of having a plaintext tabular data storage format: That I can view and edit it using standard text wrangling utilities.
---
There is a much simpler solution to this problem: Don't accept broken CSV. If people keep ignoring standards, that's their problem.
"Broken" is a sliding scale, and it's unfeasible to refuse engaging at all times.
If you are a multi-billion dollar company creating a new integration, you can demand that your small supplier provide an RFC-4180 compliant file, and even refuse to process it if its schema or encoding is not conformant.
If you are the small supplier of a multi-billion dollar company, you will absolutely process whatever it is that they send you. If it changes, you will even adapt your processes around it.
TFA proposes a nice format that is efficient to parse and in some ways better than CSV, though in other ways it is not. Use it if you can and it makes sense.
I agree up to a point. It is a kind of tug-o-war, and yes, the weight of each side plays an important role there.
Nevertheless, even in projects where my services are talking to something that's bigger, I will, at the very least, ask "why can't it be RFC compliant? Is there a reason?". And without blowing my own horn overly much, quite a few systems larger than mine have changed because someone asked that question.
It doesn't matter when it came out, it doesn't matter that it is descriptive. It is the standard, period.
Yes, CSV is much, much older. In fact it predates personal computers. And it went through changes. Again: None of that matters. We have a standard, we should use the standard, and systems should demand the standard.
Standards are meant to ensure minimal-friction interoperability. If systems don't enforce standards, then there is no point in having a standard in the first place.
Yes, but you could argue that web browsers shouldn't accept broken HTML either. But they do, and that's why there is so much broken HTML out there in the wild. Same with broken CSV -- basically people's measure is "if Excel can read it correctly, it's fine", even if not every CSV library in every programming language can.
And again: None of that matters. I am not talking about formalities here, I am talking about technical realities.
Whether it is formally called a standard or not doesn't change the fact that this is the document everyone points at when determining what CSV is and is supposed to look like. So it is de facto a standard. Call it a "quasi-standard" if that makes you happy.
Oh no; I agree with you completely. I just wanted to point out that the document does not disclaim being a “standard”, it just says that it is not an “Internet standard”.
Oh, I am nothing but liberal when it comes to CSV: Clients get the liberty to either have their requests processed, or get a 400 BAD REQUEST
And yes, I am aware that the standard says this. My counter question to that is: How much client-liberty do I have to accept? Where do I draw the line? How much is too much liberty?
And the answer is: there is no answer. Wherever any system draws that line, it's an arbitrary decision; except for one, which ensures the least surprise and maximum interoperability (aka the point of a standard): to be "conservative", and simply demand the standard.
I think the suggestion reflected a deep understanding that transitioning from decades of wild-west to standardized in the smooth fashion most likely to succeed would require that strategy.
If you don’t accept whatever some org’s data is encoded with, they won’t consider it a win for standards, or swap out whatever is producing that data for something more compliant. They’ll consider it a bug, and probably use some other more flexible processor.
On the other hand, if you can be flexible enough to allow quirks on import while not perpetuating them on export, eventually you and other software built with the same philosophy standardize the field.
I do think there’s a point where things are standardized enough that you can safely stop doing that—when all the extra quirk code is so rarely used as to be irrelevant—but I’m unsure if we’ve reached it yet. It would be something to actually analyze, though, rather than just a philosophical decision.
> On the other hand, if you can be flexible enough to allow quirks on import while not perpetuating them on export, eventually you and other software built with the same philosophy standardize the field.
How? The only thing I can see happening is perpetuation of sloppy use of standards. "Why, why should I change my |-delimited CSV dialect that requires a double-semicolon at the end of each row, which is arbitrarily denoted by either \n or \r or \n\r, when all those programmers will accommodate me, no matter how little sense it makes to do so?"
> I do think there’s a point where things are standardized enough that you can safely stop doing that
I agree. And that point was when someone sat down and penned RFC 4180.
Everything after that point has to justify why it isn't RFC compliant, not the other way around.
No it isn't, in the real world. It's very much your problem if you're the team consuming these files. Try to go tell the head of accounting they need to make all their data RFC 4180 compliant and see how that goes.
> Try to go tell the head of accounting they need to make all their data RFC 4180 compliant and see how that goes
Fun fact: I did. And not just for accounting systems, but all sorts of data ingestion pipelines. Did it work every time? No. Did it work in many cases? Yes. Is that better? Absolutely.
Here is the thing: If I accept broken CSV, where do I stop? What's next? Next thing my webservice backends have to accept broken HTTP? My JSON-RPC backends have to accept JSON with /* */ style block comments? My ODBC load-balancer has to accept natural language instead of SQL statements (I mean, it's the age of the LLM, I could make that possible).
I draw the line at the source constantly changing how it's broken.
If things are broken, but in a predictable, standard for that source way... uggh but at least it's their standard and if some tweak gets the common tools working for that one standard then everyone can move on and be happy.
This keeps coming up as new people discover what CSVs are. An ancient TEXT data exchange format. The lowest vaguely common denominator. A style of format with flavors software long out of support contract are happy to export data in.
The intent of the format is to be human readable and editable. Sure, tab characters can be used instead of commas (TSV files). Yes, there's that "" to escape a quote rule. Oh, and quoted values are optional; unquoted strings are fine as long as they contain no newline or record separator characters.
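For concreteness, those rules in one tiny made-up example: the "" escape, a quoted field holding a comma and a newline, and plain unquoted values everywhere else:

```
id,comment,author
1,"She said ""hi"", then left",plain unquoted value
2,"a field can even span
two lines if it is quoted",another unquoted value
```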
Sure, you could make another CSV inspired format which uses the old mainframe control characters; except as keeps getting pointed out, even programmers often don't know how to enter raw flow control characters on their systems. Who even bashes those out these days? I know I have to look it up every time.
Rejoice that the format is so simple, it's all just text which software might convert to numbers or other values as it might desire.
I agree completely. Its simplicity is what gives it staying power.
When I was an undergrad, I had kind of an anal software engineering 101 professor who was treating the course like he was a scrum master. The deliverable was to make some dumb crud app, and a requirement was it used a "database." It was so stupid simple to write a csv to s3 or local disk that I just used that for the entire project. He tried to fail me for not following the requirements, and I had to go to the dean of CS and argue that by definition, a structured data format on a disk is absolutely a database, and I won. I got graded horribly after that though.
> even programmers often don't know how to enter raw flow control characters on their systems.
Yes, but that is because those characters are not meant to be entered directly. DSV values should either be created by a dedicated DSV editor or they should be constructed by a software library. You would rather use a paint program to create an image instead of writing the image's bytes in a text editor.
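A minimal sketch of the "constructed by a software library" route, using the ASCII unit separator (31) between fields and record separator (30) between records; purely illustrative, not the article's code:

```python
US = "\x1f"  # unit separator: between fields
RS = "\x1e"  # record separator: between records

def write_dsv(rows, path):
    # rows is an iterable of lists of strings
    with open(path, "w", encoding="utf-8", newline="") as f:
        f.write(RS.join(US.join(fields) for fields in rows))

def read_dsv(path):
    with open(path, encoding="utf-8", newline="") as f:
        data = f.read()
    return [record.split(US) for record in data.split(RS) if record]
```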
How many CSVs are generated, edited, or viewed by Notepad.exe and how many by Excel (or Google Sheets)?
I would posit the vast majority of CSVs are generated through some kind of program where you go to File > Export or File > Save As…. In which case, selecting a drop-down option for the file format to be TSV or DSV (with the corresponding file extension) would solve a lot of problems. (Or at least if CSVs from Excel were RFC 4180 compliant by default.)
It is nice that text editors are abundantly available and that they can be used for the task. But once the CSV columns get too wide and irregular, then you probably want to reach for a dedicated spreadsheet program, because it is otherwise too hard to figure out which column you are currently reading.
There is still room between a text editor and a full-blown spreadsheet program. New DSV editors could emerge when the DSV format gains popularity.
At the point someone is using a different format, they’ll likely pick something explicitly structured. Like everything from JSON, to Yaml, to Protobufs, or hell even XML.
DSV seems like worst of both worlds. Not really structured, AND also not really viewable/editable by lowest common denominator tooling.
Right, the author skipped right over human-readable TSV files which play nicely with sed/awk/grep/sort pipelines, and are supported by all CSV parsers and spreadsheet software.
TSV is also my go-to when mucking around on the command line. Perfect for noodling with data before you have to put together an Excel file to show to management.
The author seems to ignore the fact that CSV got so popular because it is human readable. If anyone wanted a binary format there’s plenty of them - most better than this DSV.
Also, I’m on a mobile right now, so can’t verify that, but it seems the format is flawed. The reader decodes UTF8 strings after splitting the binary buffer by the delimiter, but I believe the delimiter may be a part of a UTF8 character.
Edit: just checked and there’s actually no chance that the delimiter the author chose would be part of UTF8 encoding of any other character than the delimiter itself
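For anyone wondering why that works out: in UTF-8, every byte of a multi-byte sequence has its high bit set (lead bytes are 0xC2–0xF4, continuation bytes 0x80–0xBF), so a byte below 0x80 like the unit separator 0x1F can only ever be that literal ASCII character. A quick sanity check:

```python
# None of the bytes inside multi-byte UTF-8 sequences fall in the
# ASCII range, so splitting the raw byte stream on 0x1f is safe.
sample = "€æ↦💡".encode("utf-8")
assert b"\x1f" not in sample
assert all(b >= 0x80 for b in sample)
```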
CSV's aren't really readable either though. They're "inspectable", but that's different. So if you want to read them you'll need to either use specific software, or do some preprocessing to align things properly etc ... in which case the extra step of performing a file-wide substitution of the record separator with newlines and unit separator with tabs or sth, isn't a much worse problem.
Also, it's a false inheritance relationship: Writer is not related to configuration. You can't make any other useful subclasses for DSV (OK, maybe DSVReader, but that's it). At the very least it should be the other way around: an abstract Writer operating on instance configuration attributes, and a DSVWriter defining those attributes.
Also, `self._buffer += chunk` is O(N^2). It starts to bite even for buffers as small as 100 bytes. It's OK for an example, but it's an issue for real code. The example at least buffers the incomplete record, not a whole read chunk (good!), but it does only one split at a time (bad).
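A rough sketch of the cheaper approach (a hypothetical reader, not the article's class): split each incoming chunk into all of its complete records at once and only carry the trailing partial record forward, so no byte gets copied over and over:

```python
RS = b"\x1e"  # record separator

class RecordBuffer:
    def __init__(self):
        self._partial = b""

    def feed(self, chunk: bytes) -> list[bytes]:
        data = self._partial + chunk
        # Everything before the last separator is a complete record;
        # the (possibly empty) tail is kept for the next chunk.
        *complete, self._partial = data.split(RS)
        return complete
```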
[Grumpy mode end]
Nevertheless article is very valuable and interesting to read. CSV gotchas are well described.
From what I've seen, the biggest problem isn't with the CSV standard (even though it has a lot of faults), but rather that a lot of software that utilizes CSVs is poorly tested.
I can't tell you how many times I've downloaded a CSV that didn't escape quotes or newlines correctly, or how many times Excel has failed to correctly parse a perfectly valid CSV due to some decades-old quirk.
I know that there are better formats that make these types of situations pop up less, but is a little bit of quality control too much to ask for? If you've tested your software to make sure that it can handle CSVs with line breaks, tabs, and both types of quotes, then you've seemingly done more testing than 90% of the software out there.
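A minimal sketch of the kind of round-trip check being described, using the stdlib csv module as a stand-in for whatever the software under test actually uses:

```python
import csv
import io

# The awkward cases: embedded newline, tab, both kinds of quotes,
# and an empty field.
tricky = [["line\nbreak", "tab\there", 'double "quotes"', "single 'quotes'", ""]]

buf = io.StringIO()
csv.writer(buf).writerows(tricky)
buf.seek(0)
assert list(csv.reader(buf)) == tricky
```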
On that note, the LibreOffice Calc team deserves major credit for how many different CSV formats it can handle. It's saved my bacon so many times when Excel wasn't up to the task.
I've read a comment here some years ago of someone discovering ASCII field delimiters and excited to use them. They then discovered that those characters are only used in three places: the ASCII spec, their own code, and the data from the first client where he tried to use this solution.
Any file format needs a well-specified escape strategy, because every file format is binary and may contain binary data. CSV is kinda bad not only because, in practice, there's no consensus escaping, but also because we don't communicate what the chosen escaping is!
I think a standard meta header like the following would do wonders to improve interchangeability, without having to communicate the serialization format out-of-band.
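Something along these lines, purely as an illustration (the key names are invented here, not from any existing spec): one declarative first line that a reader can use instead of sniffing.

```
#csv-meta: delimiter="," quote="\"" escape="doubling" encoding="utf-8" newline="crlf" header="true"
id,name,amount
1,"Smith, Jane",5000.24
```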
To me it's wild that the problem was solved back in the early 1960s (and really, well before that) but everyone just ignored it because of reasons and now we're stuck with a sub-optimal solution.
The only real benefit of CSV (other than that it is widely supported) is that it is easy for humans to read and write. The approach in this article solves the quoting problem, but also removes that benefit. If you have the power to move from CSV, surely JSON would be better if you need to keep the human readable/writable feature. And if you don't need it, there are other more featureful binary formats out there like parquet.
In many dialects there are. Usually you start the line with #.
Comments will happen. If your file format doesn’t allow comments, then people will make up an extension to allow it. This is true even for binary formats.
JSON is often good, but it also has potentially a lot of overhead, depending on how sparse the data is. For sparse data, it might be better. But for not sparse data, it will have the overhead of mentioning attribute names over and over again. Of course you could also have arrays in JSON, not writing attribute names over and over, but then you are basically back to a CSV inside the JSON file ...
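To make the overhead concrete with some made-up rows, the row-of-objects form repeats every key:

```json
[
  {"date": "2024-01-02", "event": "login",  "user": "jane"},
  {"date": "2024-01-02", "event": "logout", "user": "jane"},
  {"date": "2024-01-03", "event": "login",  "user": "joe"}
]
```

while the columns-declared-once form drops that repetition:

```json
{
  "columns": ["date", "event", "user"],
  "rows": [
    ["2024-01-02", "login",  "jane"],
    ["2024-01-02", "logout", "jane"],
    ["2024-01-03", "login",  "joe"]
  ]
}
```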
>Of course you could also have arrays in JSON, not writing attribute names over and over, but then you are basically back to a CSV inside the JSON file ...
You're confusing the concept of tabular data with the file format. If the most natural way to represent tabular data is through a 2D array, then so be it. The vast majority of people aren't complaining about the fact that they have to hardcode the meaning of "the last name is written into the fifth column", they are cursing that the fifth column has suddenly shifted into the sixth column, because the first name contained a comma.
I like the idea but this is non-standard enough to be just as hard as making a custom format.
In my experience, the best way to handle this is:
1) Use TSV (tab-separated) instead of CSV (most things that export CSV also export TSV). Strip LF characters while reading and assume newlines are CR.
2) If you have a stubborn data source that insists on CSV, convert it to TSV in a pre-process. This could be a separate step or part of your reader as you're reading in the file. That means there's a single place to handle the escaping nonsense, and you can tailor that to each data source.
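A minimal sketch of that pre-process, assuming the source file is at least sniffable (for a truly stubborn source you'd hard-code its quirks here instead); the names are made up:

```python
import csv

def csv_to_tsv(src_path: str, dst_path: str) -> None:
    with open(src_path, newline="", encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        # Guess the source's delimiter/quoting from a sample...
        dialect = csv.Sniffer().sniff(src.read(4096))
        src.seek(0)
        # ...then re-emit plain TSV, flattening anything that would
        # break a naive split-on-tab reader downstream.
        for row in csv.reader(src, dialect):
            clean = [f.replace("\t", " ").replace("\r", "").replace("\n", " ")
                     for f in row]
            dst.write("\t".join(clean) + "\n")
```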
> If we used 31 as a field delimiter and 30 instead of newlines, we solve every single edge case from above. Why? Because these are non-printing characters that should never appear in a text-stream data set.
I have in fact seen CSV files used as an interchange format for things that include non-plaintext fields. And I've seen nested CSV files.
I had to LOL a bit about this. I built a career that lasted over 30 years writing software that deciphered clients' attempts to produce sales data files in CSV format.
Many times they just couldn't seem to find the comma. Other times there were commas in the item description (unescaped). My favourite though was when files were edited collaboratively using a Mac, Windows and Linux machines - multiple line-end types FTW! Like I said, a long and somewhat inglorious career..
CSVs are a subset of DSVs. So I guess the idea is that using that specific subset is bad. But then again, it sort of does not matter too much which character is used for separation, at least if it is not a character that is frequently used as part of a cell value, because that would require a lot of escaping.
About a month ago, somebody posted a link to an interview with Brian Kernighan. About 6 minutes in, he talks about the difficulty of writing a decent CSV parser: https://www.youtube.com/watch?v=_QQ7k5sn2-o
When we get to CSVs I tell my Python students that while the CSV module does do a lot of nice things for you, CSVs are still a minefield and you really have to look at the file in a text editor first if you're not the one who created it.
If you strip the exchangeability from an exchange format, it is useless.
DSVs didn't work with Google Sheets, nor with vim, nor with Python – I assume this is the exhaustive list of software the author would have needed support from. The question, then: if no software understands the format, what's the point?
> I first learned about these ASCII delimiters while working with .fec [Federal Election Commission] files.
And then the author instantly chose a different delimiter. Two parties and already two standards. That should have been the final red flag for this proposal.
---
Aside: CSVs have so many problems with their data format that you have to always verify them anyway.
Germans write 5.000,24€ where an American would write $5,000.24. Date strings. URL-encoded strings. Numbers as strings. Floating-point numbers.
Solving the delimiter problem accomplishes nothing.
Every time I have to do any major work with CSVs, I re-lament this exact thing.
I think the only way this could ever become more widespread is to fix all the open source tooling so that it's eventually just supported everywhere - then keep evangelizing for... ~30 yrs.
Probably you should also register a new mime type and extension and make it a new thing - don't overload .CSV any further - but make the same tooling support it.
If time could be turned back, a good idea would be to make CSV mean CSV. Not semicolon separated values, not any other thing separated values, but only comma separated values. To not overload the name in the first place.
And I would rename the format to SSV and make semicolon the separator. Comma is a terrible choice, because it's used as the decimal separator in many countries around the world.
People here complaining this guy is suggesting a "new" standard: it's ASCII. It is already a standard, and probably a lot more sensible than others that followed.
I too have wondered why the hell we aren't using those special characters already, ever since I discovered their existence.
I have found that pandas is much better than the standard library's csv module for just importing random CSV files and automatically figuring out what you would want to do most of the time: detecting column headers, dealing with quoted strings, etc.
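Roughly what that looks like (file name made up): with sep=None and the python engine, pandas sniffs the delimiter instead of assuming a comma, and quoted fields, headers, etc. usually come out right.

```python
import pandas as pd

# dtype=str avoids surprise type coercion (leading zeros, dates);
# drop it if you actually want inferred numeric columns.
df = pd.read_csv("random_export.csv", sep=None, engine="python",
                 encoding="utf-8", dtype=str)
print(df.head())
```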
Are there any commonly-used fonts that display FS GS RS US as cool graphical characters? If I'm going to use them to structure text, I want them to be visible, and I want them to be clearly distinguishable from the text.
Had a similar challenge when writing alphareader, which is on GitHub. HN comments helped me to think in multi-byte separators, and one thing is sure: no matter which char you choose, it will appear in the wrong place at some point.
Nice! Regarding support from third-party software, perhaps it would be worth writing a specification for DSVs. I think that it could ease adoption by well-known software.
The number of times I have written a simple CSV parser to correctly handle quoted strings and the like is more than I have digits, when I could just pretend it's JSON.
Going to make a mental note to try this next time!
Original author writes:
> Quick aside: I first learned about these ASCII delimiters while working with .fec files. For whatever reason, the Federal Election Commission in the United States also decided that they needed to ditch the comma, but they landed on using ASCII character 28 which is supposed to be used as a file separator not a field separator. I have no idea why they picked that one when 31 was right there. Anyway, the FEC also has a tool called fs2comma.exe that turns it back into a CSV format, and a couple of years ago I filed a FOIA request and got the source code.
I can only speculate on this, but in Perl, for fake multidimensional arrays à la `$foo{$x,$y,$z}`[^1], Perl uses ASCII character 28 (U+001C INFORMATION SEPARATOR FOUR) as its default subscript separator. Perl borrowed this feature from AWK, which uses the same character by default for the same purpose. Based on Perl, I initially used that same character for that same purpose in a project or two. I cannot speculate on why Aho, Weinberger, and/or Kernighan chose that character. (On or before 1977.)

[^1]: Not to be confused with nested array (or hash) references in Perl, a truer form of multidimensional arrays: `$foo->[$x]->{$y}->[$z]`
Ah yes good old CSV. It's perfectly fine to use for data transfer and there are libraries for (probably) every language that handle it perfectly to spec.
The problem isn't "CSV". The problems come from:
- "excel doesn't like this CSV therefore it's not valid"
- "what do you mean the CSV I sent you is wrong? I used excel"
- "standard? What's a standard? I put info then a comma. That should be good enough for anyone"
CSV, when done right (i.e. following a standard) is a great format. It's human readable, less verbose than, say, JSON, and everybody understands it.
Just have to make sure business people (and less technical technical people) understand that CSV != Excel and vice-versa.
Question: I started with a deliberately convoluted PDF from which, after much effort, I filtered, sorted, reorganized and transferred the 18,000 useful lines to a CSV. These lines are simple, with dates, indicators and corresponding numbers.
The purpose is to statistically analyze the numbers for anomalies or any signs of deviation from expected randomness. I do this all in python3 with various libraries. It seems to be working, but...
What is a more efficient format than csv for this kind of operation?
Edit: I have also preserved all leading zeros by conversion to strings -- csv readers don't care much for leading zeros and simply disappear them, but quotes fix that.
The phones we use at {JOB} can be programmatically controlled by using their proprietary command language, which is just CSV and each command is ended with a new line (Because how else would you do it, packet boundary pfft?).
It's something I've never understood: why not use something more standard like SIP, or at least a more structured message format? Having to parse CSV across N different packet boundaries is a royal PITA.
See also: <https://news.ycombinator.com/item?id=39679753>