This article seems written by someone who never had to work with diverse data pipelines.
I work with large volumes of data from many different sources. I’m lucky to get them to send csv. Of course there are better formats, but all these sources aren’t able to agree on some successful format.
Csv that’s zipped is producible and readable by everyone. And that makes it more efficient.
I’ve been reading these “everyone is stupid, why don’t they just do the simple, right thing and I don’t understand the real reason for success” articles for so long it just makes me think the author doesn’t have a mentor or an editor with deep experience.
It’s like arguing how much mp3 sucks and how we should all just use flac.
The author means well, I’m sure. Maybe his next article will be about how airlines should speak Esperanto because English is such a flawed language. That’s a clever and unique observation.
Totally agree. His arguments are basically "performance!" (which is honestly not important to 99% of CSV export users) and "It's underspecified!" And while I can agree with the second, at least partly, in the real world the spec is essentially "Can you import it to Excel?". I'm amazed at how much programmers can discount "It already works pretty much everywhere" for the sake of more esoteric improvements.
All that said (and perhaps counter to what I said), I do hope "Unicode Separated Values" takes off. It's essentially just a slight tweak to CSV where the delimiters are special unicode characters, so you don't have to have complicated quoting/escaping logic, and it also supports multiple sheets (i.e. a workbook) in a single file.
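For anyone curious what that looks like in practice, here's a toy sketch (not the full USV spec, which also defines group/file separators and an escape mechanism), just to show the idea that the delimiters never collide with ordinary text:

```python
# Toy sketch of the USV idea: fields and records are split on Unicode
# control-picture characters, so ordinary text (commas, quotes, newlines)
# never needs escaping. The real spec also covers groups, files and escapes.
UNIT = "\u241F"    # ␟ unit (field) separator
RECORD = "\u241E"  # ␞ record (row) separator

def dumps(rows):
    return RECORD.join(UNIT.join(str(v) for v in row) for row in rows)

def loads(text):
    return [rec.split(UNIT) for rec in text.split(RECORD) if rec]

rows = [["name", "note"], ["Alice", 'likes "quotes", commas,\nand newlines']]
assert loads(dumps(rows)) == rows
```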
> in the real world the spec is essentially "Can you import it to Excel?"
And the answer to that is always no. You will think it's yes because it works for you, but when you send it to someone who has a different Excel version or simply different regional settings, it won't work. The recipient will first have to figure out what dialect you used to export.
Oh absolutely. Don't forget about the giant mess Microsoft made in countries like the Netherlands, where Excel will refuse to automatically open comma-separated values files (you know, CSV), unless the separator is a semicolon — because someone in the past thought that was how the Dutch did it.
You want people to be able to open your nice and sensible CSV files in Excel? First figure out which arcane locale specific settings you need, then generate those monstrosities and annoy anyone who genuinely expected Unicode comma-separated values.
My solution was to just write a generic spreadsheet writer component in our API and have it generate CSV (normal, comma-separated, UTF-8), XLSX, or ODS. Anyone using Excel just grabs the XLSX output. Generating XLSX and ODS was just a matter of saving a minimal example file for each and figuring out where to write cells and rows. No extra library needed (both file formats are just zipped up XML files), and everybody is happy.
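Here's roughly what the "minimal example file" approach boils down to for the XLSX half, as a sketch rather than a full implementation (single sheet, every cell an inline string, no styles; the part names and namespaces come from the OOXML spec, everything else is illustrative):

```python
# Minimal "spreadsheet writer" sketch: one sheet, every cell an inline string.
# Real-world output needs more care (styles, shared strings for size, etc.),
# but it shows that "zipped up XML" really is all an XLSX is.
import zipfile
from xml.sax.saxutils import escape

CONTENT_TYPES = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">
  <Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>
  <Default Extension="xml" ContentType="application/xml"/>
  <Override PartName="/xl/workbook.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.main+xml"/>
  <Override PartName="/xl/worksheets/sheet1.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml"/>
</Types>"""

ROOT_RELS = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="xl/workbook.xml"/>
</Relationships>"""

WORKBOOK = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<workbook xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"
          xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships">
  <sheets><sheet name="Sheet1" sheetId="1" r:id="rId1"/></sheets>
</workbook>"""

WORKBOOK_RELS = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/worksheet" Target="worksheets/sheet1.xml"/>
</Relationships>"""

def write_xlsx(path, rows):
    cells = "".join(
        "<row>"
        + "".join(f'<c t="inlineStr"><is><t>{escape(str(v))}</t></is></c>' for v in row)
        + "</row>"
        for row in rows
    )
    sheet = ('<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
             '<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">'
             f"<sheetData>{cells}</sheetData></worksheet>")
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("[Content_Types].xml", CONTENT_TYPES)
        z.writestr("_rels/.rels", ROOT_RELS)
        z.writestr("xl/workbook.xml", WORKBOOK)
        z.writestr("xl/_rels/workbook.xml.rels", WORKBOOK_RELS)
        z.writestr("xl/worksheets/sheet1.xml", sheet)

write_xlsx("report.xlsx", [["id", "name"], ["001", "Ana"]])
```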
Many countries use the comma as decimal separator, and Microsoft in its infinite wisdom thinks that data interchange formats should follow regional formatting settings (that's unbelievably stupid; I'll never understand how such an enormous error not only came to be, but was never corrected). That makes the comma unusable as column separator in those countries for exchange of numerical data.
US users have the same problem in my experience. I've had multiple clients complain that the exports are broken, because Excel will only import them (which works perfectly) but won't open them directly. I thought it was their way of forcing Excel files.
iPhone & Google Drive: no issues.
"Always no"? What. Really not sure what you mean here when you agree that the possibility of it working exists.
I get the sentiment -- when I request data through FOIA, I will almost always request it as "an excel format" because I know that I'll at least be able to import it. CSV is much less of a guarantee and will have issues -- missing quotes, wrong delimiters, inconsistent column counts, things like that. So requesting "an excel format" implies "make the best effort to give me something that will load in my excel, but without asking what version of excel I have". Removes a fair amount of hassle, especially when it took months to get the data. It also means that if they fuck up the columns by doing a conversion, you have some means of telling them that the data is simply wrong, rather than the data is hard to work with. It does mean dealing with [0-9]+GB sized excel files sometimes, though.
That all said, I prefer to share CSV files. Haven't had much of a problem with it and I can guarantee some consistency from my side. Like, the people I share files with aren't going to know what the hell a parquet file is. A CSV though? Just double click it if you're less technical, or open it up in less if you can use a terminal. It usually compresses well, despite what the author wrote.
I've been amazed by how much better LibreOffice is at importing CSVs in a sane manner than Excel. Its CSV import prompt is nothing short of the gold standard and puts Excel to shame.
Also, even if the CSV format is completely valid, Excel will still find a way to misinterpret some of your cells in baffling ways, destroying the original data in the process if you don't notice it in time.
The root cause of a lot of problems is that Excel's CSV import is garbage.
Someone should write a simple utility modelled on LibreOffice's CSV import dialog that reads in a CSV file and outputs its content as an XLSX file with all the cells formatted as text. Being as how XLSX files are just XML text in a renamed ZIP file and CSV is a very simple format, such a project could be written over a weekend.
Network admins could then create a group policy to reassign the CSV file extension to open with the utility instead of Excel. I guess the utility could automatically open the generated XLSX in Excel as well.
This would fix so many data corruption issues across the whole world.
Microsoft themselves could even do this as a powertoy.
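As a rough sketch of such a utility (using openpyxl for brevity rather than hand-rolled XML, and with the delimiter and encoding hard-coded instead of prompted for, so purely illustrative):

```python
#!/usr/bin/env python3
# csv2xlsx.py -- sketch of the proposed utility: read a CSV, keep every value
# as literal text, and hand Excel an .xlsx instead of letting it guess types.
import csv
import sys
from openpyxl import Workbook

def csv_to_xlsx(csv_path, xlsx_path, delimiter=",", encoding="utf-8-sig"):
    wb = Workbook()
    ws = wb.active
    with open(csv_path, newline="", encoding=encoding) as f:
        for row in csv.reader(f, delimiter=delimiter):
            ws.append(row)                  # csv gives strings, so values stay strings
    for row in ws.iter_rows():
        for cell in row:
            cell.number_format = "@"        # mark as Text so Excel won't reinterpret on edit
    wb.save(xlsx_path)

if __name__ == "__main__":
    csv_to_xlsx(sys.argv[1], sys.argv[1] + ".xlsx")
```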
At this point I suspect Excel is as dangerous as PowerPoint to the quest for sharing and improving human knowledge, in that it gives the illusion of allowing people to analyze data but not the tools to make sure they get their data and model lined up correctly.
Oh oh oh, I have a story about a quick setup. Sent a csv file to someone in the same org. Guy said that it was not opening. Wanted me to come to their office to see. I told them that IT support should fix it, since I can't, and every machine on my OU could read it. I was the bad guy. Yeah, quick setup my ass. Users can't be arsed to understand that their setup isn't properly configured.
Yeah, I definitely prefer CSV over Excel. Excel messes with your data. CSV is what it is. Sure, you may not know exactly what that is, and plenty of times I've had to figure out manually how to handle a specific CSV export, but once I've figured that out, it will work. With Excel, there's always the risk that a new value for a string field could also be interpreted as a date.
I am not in that universe, on the contrary I try to stay as far away as possible. And I agree with you. However I think the everything-done-in-OLD-Excel universe is worse. Some people will be terminally stuck in Excel but at least they can use new Excel capabilities instead of being stuck with the Excel of 20 years ago.
So why remain stuck importing CSVs into Excel when you can use Power Query to import Parquet? Why remain stuck using VBA in Excel when you can use Python in Excel?
I do not think an Excel user can be convinced to move to things like Jupyter, R, databases, etc. since they won't even make the jump to Access but maybe they can be convinced to use modern features of Excel.
The answer is "always no" because the question is inherently underspecified, precisely because importing things into Excel is more complicated and context-dependent than it appears on the surface.
No you don’t. It’s a holdover from when files were on tapes. The logic is all inverted too. Record separators were at the beginning of a record. Group and unit separators could then be nested. You really needed a spec for whatever document you were trying to make.
You need a "spec" just the same for a CSV: does it have headers, is there a consistent column ordering, etc. Control characters work the exact same as commas and newlines, but you don't have to worry about escaping them because (unless your data is binary or recursive) the control characters won't show up in the data!
The POSIX spec defines portable and non-portable ASCII characters, prudently placing the separators in the non-portable set. In order to nest tables, base64-encode (or use whatever portable encoding) the inner table into the field. This works better, and is easier and less error-prone, than any escaping strategy (a quick sketch follows below).
Regarding visibility in editors, if you are nesting tables I don't think you care too much about manual editing, but if you do, it is easy to set up vim/emacs/vscode to display the ASCII separators. I am being told even notepad++ can display those, there are no excuses.
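A small sketch of the separator-plus-base64 scheme described above, assuming RS/US as the record and unit separators:

```python
# Sketch of the scheme: ASCII RS/US as separators, and any nested table
# base64-encoded into a single field so the separators never need escaping.
import base64

RS, US = "\x1e", "\x1f"   # record separator, unit separator

def encode(rows):
    return RS.join(US.join(fields) for fields in rows)

def decode(blob):
    return [rec.split(US) for rec in blob.split(RS) if rec]

inner = encode([["a", "1"], ["b", "2"]])
outer = encode([
    ["name", "details"],
    ["widget", base64.b64encode(inner.encode()).decode()],  # nested table is just a field
])

rows = decode(outer)
nested = decode(base64.b64decode(rows[1][1]).decode())
assert nested == [["a", "1"], ["b", "2"]]
```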
Not seeing them in the wild is good, it means they will work when you use them. The more they get used the more often you'll find they crop up in the text fields you're trying to separate and the whole csv escaping nightmare will continue.
ASCII has characters for unit, record, group and file separator. And some days ago there was a story here about using the unicode printable representation of these for an editor-friendly format.
There are characters from ASCII for delimiting records, which are underused because they cause confusion about whether they should be represented as a state change like a backspace character, or as a glyph. See also: "nobody can agree on the line ending character sequence".
The USV proposal uses additional codepoints introduced in Unicode for the representation of the record delimiters, so they will always look and edit like character glyphs, and nobody is using them for some other purpose. The standardized look of these glyphs is unappealing, and they aren't easy to type, but it's fixable with a font and some editing functions.
There's a lot of tooling which uses them for their intended purpose, which is to represent the C0 control characters in a font, so they can be printed when they appear in a document. Your editor is probably one of those.
Which is why I consider USV a terrible idea. If I see ␇ in a file, I don't want to be constantly wondering if it's "mention ␇" or "use ␇ to represent \x07". That's why the control pictures block exists: to provide pictographs of invisible control characters. Not to make a cutesy hack "look! it's a picture of a control character pretending to be what it isn't!!" format.
I agree about USV, it creates confusion where none needs to exist. For personal use, though, it is not that bad to receive a USV: it should be postmarked ".usv" and in any case if you suspect shenanigans you can `grep` for the offending (literally!) unicode characters and `tr` them into proper ASCII separators. Now, if there is nesting in the USV, I give up.
I share the lament: the whole table issue was solved before it became a problem. POSIX divides ASCII into portable and non-portable characters; only portable characters are allowed in the fields and separators are non-portable. If you need nesting, use a portable encoding of the inner table. This scheme repeats indefinitely without escaping hell or exceptions, preventing tons of errors and headache.
Visibility is such a bizarre complaint. Text editors already handle control characters: they handle tabs, they handle newlines, it is not a tremendous, earth-shattering feature request to make them handle separators gracefully.
Hell, there are ASCII characters specifically for delimiters. 0x1C to 0x1F are respectively defined as file, group, record, and unit separators. Unicode naturally inherits them all.
Gasp. Big news. I do not recall ever seeing this, so I wonder if $JOB is running some hilariously outdated version for compatibility with a load bearing VBA script.
For those unfamiliar with the atrocity that is HL7v2, the format is essentially CSV, but with the record separator set to a lone CR, and the field separator usually set to |. Usually, because the format lets the consumer of the format redefine it, for whatever reasons. (The first use of the field separator determines what character it will be. Thankfully, the first use is in a fixed spot, so it's determinable, but still. Oh, but we don't know the character encoding until like the 18th field in … and it doesn't necessarily have to be an ASCII superset. So I have no idea what an HL7v2 message in a non-ASCII superset even looks like, or how a parser is even supposed to reasonably parse such a thing. I presume attempt a decoding in all possible decodings, and then see which one matches the embedded character set, and pray nobody can create a polyglot?)
There's also further separators, delimiting within a field.
It also has its own escape sequences, to deal with the above.
… and it is what carries an unfortunate amount of medical data, and is generally how providers interoperate, despite the existence of more civilized standards like FHIR
I've unfortunately had to bless my brain with much more of this standard this week, for some reason.
Did I mention that subcomponents (if you look at it like a CSV, cells are further subdivided into components & subcomponents, so subcomponents are sort of where we hit "cell text", if you want to keep going with that broken analogy) — contain escape sequences, so that you can have things like the field separator. Normal stuff, so far. The escape sequences also include highlighting, binary blobs, and a subset of roff.
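For the morbidly curious, here's a rough sketch of how the separators get discovered from the MSH segment. It is heavily simplified (no segments, repetition, escapes, or the character-set problem described above), just enough to show why the format is "CSV, but the file tells you its own delimiters":

```python
# Rough sketch: extract the delimiter set an HL7v2 message declares about itself.
def hl7_separators(message: str):
    assert message.startswith("MSH"), "HL7v2 messages start with an MSH segment"
    field_sep = message[3]                    # the char right after "MSH" defines the field separator
    enc = message.split(field_sep, 2)[1]      # MSH-2 "encoding characters", conventionally ^~\&
    return {
        "field": field_sep,
        "component": enc[0],
        "repetition": enc[1],
        "escape": enc[2],
        "subcomponent": enc[3],
    }

msg = r"MSH|^~\&|SENDER|FAC|RECEIVER|FAC|202401011200||ADT^A01|123|P|2.5"
print(hl7_separators(msg))
# {'field': '|', 'component': '^', 'repetition': '~', 'escape': '\\', 'subcomponent': '&'}
```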
My worst nightmare was a semicolon-delimited file. Where one of the columns had hand-typed street names - without quotes.. so "WELLS" was often "WE;;S".
Since it was the only column like that, the # of columns to the left of the annoying column and the # on the right would always stay the same. So it was pretty easy to clean.
I can't remember the last time I, or anyone I've ever worked with for that matter, ever typed up a CSV from scratch. The whole point of USV is that the delimiters can't normally be typed so you don't have to worry about escaping.
USV supports displayable delimiters (see https://github.com/SixArm/usv), so for the much more common case of editing an existing CSV in a text editor, you can just copy and paste.
Every one of us was a beginner at some point. The first time we came across CSV format we likely typed it in notepad by hand. A lot of issues with CSVs are also sometimes troubleshot by hand -- by manually fixing a quote or a comma.
There is value in the ability to do this level of editing and troubleshooting.
> The first time we came across CSV format we likely typed it in notepad by hand.
Again, I'm not saying CSVs aren't edited by hand in a text editor, I'm saying they aren't created from scratch in a text editor, even by beginners. USVs are easy to edit in a text editor, too, and I tried viewing and editing USVs with a couple different fonts and had no problems.
Nobody can type up a GIF image, or Word document in a Notepad, yet files of both those formats exist. The answer obviously is tooling.
If a format with sane separators was common, so would editors that could edit that format be.
If a format with sane separators was common, so would editors that could edit that format be
Sure, but that's a hypothetical future editor, not something that currently exists.
Edit to add: I also disagree with "sane" in that context. New separators won't solve anything. You'll always need escaping/encoding/encapsulation; get that right and everything else follows. JSON is comma-separated and does just fine.
I've valued the virtue of CSVs being readable by any text editor known to man, and I've occasionally edited them by hand. The pure simplicity of reading and typing commas trumps any value provided by more esoteric configurations.
As for escaping, that's for the subsequent programmers (which could also be me) to figure out. If it is me, I'll deal with it because it keeps things simple.
> Special Unicode Characters(tm) can't be typed by anyone on any keyboard and readable by noone.
While I'm not a fan of USV, I do believe it is type-able on both macOS and Linux¹. The IME's character picker in both OSes contains all of the necessary characters, I think. (I use "" in commit messages, occasionally. That was a typed , not copy/pasted from a Google query or such.)
It would be quite tedious, I do admit; reasonably typed by someone, probably not.
I don't normally type out CSVs by hand either, though.
(¹I don't know about Windows as I simply haven't cared about it in years. For all I know, they've grown an IME in the intervening decade.)
Even if the csv isn’t being typed out by hand, when importing into Excel and the delimiter needs to be manually entered, because it isn’t one of the common ones Excel has a radio button for… it is nice to be able to easily type it.
While I can see a convenience argument for the somewhat contrived case of typing an entire file out by hand, entering the character once for the import does not seem like a great bar.
> it is nice to be able to easily type it.
Again, that's where an IME is helpful; on the OSes I mention, typing "␞" is:
macOS: ⌘+^+Space, "record", <arrows to select>, Enter
Linux: Super+e, "record", Space, Space, <arrows>, Enter
The process is highly visual on both, so you're getting feedback about whether you're about to hit the right character, or not.
(And like, if you have the file, you can always ^C ^V the character, if say you don't know how to IME, or you don't know what the name of the char is, etc.…)
Would it be possible to just type the file using commas, semicolons, or pipes or something (whatever you happen to know you don’t have in your file) and then convert them using sed?
Any character within reason can certainly be entered by way of Character Map in Windows or its equivalent in Linux or MacOS, but if you're arguing that then you don't understand the crux of my argument: Convenience is a virtue.
There is value in the delimiter simply being a key on practically any keyboard in existence. Anything that involves something more complicated than just pushing a single button on a keyboard (this includes IMEs) is a non-starter, because convenience is a virtue.
> Anything that involves something more complicated than just pushing a single button on a keyboard (this includes IMEs)
My point is that this is merely a more stringent argument; it's now "on a keyboard, and cannot involve dead keys, etc." … which now excludes classic CSV, too, which requires two keys to enter a double quote. (Shift+')
Again, it does require more keys, and it is certainly not convenient, but n keys to me is still n keys. The real question is why one isn't using a proper tool to enter the data, and is instead encoding it by hand, which, again, even for a classic CSV, is basically something I've never done. (… because why would one?)
The fact that it is a character on the keyboard is exactly the problem, too. Any character a user can easily enter will definitely end up mixed into your data somewhere.
The IANA standard for TSV already disallows tabs inside fields, so you can skip writing any quoting logic (in principle). The MIME type is `text/tab-separated-values`.
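In practice "skip the quoting logic" just means the writer refuses anything it can't represent. A minimal sketch of that contract:

```python
# Sketch of the "no quoting" contract text/tab-separated-values implies:
# the writer simply rejects fields that would break the format.
def write_tsv(path, rows):
    with open(path, "w", encoding="utf-8", newline="") as f:
        for row in rows:
            for field in row:
                if "\t" in field or "\n" in field or "\r" in field:
                    raise ValueError(f"field not representable in TSV: {field!r}")
            f.write("\t".join(row) + "\n")
```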
So true. Working with imports/exports in CSV from ERP software, one can't imagine how often "Oh, this import doesn't work. I'll just fix the CSV file" occurs. Try that with some compressed, "esoteric" file, or even XML, and users will break it.
Besides all the downsides CSV has, as soon as it's not only machine-machine communication and a human is involved, CSV is just simple enough.
Check out Polars in Python if you want some CSV performance lol. I recently got a 5 million row CSV from a 3rd party and could manipulate columns (filtering, sorting, grouping) with each action taking less than a second. It's an incredible tool.
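For anyone who hasn't tried it, the kind of pipeline being described looks roughly like this in recent Polars versions (column names are made up for the example; the lazy scan is what keeps it fast on files that size):

```python
import polars as pl

result = (
    pl.scan_csv("big_export.csv")            # lazy scan: rows aren't all loaded up front
      .filter(pl.col("status") == "active")  # filtering
      .group_by("region")                    # grouping
      .agg(pl.col("amount").sum().alias("total"))
      .sort("total", descending=True)        # sorting
      .collect()
)
print(result)
```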
To me this criticism feels excessive. It feels like the author is describing their frustrations with internal usage of CSVs - there's no mention of customers and non-technical stakeholders at all. I think it goes without saying that Parquet files and other non-human-readable formats are a nonstarter when working with external stakeholders and the last paragraph makes that clear - if the end-user wants CSV, give them CSV.
I also think we shouldn't blindly dismiss the performance drawbacks of CSV when working with data pipelines. At even modest scales it becomes hard to work with very large CSVs because the data often doesn't fit into memory, a problem easily solved by Parquet and other formats assuming you only need a subset.
I deal with gig size csvs all the time and don’t have any performance issues. These aren’t huge files, but decent sized. And most are just a few megs and only thousands to millions of records.
Csv is not very performant, but it doesn’t matter for these use cases.
I’ll also add that I’m not working with the csvs, they are just I/o. So any memory issues are handled by the load process. I certainly don’t use csvs for my internal processes. Just for when someone sends me data or I have to send it back to them.
That being said my workstation is pretty big and can handle 10s of gigs of csv before I care. But that’s usually just for dev or debugging and anything that sticks around will be working with data in some proper store (usually parquet distributed across nodes).
That may be your experience, but certainly not a universal experience (and apparently not the author's, either). In my experience, it's pretty easy to have CSVs (or Parquet files, or whatever) that are tens or hundreds of GBs in size. The space savings from a more modern file format are significant, as is the convenience of being able to specify and download/open only a subset of rows or columns over the network. Most of us don't have workstations with 50GB of RAM, because it's far more cost-effective to use a Cloud VM if you only occasionally need that much memory.
That being said, the real point here is that folks blindly use CSVs for internal-facing processes even though there's no particular reason to, and they have plenty of drawbacks. If you're just building some kind of ETL pipeline why wouldn't you use Parquet? It isn't as if you're opening stuff in Excel.
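As a hedged example of the column-subset point (the path and column names here are hypothetical):

```python
import pandas as pd

# With Parquet, pandas (via pyarrow) reads only the requested columns
# instead of scanning the whole file the way a CSV reader must.
df = pd.read_parquet("events.parquet", columns=["user_id", "timestamp"])
```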
The author is giving universal advice to all friends.
If the title was “friends in certain circumstances shouldn’t let friends in certain circumstances export to csv,” that would be a different story.
Even a laptop with 8gb ram can open a gig csv.
Of course the internals of your etl will use some efficient data structure, but you’d still want to export as csv at some point to get data to other people. Or you want your friends to export csv to get data to you.
If I run a simulation workload it's pretty easy to generate gigabytes of data per second. CSV encoding adds a huge overhead space and time wise, so saving trajectories to disc for later analysis can easily become the bottleneck.
I have had many other situations where CSV was the bottleneck.
I still would default to CSV first in many situations because it's robust and easily inspected by hand.
My dev workstation has 96gb. I don’t work with massive data files so I’ve never really hit my limit. I think the biggest raw data file I’ve opened was 10-20gb.
I very much agree with this. For an integration where you have control over both ends of the pipeline, CSV is not optimal unless there's existing work to build on, and even then it's a legacy choice.
Parquet and Avro are widely supported in backend languages and also in data analysis. I don't think the article is talking about exported-like-a-jpeg, but instead exported-like-a-daily-report-run: the data scientist doing the exporting is probably using R or Pandas instead of Excel, and can reasonably be expected to read https://arrow.apache.org/docs/r/reference/read_parquet.html.
I just responded to that. It isn't the parser that's buggy. The parser handles the quotes just fine. If it didn't, that would be a serious bug in the `csv` crate that oodles of users would run into all the time. There would be forks over it if it had persisted for that long.
The problem is that `xsv table` doesn't print the parsed contents. It just prints CSV data, but with tabs, and then those tabs are expanded to spaces for alignment. Arguably it ought to print the parsed contents, i.e., with quotes unescaped.
It almost looks like it's doing that because the quotes are removed in one case, but that's only because the CSV writer knows when it doesn't need to write quotes.
Ok this might sound stupid, and a bit unrelated, but you make so many great tools that I can't help but ask. How do you start planning and creating a tool that needs to "follow standards" (in this case I know CSV is underspecified, but still!)? Is it by iteration, or do you try to set and build a baseline for all the features a certain tool needs? Or do you just try to go for modularity from the get-go, even if the problem space is "smaller" for stuff like csv for example.
I don't really have a simple answer unfortunately. Part of it is just following my nose with respect to what I'm interested in. So there's an element of intrinsic motivation. The other part is trying to put myself in the shoes of users to understand what they want/need. I typically do a lot of background research and reading to try and understand what others have done before me and what the pain points are. And iteration plays a role too. The `csv` crate went through a lot of iteration for example.
I think that's about it. It's hard to answer this question in a fully general way unfortunately. But if you want to get into it, maybe you can be the first person who opens a thread on r/burntsushi haha.
Why is this getting downvoted? They're right that the criticism is pretty excessive:
"Maybe his next article will be about how airlines should speak Esperanto because English is such a flawed language. That’s a clever and unique observation."
Not really. You can say it's more regular but that's because it sees barely any actual use; if it ever gained popularity it wouldn't stay regular. (And given that pilots speak in set phrases anyway, irregularity isn't really an issue). It's not a great language by any stretch, it's an awkward mishmash of four European languages; sure it sounds kind of nice in an Italianate way, but if that's what you want then why not just speak Italian?
> if it ever gained popularity it wouldn't stay regular
This. Utility and purity always pull in opposite directions.
I see those boutique little (programming) languages written by amateur language designers with exotic type systems or “everything is an X” philosophies, and my reaction is to assume that they are useless for anything past toys and experiments.
I know useful language features have been born in that world and then eventually bolted onto mutt languages like Java and Python, but that suits me just fine.
It would stay regular if there was a strict governing body for it that wasn't a Webster-style "whatever people are speaking is the new definition of correct".
English really is a disaster of a language. There was a(nother) great XKCD about it just a few days ago. https://xkcd.com/2907/
They kind of do. In my language there exists a central "governing" body that decides what is correct, and some "incorrect" regionalisms are disappearing because of it.
When that happens, it’s likely more about politics and social status than the governing body.
Ie, the governing body decrees that the regionalisms from the dominant region is the definitive version of the language. But it might be considered cool to speak that way even without a governing body.
Are you referring to French? Because, if anything, French in France has an insane quantity of slang and an extremely emergent vocabulary. Much more so than any English-speaking country I can think of. Quebec isn't really influenced by the Académie française yet has a much more "correct" usage of the language generally speaking.
Maybe it's a totally different language but still it goes to show that even a very prestigious central authority doesn't make a language better or less prone to diverge. Regardless of the reason, French is evolving much more quickly than English.
But maybe that's beside the point. If someone wants to "learn French" they can learn by the official rules and communicate with other french-speaking people regardless of how many slang variants exist in France. They can also probably watch French television and understand it.
The point of Esperanto was to make it easier to learn. French is regular, but extremely complicated. English is complicated and has a million special cases. Both languages are hard enough to master that society starts to judge a person's intelligence by how well they know the rules and special cases.
Correction: it's a disaster of at least 4 languages, and this is probably why English is so hard to dethrone: it has no strict ownership, so everyone is kind of equal in speaking it incorrectly.
Sometimes lack of rigidity is actually a feature that allows things to sort of work that would be politically impossible if things had to be specified formally before being used.
I'm pretty sure the reason English is hard to dethrone is because Britain ~helped~ forced the various colonies to join world commerce using English, so they started teaching it to entire generations as the national second-language, and then because the USA dominated world commerce after that in a sort of "we'll let you in on the game if you speak our language and use our money" sort of way.
> Esperanto is a superior language to English. And English has many flaws.
From at least one perspective, English is superior. That perspective is that you can actually use it in almost any modern situation because it has been tried and tested globally.
Not saying csv doesn’t have its issues, but I don’t think the author made a convincing argument.
A lot of the issues the author brought up didn’t sound that bad and/or it sounds like he never looked at the source data first.
If you’re doing work with large datasets, I think it’s a good practice to at least go and look at the source data briefly to see what to expect.
This will give you a good idea of the format it outputs, data types, some domain context, etc. or some combination thereof and I don’t think it even takes that long.
Also, it reminds me of the arguments against excel in a way. Most people know what a csv is, more or less how to open it, and don’t need too much context when discussing the file. Someone will quickly understand if you tell them the file isn’t delimited properly right away. These are pros that shouldn’t be taken for granted.
Again, I’m not saying csv doesnt have issues or that there aren’t better alternatives, simply that I didn’t find this particular argument convincing.
IME most people don't know that using Excel to open and save a csv will silently mangle data. In our application leading zeros are significant, so we constantly get screwed by people trying to do quick manual edits and breaking the data. If we're lucky it breaks so badly the import fails. It's worse when the mangling results in structurally valid but wrong data.
I think what you’re saying is accurate, but it’s also important to be practical about stuff.
These are pretty well-known Excel limitations by now.
And really, anyone using excel who is somehow not aware of that limitation is probably not someone yet experienced enough to be working on a larger and/or mission critical dataset to begin with.
Are there exceptions? Sure. You might be tempted to cite the example of the incident where this happened to some biologists not too long ago, but mistakes happen. I've seen people make mistakes building Android or iPhone apps using the right (TM) tools.
What is the exact number of mistakes where you make the decision to jump to a new format?
I’m not sure. This does happen eventually, but the author didn’t make a strong case here imo.
> all these sources aren’t able to agree on some successful format.
But the same is true for csv: they are not readable by everyone, since you don't always know how to read them; there is not enough info for that.
Also, it's not a good reflection on "deep experience" if it leads to reflexive defense of common stupid things people do, with wrong analogies (e.g., flac is less efficient, so it's more like csv).
My entire experience with software development has been me bellyaching about how stupidly things are set up, why don't we do it this way instead, etc... only to actually set about working on fixing these things and realizing it's either way harder than I thought, it makes more sense than I thought, or it just plumb isn't worth the effort.
The effervescent suggestion of a brighter, more logical, even obvious solution is often a clear indicator of domain inexperience or ignorance.
And it's surprisingly hard for etl departments to export to csv correctly. I mean, if they can't do csv they can't do anything more complicated for sure.
> This article seems written by someone who never had to work with diverse data pipelines
I think that's a little unfair, it sounds like the author does have a decent amount of experience working with real-world CSV files:
> I remember spending hours trying to identify an issue that caused columns to "shift" around 80% into a 40GB CSV file, and let me tell you, that just isn't fun.
> Csv that’s zipped is producible and readable by everyone. And that makes is more efficient.
If only CSV were CSV, as opposed to some form that's 80-90% CSV by line count with enough oddities to really make the parser ugly and hard to read.
See, the sweet spot isn't something completely unstructured, because then you feel justified in throwing up (your hands) and declaring defeat. The sweet spot is a file that's sufficiently close to being structured you can almost parse it nicely, but has enough bad lines you can't go in and fix them all by hand in a reasonable timeframe, and you can't tell upstream to get their shit in order because it's only a few lines.
> But I’d say the error rate is actually very low, maybe .1-1% and nowhere near 10-20% of data being messed up.
The thing with CSV-related issues is it's usually not a fixed percentage but instead depends on the data.
I work in the VoIP industry so I deal with the FreePBX Asterisk GUI quite often, and it uses CSV as its bulk import/export format. This mostly makes sense as the data is almost entirely (with one notable exception) simple tables that fit nicely in to rows and columns. The issue I run in to most commonly with this is that it doesn't quote numerical fields, and as a result the fields for extension numbers and voicemail PINs can be problematic when they contain one or more leading zeroes. All of the major spreadsheet software I've used defaults to dropping leading zeroes from columns they've decided contain numerical values, and this results in broken data in these cases. It's of course relatively rare for users to choose a voicemail PIN starting with zero and even more rare for extensions to be set up with a leading zero, but both happen regularly enough that I need to remember to manually configure those columns as "Text" when opening an extension export CSV.
Either way, how often the problem occurs depends entirely on the data being sent through this pipeline. Most sites will never see the problem on the extension column, but one of my sites where the company liked a user's extension to be the last four of their DID when they were initially set up 20 years ago has a dozen of them in a row.
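On the programmatic side, the usual workaround is the same "force it to text" move, e.g. with pandas (column names here are made up for the example):

```python
import pandas as pd

# Tell the reader up front that these columns are strings, not numbers,
# so leading zeroes in extensions and PINs survive the round trip.
df = pd.read_csv(
    "extension_export.csv",
    dtype={"extension": str, "voicemail_pin": str},
)
```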
Did you actually read the conclusion at the end of the article?
"Of course, we can't conclude that you should never export to CSV. If your users are just going to try to find the quickest way to turn your data into CSV anyway, there's no reason why you shouldn't deliver that. But it's a super fragile file format to use for anything serious like data integration between systems, so stick with something that at the very least has a schema and is more efficient to work with."
It's a premature optimization issue. If you don't have special requirements like IO throughput, or mission critical data accuracy guarantees, be biased towards picking the format that anyone can easily open in a spreadsheet.
"Of course there are better formats, but all these sources aren’t able to agree on some successful format."
It's the same with csv. They come in all kinds of formats because nobody agreed on the standard. Comma separated, semicolon separated, pipe separated, escaped, not escaped.
Every time I have to deal with csv I first have to figure out how to parse it in code.
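Which in practice often looks something like this: a best-effort guess with csv.Sniffer on a hypothetical file, and hope it guessed right.

```python
import csv

# Guess the dialect (delimiter, quoting) from a sample, then parse with it.
with open("mystery_export.csv", newline="", encoding="utf-8-sig") as f:
    sample = f.read(64 * 1024)
    dialect = csv.Sniffer().sniff(sample, delimiters=",;|\t")
    f.seek(0)
    rows = list(csv.reader(f, dialect))
print(dialect.delimiter, len(rows))
```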
So I think the author is right, we must agree on a better format because that is what friends do.
You are also right because it's an illusion to think that this is going to change anytime soon. But who knows..
Every integration I’ve ever worked on has started off with high ideas of APIs and nice data standards. And has eventually devolved into “can we just put a CSV file on an FTP site…”. With the inevitable, “it’s not really CSV…”