
This article seems to be written by someone who has never had to work with diverse data pipelines.

I work with large volumes of data from many different sources. I’m lucky if I can get them to send CSV. Of course there are better formats, but all these sources aren’t able to agree on any single better one.

CSV that’s zipped is producible and readable by everyone. And that makes it more efficient.

I’ve been reading these “everyone is stupid, why don’t they just do the simple, right thing; I don’t understand the real reason for its success” articles for so long that it just makes me think the author doesn’t have a mentor or an editor with deep experience.

It’s like arguing how much mp3 sucks and how we should all just use flac.

The author means well, I’m sure. Maybe his next article will be about how airlines should speak Esperanto because English is such a flawed language. That’s a clever and unique observation.




Totally agree. His arguments are basically "performance!" (which is honestly not important to 99% of CSV export users) and "it's underspecified!" And while I can agree with the second, at least partly, in the real world the spec is essentially "Can you import it to Excel?". I'm amazed at how much programmers will discount "it already works pretty much everywhere" for the sake of more esoteric improvements.

All that said (and perhaps counter to what I said), I do hope "Unicode Separated Values" takes off. It's essentially just a slight tweak to CSV where the delimiters are special unicode characters, so you don't have to have complicated quoting/escaping logic, and it also supports multiple sheets (i.e. a workbook) in a single file.
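For a sense of how little machinery that needs, here's a rough Python sketch, assuming the delimiters from the SixArm draft linked downthread (U+241F between fields, U+241E between records) — my own reading of the draft, not an official implementation:

  # Minimal USV-style round trip; no quoting or escaping logic needed,
  # because the delimiters never occur in ordinary text.
  UNIT, RECORD = "\u241F", "\u241E"

  def dump_usv(rows):
      return RECORD.join(UNIT.join(row) for row in rows)

  def load_usv(text):
      return [rec.split(UNIT) for rec in text.split(RECORD) if rec]

  rows = [["name", "notes"], ["Ada", 'says "hi", twice']]
  assert load_usv(dump_usv(rows)) == rows   # commas and quotes survive untouched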


> in the real world the spec is essentially "Can you import it to Excel?"

And the answer to that is always no. You will think it's yes because it works for you, but when you send it to someone who has a different Excel version or simply different regional settings, it won't work. The recipient will first have to figure out what dialect you used to export.


Oh absolutely. Don't forget about the giant mess Microsoft made in countries like the Netherlands, where Excel will refuse to automatically open comma-separated values files (you know, CSV), unless the separator is a semicolon — because someone in the past thought that was how the Dutch did it.

You want people to be able to open your nice and sensible CSV files in Excel? First figure out which arcane locale specific settings you need, then generate those monstrosities and annoy anyone who genuinely expected Unicode comma-separated values.

My solution was to just write a generic spreadsheet writer component in our API and have it generate CSV (normal, comma-separated, UTF-8), XLSX, or ODS. Anyone using Excel just grabs the XLSX output. Generating XLSX and ODS was just a matter of saving a minimal example file for each and figuring out where to write cells and rows. No extra library needed (both file formats are just zipped up XML files), and everybody is happy.


Many countries use the comma as decimal separator, and Microsoft in its infinite wisdom thinks that data interchange formats should follow regional formatting settings (that's unbelievably stupid; I'll never understand how such an enormous error not only came to be, but was never corrected). That makes the comma unusable as column separator in those countries for exchange of numerical data.


US users have the same problem in my experience. I've had multiple clients complain that the exports are broken, because Excel will only import them (perfectly) but won't open them directly. I thought it was their way of forcing Excel files. iPhone & Google Drive: no issues.


"Always no"? What. Really not sure what you mean here when you agree that the possibility of it working exists.

I get the sentiment -- when I request data through FOIA, I will almost always request it as "an excel format" because I know that I'll at least be able to import it. CSV is much less of a guarantee and will have issues -- missing quotes, wrong delimiters, inconsistent column counts, things like that. So requesting "an excel format" implies "make the best effort to give me something that will load in my excel, but without asking what version of excel I have". Removes a fair amount of hassle, especially when it took months to get the data. It also means that if they fuck up the columns by doing a conversion, you have some means of telling them that the data is simply wrong, rather than the data is hard to work with. It does mean dealing with [0-9]+GB sized excel files sometimes, though.

That all said, I prefer to share CSV files. Haven't had much of a problem with it and I can guarantee some consistency from my side. Like, the people I share files with aren't going to know what the hell a parquet file is. A CSV though? Just double click it if you're less technical, or open it up in less if you can use a terminal. It usually compresses well, despite what the author wrote.


   > when I request data through FOIA
Fascinating. Can you share any details? Did you ever think to share some of your interesting finds here on HN as a submission?



That was a really fascinating story! Thanks for sharing.


Wow, what a treasure trove you’ve got there! I’ve subscribed via RSS, in case anything else comes down the pipe :)


Thank you! Hopefully by the end of the year, but these things can get... strange.


I've been amazed by how much better LibreOffice is at importing CSVs in a sane manner than Excel. Its CSV import prompt is nothing short of the gold standard and puts Excel to shame.

Also, even if the CSV format is completely valid, Excel will still find a way to misinterpret some of your cells in baffling ways, destroying the original data in the process if you don't notice it in time.


Yeah, I can complain about LO in many ways, but the way it opens CSV is much better than Excel. It was developed by a dev, that's for sure.


The root cause of a lot of problems is that Excel's CSV import is garbage.

Someone should write a simple utility modelled on LibreOffice's CSV import dialog that reads in a CSV file and outputs its content as an XLSX file with all the cells formatted as text. Seeing as XLSX files are just XML text in a renamed ZIP file and CSV is a very simple format, such a project could be written over a weekend.
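As a rough sketch of the idea (not a polished tool): it leans on a hand-saved single-sheet template.xlsx, which is my own assumption here, along the lines of the sibling comment's template approach, and Excel's pickiness may still require tweaks. Every cell is written as an inline string so nothing gets reinterpreted:

  import csv, zipfile
  from xml.sax.saxutils import escape

  def csv_to_xlsx(csv_path, template_path, out_path, delimiter=","):
      with open(csv_path, newline="", encoding="utf-8") as f:
          rows = list(csv.reader(f, delimiter=delimiter))

      # Build sheet1.xml with every cell as an inline string (t="inlineStr"),
      # so Excel treats the values as text rather than guessing types.
      xml_rows = []
      for r, row in enumerate(rows, start=1):
          cells = "".join(
              '<c t="inlineStr"><is><t xml:space="preserve">%s</t></is></c>'
              % escape(cell) for cell in row)
          xml_rows.append('<row r="%d">%s</row>' % (r, cells))
      sheet = ('<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
               '<worksheet xmlns="http://schemas.openxmlformats.org/'
               'spreadsheetml/2006/main"><sheetData>%s</sheetData></worksheet>'
               % "".join(xml_rows))

      # Copy every entry from the template zip except the sheet being replaced.
      with zipfile.ZipFile(template_path) as src, \
           zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as dst:
          for item in src.infolist():
              if item.filename != "xl/worksheets/sheet1.xml":
                  dst.writestr(item, src.read(item.filename))
          dst.writestr("xl/worksheets/sheet1.xml", sheet)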

Network admins could then create a group policy to reassign the CSV file extension to open with the utility instead of Excel. I guess the utility could automatically open the generated XLSX in Excel as well.

This would fix so many data corruption issues across the whole world.

Microsoft themselves could even do this as a powertoy.


Heck yes, LibreOffice shines when it comes to that. Excel has always thrown me a lot of curveballs.


Yes, the LibreOffice CSV import dialog, showing you a what-if preview of what you'd get as you play with different settings, is simply amazing.


At this point I suspect Excel is as dangerous as PowerPoint to the quest for sharing and improving human knowledge, in that it gives the illusion of allowing people to analyze data but not the tools to make sure they get their data and model lined up correctly.


otoh it could be instrumental precisely because it is flexible and doesn't require a lot of forethought


>> in the real world the spec is essentially "Can you import it to Excel?"

> And the answer to that is always no.

Sorry, you are wrong! You are confusing "No" and "Yes, after a quick setup at most".


Oh oh oh, I have a story about a quick setup. Sent a csv file to someone in the same org. Guy said that it was not opening. Wanted me to come to their office to see. I told them that IT support should fix it, since I can't, and every machine on my OU could read it. I was the bad guy. Yeah, quick setup my ass. Users can't be arsed to understand that their setup isn't properly configured.


Yeah, I definitely prefer CSV over Excel. Excel messes with your data. CSV is what it is. Sure, you may not know exactly what that is, and plenty of times I've had to figure out manually how to handle a specific CSV export, but once I've figured that out, it will work. With Excel, there's always the risk that a new value for a string field could also be interpreted as a date.


On the other hand, you can now use Power Query to import Parquet data into Excel.


Whatever situation got you into the "Power" universe was bad.

A warning about M$: you will soon be an enterprise dev.


I am not in that universe, on the contrary I try to stay as far away as possible. And I agree with you. However I think the everything-done-in-OLD-Excel universe is worse. Some people will be terminally stuck in Excel but at least they can use new Excel capabilities instead of being stuck with the Excel of 20 years ago.

So why remain stuck importing CSVs into Excel when you can use Power Query to import Parquet. Why remain stuck using VBA in Excel when you can use Python in Excel.

I do not think an Excel user can be convinced to move to things like Jupyter, R, databases, etc. since they won't even make the jump to Access but maybe they can be convinced to use modern features of Excel.


Sorry man, I can't think of a case where I'd import a CSV into excel, but have the skill level to use powerquery and import parquet.

Like, if you are going to use Power Query, why not just Python? At least this way you aren't going to get nailed into a legacy hellhole.


If the answer was always no, importing CSVs to Excel wouldn't be an expectation or widely used.


The answer is "always no" because the question is inherently underspecified, precisely because importing things into Excel is more complicated and context-dependent than it appears on the surface.


ASCII has had field and record separators since like, forever. Wish we had kept using those.


No you don’t. It’s a holdover from when files were on tapes. The logic is all inverted too: record separators went at the beginning of a record. Group and unit separators could then be nested. You really needed a spec for whatever document you were trying to make.


It doesn't matter if the sentinel byte is after or before each record.

Having it before is nice for syncing byte streams.


It matters when you're trying to roundtrip the data through a text editor, because existing tools balk at a 300MB file with a single line.


You need a "spec" just the same for a CSV: does it have headers, is there a consistent column ordering, etc. Control characters work the exact same as commas and newlines, but you don't have to worry about escaping them because (unless your data is binary or recursive) the control characters won't show up in the data!


Do you have a reference to how this worked?


Does it have nesting operators? I want to embed ASCII within my ASCII fields. So I can have a table within my table.


The POSIX spec defines portable and non-portable ASCII characters, prudently placing the separators in the non-portable set. To nest tables, base64-encode (or use whatever portable encoding) the inner table into the field. This works much better, more easily and with fewer errors than any escaping strategy.

Regarding visibility in editors: if you are nesting tables I don't think you care too much about manual editing, but if you do, it is easy to set up vim/emacs/vscode to display the ASCII separators. I am told even Notepad++ can display those, so there are no excuses.
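To make the nesting scheme concrete, a small Python sketch — US/RS as separators, base64 for the inner table; my own illustration of the idea, not anyone's standard:

  # The inner table is itself US/RS-separated text, base64-coded into a
  # single field of the outer table, so the outer table never needs escaping.
  import base64

  US, RS = "\x1f", "\x1e"

  def pack(rows):
      return RS.join(US.join(row) for row in rows)

  def unpack(text):
      return [rec.split(US) for rec in text.split(RS) if rec]

  inner = pack([["street", "city"], ["Main St", "Utrecht"]])
  outer = pack([["customer", "addresses"],
                ["Ada", base64.b64encode(inner.encode()).decode()]])

  # Decode the base64 field to get the inner table back, recursively if needed.
  field = unpack(outer)[1][1]
  assert unpack(base64.b64decode(field).decode())[1] == ["Main St", "Utrecht"]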


It kind of does. See `man ascii`

* FS (0x1C) file separator

* GS (0x1D) group separator

* RS (0x1E) record separator

* US (0x1F) unit separator

I've never seen these in the wild though.


They're used a lot in barcodes, e.g. for delimiting the different fields of a driving license.


That sounds like the premise for an utterly fascinating deep dive.


It's quite the rabbit hole, I can assure you.


Not seeing them in the wild is good, it means they will work when you use them. The more they get used the more often you'll find they crop up in the text fields you're trying to separate and the whole csv escaping nightmare will continue.


If you've got delimited-text embedded inside your delimited-text, you've got a nightmare that escaping can't save you from.

(obligatory https://knowyourmeme.com/memes/xzibit-yo-dawg)


They're definitely used in ACARS messages that go to every commercial airplane cockpit...


A manager: I wish I could have a CSV inside my CSV.

Any sane person: NO!


base64 encoded fields it is.


That was the right answer. You are hired!


Well, you might be wrong, but EDI in general and HL7 specifically allow 3 levels of "fields in fields in fields".

As long as your parser copes, and as long as you have appropriate structures to import into, it's no big deal.


So now when I'm exporting data I need to know what nesting level it's going to live at so I can generate the correct separators?

I really think that might be the worst idea I've heard for a while!


It's not toooo bad :) But it's very much a "thing" in the real world. It's called EDI, and it's been around for a long time.


About 10 years ago, I worked at a place where we were embedding both XML and JSON in CSV fields.

Then there are always the people who can't generate a valid CSV due to escaping issues...

Nothing is ever simple.


Been there, done the Interop.



Are there unicode characters specifically for delimiters?

If Excel had a standardised "Save as USV" option it would solve so many issues for me.

I get so many broken CSVs from third parties.


ASCII has characters for unit, record, group and file separators. And some days ago there was a story here about using the Unicode printable representations of these for an editor-friendly format.

https://news.ycombinator.com/item?id=39679378


There are characters from ASCII for delimiting records, which are underused because they cause confusion about whether they should be represented as a state change like a backspace character, or as a glyph. See also: "nobody can agree on the line ending character sequence".

The USV proposal uses additional codepoints introduced in Unicode for the representation of the record delimiters, so they will always look and edit like character glyphs, and nobody is using them for some other purpose. The standardized look of these glyphs is unappealing, and they aren't easy to type, but it's fixable with a font and some editing functions.

Most of the issue hinges on Excel support.


> nobody is using them for some other purpose.

There's a lot of tooling which uses them for their intended purpose, which is to represent the C0 control characters in a font, so they can be printed when they appear in a document. Your editor is probably one of those.

Which is why I consider USV a terrible idea. If I see ␇ in a file, I don't want to be constantly wondering if it's "mention ␇" or "use ␇ to represent \x07". That's why the control pictures block exists: to provide pictographs of invisible control characters. Not to make a cutesy hack "look! it's a picture of a control character pretending to be what it isn't!!" format.


I agree about USV, it creates confusion where none needs to exist. For personal use, though, it is not that bad to receive a USV: it should be postmarked ".usv" and in any case if you suspect shenanigans you can `grep` for the offending (literally!) unicode characters and `tr` them into proper ASCII separators. Now, if there is nesting in the USV, I give up.

I share the lament: the whole table issue was solved before it became a problem. POSIX divides ASCII into portable and non-portable characters; only portable characters are allowed in the fields and separators are non-portable. If you need nesting, use a portable encoding of the inner table. This scheme repeats indefinitely without escaping hell or exceptions, preventing tons of errors and headache.

Visibility is such a bizarre complaint. Text editors already handle control characters: they handle tabs, they handle newlines, it is not a tremendous, earth-shattering feature request to make them handle separators gracefully.


I don't understand why this is a question up for debate. You need eye tracking, so that there is a beep when you read the relevant part.


Hell, there's ASCII characters specifically for delimiters. 0x1C to 0x1F are respectively defined as file, group, record, and unit separators. Unicode naturally inherits them all.


Except nobody uses them. Another previous discussion: https://news.ycombinator.com/item?id=33935140


My significantly bigger beef would be all of the auto-formatting Excel does to mangle data. Excel loves to turn entries into dates.

Human genes had to be renamed to avoid this Excel feature.


Excel now has a prompt so you can tell it not to convert stuff automatically.


Gasp. Big news. I do not recall ever seeing this, so I wonder if $JOB is running some hilariously outdated version for compatibility with a load bearing VBA script.


Automatic Data Conversion toggle was only added in the past ~year: https://insider.microsoft365.com/en-us/blog/control-data-con...


Yes, this recent discussion has lots of good info and links in the comments: https://news.ycombinator.com/item?id=39679378


> Are there unicode characters specifically for delimiters?

We could use the HL7 pipe ‘|’ and all enjoy that hell.


God, please no.

For those unfamiliar with the atrocity that is HL7v2, the format is essentially CSV, but with the record separator set to a lone CR, and the field separator usually set to |. Usually, because the format lets each message redefine it, for whatever reason. (The first use of the field separator determines what character it will be. Thankfully, that first use is in a fixed spot, so it's determinable, but still. Oh, and we don't know the character encoding until like the 18th field in … and it doesn't necessarily have to be an ASCII superset. So I have no idea what an HL7v2 message in a non-ASCII-superset encoding even looks like, or how a parser is even supposed to reasonably parse such a thing. I presume attempt a decoding in all possible encodings, then see which one matches the embedded character set, and pray nobody can create a polyglot?)

There's also further separators, delimiting within a field.

It also has its own escape sequences, to deal with the above.

… and it is what carries an unfortunate amount of medical data, and is generally how providers interoperate, despite the existence of more civilized standards like FHIR
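To make the bootstrapping dance concrete, a rough Python sketch against a made-up message (illustrative values only, not a real HL7 feed):

  # Segments end with CR; the byte right after "MSH" fixes the field separator,
  # and the next field lists the component/repetition/escape/subcomponent chars.
  raw = ("MSH|^~\\&|SENDAPP|SENDFAC|RCVAPP|RCVFAC|20240101120000||ADT^A01|MSG00001|P|2.5\r"
         "PID|1||12345^^^HOSP||DOE^JOHN\r")

  field_sep = raw[3]                                   # '|', by first use
  component_sep, repetition_sep, escape_char, subcomponent_sep = raw[4:8]

  segments = [s for s in raw.split("\r") if s]
  fields = segments[0].split(field_sep)
  assert fields[0] == "MSH" and fields[8] == "ADT^A01"  # MSH-9, the message type
  # MSH-18 (the character set) sits much further along -- after you've already
  # had to split the message to find it.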


I've unfortunately had to bless my brain with much more of this standard this week, for some reason.

Did I mention that subcomponents (if you look at it like a CSV, cells are further subdivided into components & subcomponents, so subcomponents are sort of where we hit "cell text", if you want to keep going with that broken analogy) — contain escape sequences, so that you can have things like the field separator. Normal stuff, so far. The escape sequences also include highlighting, binary blobs, and a subset of roff.


My worst nightmare was a semicolon-delimited file, where one of the columns had hand-typed street names — without quotes... so "WELLS" was often "WE;;S".

Since it was the only column like that, the # of columns to the left of the annoying column and the # on the right would always stay the same. So it was pretty easy to clean.


It’s been years since I last worked with HL7. Isn’t there also ^ and ~ to deal with?

Hell indeed.


Isn't that what HL7 stands for? Hell Layer 7 as in the seventh circle of hell.


Yes, there is. Multi-dimensional CSV?


There is the white space bs too. Sometimes it matters, sometimes it doesn’t. What type of white space is it?

Seriously rough.


>I do hope "Unicode Separated Values" takes off. It's essentially just a slight tweak to CSV where the delimiters are special unicode characters

Commas can be typed by anyone on any keyboard and readable by anyone.

Special Unicode Characters(tm) can't be typed by anyone on any keyboard and aren't readable by anyone.

Convenience is a virtue.


I can't remember the last time I, or anyone I've ever worked with for that matter, ever typed up a CSV from scratch. The whole point of USV is that the delimiters can't normally be typed so you don't have to worry about escaping.

USV supports displayable delimiters (see https://github.com/SixArm/usv), so for the much more common case of editing an existing CSV in a text editor, you can just copy and paste.


Every one of us was a beginner at some point. The first time we came across the CSV format we likely typed it in Notepad by hand. A lot of issues with CSVs are also sometimes troubleshot by hand — by manually fixing a quote or a comma.

There is value in the ability to do this level of editing and troubleshooting.


> The first time we came across CSV format we likely typed it in notepad by hand.

Again, I'm not saying CSVs aren't edited by hand in a text editor, I'm saying they aren't created from scratch in a text editor, even by beginners. USVs are easy to edit in a text editor, too, and I tried viewing and editing USVs with a couple different fonts and had no problems.


If the separators can't easily be typed, how do you add a new cell?


Nobody can type up a GIF image, or Word document in a Notepad, yet files of both those formats exist. The answer obviously is tooling. If a format with sane separators was common, so would editors that could edit that format be.


I was responding to the GP's:

> USVs are easy to edit in a text editor

I don't see how that's the case.

> If a format with sane separators was common, so would editors that could edit that format be

Sure, but that's a hypothetical future editor, not something that currently exists.

Edit to add: I also disagree with "sane" in that context. New separators won't solve anything. You'll always need escaping/encoding/encapsulation; get that right and everything else follows. JSON is comma-separated and does just fine.


Copy and paste.


No we didn't, we likely typed in Excel by double-clicking on our first CSV.


I can’t speak for everyone, but I definitely didn’t use Excel.


I've valued the virtue of CSVs being readable by any text editor known to man, and I've occasionally edited them by hand. The pure simplicity of reading and typing commas trumps any value provided by more esoteric configurations.

As for escaping, that's for the subsequent programmers (which could also be me) to figure out. If it is me, I'll deal with it because it keeps things simple.


> I've occasionally edited them by hand.

Yeah, usually when the quoting was f'up.


> Special Unicode Characters(tm) can't be typed by anyone on any keyboard and readable by noone.

While I'm not a fan of USV, I do believe it is type-able on both macOS and Linux¹. The IME's character picker in both OSes contains all of the necessary characters, I think. (I use "␤" in commit messages, occasionally. That was a typed ␤, not copy/pasted from a Google query or such.)

It would be quite tedious, I do admit; reasonably typed by someone, probably not.

I don't normally type out CSVs by hand either, though.

(¹I don't know about Windows as I simply haven't cared about it in years. For all I know, they've grown an IME in the intervening decade.)


Even if the csv isn’t being typed out by hand, when importing into Excel and the delimiter needs to be manually entered, because it isn’t one of the common ones Excel has a radio button for… it is nice to be able to easily type it.


While I can see a convenience argument for the somewhat contrived case of typing an entire file out by hand, entering the character once for the import does not seem like a great bar.

> it is nice to be able to easily type it.

Again, that's where an IME is helpful; on the OSes I mention, entering "␞" is:

  macOS: ⌘+^+Space, "record", <arrows to select>, Enter
  Linux: Super+e, "record", Space, Space, <arrows>, Enter
The process is highly visual on both, so you're getting feedback about whether you're about to hit the right character, or not.

(And like, if you have the file, you can always ^C ^V the character, if say you don't know how to IME, or you don't know what the name of the char is, etc.…)


Would it be possible to just type the file using commas, semicolons, or pipes or something (whatever you happen to know you don’t have in your file) and then convert them using sed?


Yes, it would be possible. You'd have to make sure the character didn't appear / no escaping at all was present, which the data may or may not allow.

Might as well just get a dedicated CSV→USV converter, though.

(I have a variant of this problem in JSON→YAML … usually I just shove one into the other & pray, akin to your sed solution.)


Any character within reason can certainly be entered by way of Character Map in Windows or its equivalent in Linux or MacOS, but if you're arguing that then you don't understand the crux of my argument: Convenience is a virtue.

There is value in the delimiter simply being a key on practically any keyboard in existence. Anything that involves something more complicated than just pushing a single button on a keyboard (this includes IMEs) is a non-starter, because convenience is a virtue.


> Anything that involves something more complicated than just pushing a single button on a keyboard (this includes IMEs)

My point is that this is merely a more stringent argument; it's now "on a keyboard, and cannot involve dead keys, etc." … which now excludes classic CSV, too, which requires two keys to enter a double quote. (Shift+')

Again, it does require more keys, and it is certainly not convenient, but n keys to me is still n keys. The real question is why one isn't using a proper tool to enter the data, and is instead encoding it by hand, which, again, even for a classic CSV, is basically something I've never done. (… because why would one?)


The fact that it is a character on the keyboard is exactly the problem, too. Any character a user can easily enter will definitely end up mixed into your data somewhere.


The IANA standard for TSV already disallows tabs inside fields, so you can skip writing any quoting logic (in principle). The MIME type is `text/tab-separated-values`.

https://www.iana.org/assignments/media-types/text/tab-separa...
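Python's csv module can even enforce that rule for you — with QUOTE_NONE and no escapechar set, the writer errors out instead of silently emitting a broken row (a small sketch):

  import csv, sys

  w = csv.writer(sys.stdout, delimiter="\t", quoting=csv.QUOTE_NONE)
  w.writerow(["id", "comment"])
  w.writerow(["1", "plain text is fine"])
  # w.writerow(["2", "a\ttab in a field"])  # -> csv.Error, by design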


So true. Working with imports/exports in CSV from ERP software, one can't imagine how often "Oh, this import doesn't work. I'll just fix the CSV file" occurs. Try that with some compressed, "esoteric" file, or even XML, and users will break it.

Besides all the downsides CSV has, as soon as it's not only machine-machine communication and a human is involved, CSV is just simple enough.


Check out Polars in Python if you want some CSV performance, lol. I recently got a 5-million-row CSV from a 3rd party and I could manipulate columns (filtering, sorting, grouping) with operations that took less than a second. It's an incredible tool.
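Roughly what that looks like — the file and column names here are made up, just to show the shape of it (and note group_by is spelled groupby in older Polars versions):

  import polars as pl

  # scan_csv builds a lazy query, so only the needed columns/rows are read.
  top_accounts = (
      pl.scan_csv("export_5m_rows.csv")
        .filter(pl.col("status") == "active")
        .group_by("account_id")
        .agg(pl.col("amount").sum().alias("total"))
        .sort("total", descending=True)
        .collect()
  )
  print(top_accounts.head())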


USV has a mountain of problems.

And really is in search of a problem to solve.


JSON objects as a CSV field has been mostly agreeable for my usage. It would be nice if some of the spreadsheet apps displayed the object tree.
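Something like this, in Python — the csv module's quoting handles the commas and quotes inside the JSON blob (the field names are just for illustration):

  import csv, io, json

  buf = io.StringIO()
  w = csv.writer(buf)
  w.writerow(["id", "payload"])
  w.writerow(["42", json.dumps({"tags": ["a", "b"], "note": "has, commas"})])

  # Round trip: the reader unescapes the field, json.loads restores the object.
  rows = list(csv.reader(io.StringIO(buf.getvalue())))
  assert json.loads(rows[1][1])["note"] == "has, commas"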


ditto. if it fits in the ram, file types don't matter.


It seems like you missed the conclusion in the article. If users want CSV exports, let them have it.

If you have important data being shuffled around systems, pick something with a specification instead.


To me this criticism feels excessive. It feels like the author is describing their frustrations with internal usage of CSVs - there's no mention of customers and non-technical stakeholders at all. I think it goes without saying that Parquet files and other non-human-readable formats are a nonstarter when working with external stakeholders and the last paragraph makes that clear - if the end-user wants CSV, give them CSV.

I also think we shouldn't blindly dismiss the performance drawbacks of CSV when working with data pipelines. At even modest scales it becomes hard to work with very large CSVs because the data often doesn't fit into memory, a problem easily solved by Parquet and other formats assuming you only need a subset.


I deal with gig size csvs all the time and don’t have any performance issues. These aren’t huge files, but decent sized. And most are just a few megs and only thousands to millions of records.

Csv is not very performant, but it doesn’t matter for these use cases.

I’ll also add that I’m not working with the csvs, they are just I/o. So any memory issues are handled by the load process. I certainly don’t use csvs for my internal processes. Just for when someone sends me data or I have to send it back to them.

That being said my workstation is pretty big and can handle 10s of gigs of csv before I care. But that’s usually just for dev or debugging and anything that sticks around will be working with data in some proper store (usually parquet distributed across nodes).


That may be your experience, but certainly not a universal experience (and apparently not the author's, either). In my experience, it's pretty easy to have CSVs (or Parquet files, or whatever) that are tens or hundreds of GBs in size. The space savings from a more modern file format are significant, as is the convenience of being able to specify and download/open only a subset of rows or columns over the network. Most of us don't have workstations with 50GB of RAM, because it's far more cost-effective to use a Cloud VM if you only occasionally need that much memory.

That being said, the real point here is that folks blindly use CSVs for internal-facing processes even though there's no particular reason to, and they have plenty of drawbacks. If you're just building some kind of ETL pipeline why wouldn't you use Parquet? It isn't as if you're opening stuff in Excel.
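For the column-subset point, concretely — a small pandas sketch (file and column names are placeholders, not from any real pipeline):

  import pandas as pd

  # Parquet is columnar, so only the requested columns are read from disk;
  # with a CSV you'd have to scan every byte to do the same.
  df = pd.read_parquet(
      "events.parquet",
      columns=["user_id", "ts", "amount"],
  )
  print(df.memory_usage(deep=True).sum())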


The author is giving universal advice to all friends.

If the title were "friends in certain circumstances shouldn't let friends in certain circumstances export to CSV," I'd have no objection.

Even a laptop with 8gb ram can open a gig csv.

Of course the internals of your etl will use some efficient data structure, but you’d still want to export as csv at some point to get data to other people. Or you want your friends to export csv to get data to you.


If I run a simulation workload it's pretty easy to generate gigabytes of data per second. CSV encoding adds a huge overhead space and time wise, so saving trajectories to disc for later analysis can easily become the bottleneck.

I have had many other situations where CSV was the bottleneck.

I still would default to CSV first in many situations because it's robust and easily inspected by hand.


> That being said my workstation is pretty big and can handle 10s of gigs of csv before I care.

How much RAM do you have? What's the ratio of [smallest CSV file which bottlenecks]/[your RAM]?


My dev workstation has 96gb. I don’t work with massive data files so I’ve never really hit my limit. I think the biggest raw data file I’ve opened was 10-20gb.


I very much agree with this. For an integration where you have control over both ends of the pipeline, CSV is not optimal unless there's existing work to build on, and even then it's a legacy choice.

Parquet and Avro are widely supported in backend languages and also in data analysis. I don't think the article is talking about exported-like-a-jpeg, but instead exported-like-a-daily-report-run: the data scientist doing the exporting is probably using R or Pandas instead of Excel, and can reasonably be expected to read https://arrow.apache.org/docs/r/reference/read_parquet.html.


btw, xsv has solved most of my problems dealing with 'large' 40GB csv files


xsv? I never heard of it. This one? https://github.com/BurntSushi/xsv

If yes, looks very cool. Plus, bonus HN/Internet points for being written in Rust!


yep .. his utils are most excellent.


its parser is buggy! https://github.com/BurntSushi/xsv/issues/337

(I ran into this issue myself)


I just responded to that. It isn't the parser that's buggy. The parser handles the quotes just fine. If it didn't, that would be a serious bug in the `csv` crate that oodles of users would run into all the time. There would be forks over it if it had persisted for that long.

The problem is that `xsv table` doesn't print the parsed contents. It just prints CSV data, but with tabs, and then those tabs are expanded to spaces for alignment. Arguably it ought to print the parsed contents, i.e., with quotes unescaped.

It almost looks like it's doing that because the quotes are removed in one case, but that's only because the CSV writer knows when it doesn't need to write quotes.


Ok, this might sound stupid, and a bit unrelated, but you make so many great tools that I can't help but ask. How do you start planning and creating a tool that needs to "follow standards" (in this case I know CSV is underspecified, but still)? Is it by iteration, or do you try to set and build a baseline for all the features a certain tool needs? Or do you just go for modularity from the get-go, even if the problem space is "smaller" for stuff like CSV?


I suppose https://old.reddit.com/r/burntsushi/ might be a good place for questions like this.

I don't really have a simple answer unfortunately. Part of it is just following my nose with respect to what I'm interested in. So there's an element of intrinsic motivation. The other part is trying to put myself in the shoes of users to understand what they want/need. I typically do a lot of background research and reading to try and understand what others have done before me and what the pain points are. And iteration plays a role too. The `csv` crate went through a lot of iteration for example.

I think that's about it. It's hard to answer this question in a fully general way unfortunately. But if you want to get into it, maybe you can be the first person who opens a thread on r/burntsushi haha.


Why is this getting downvoted? They're right that the criticism is pretty excessive:

  "Maybe his next article will be about how airlines should speak Esperanto because English is such a flawed language. That’s a clever and unique observation."
Hm.


I got a little snarky but I think the analogy holds.

Esperanto is a superior language to English. And English has many flaws.

Theoretically it would be better to have all pilots and airports learn an efficient language.

But it would be stupid and immature to seriously write a blog post about that, especially without talking about all the flaws in that plan.


> Esperanto is a superior language to English.

Not really. You can say it's more regular, but that's because it sees barely any actual use; if it ever gained popularity it wouldn't stay regular. (And given that pilots speak in set phrases anyway, irregularity isn't really an issue.) It's not a great language by any stretch; it's an awkward mishmash of four European languages. Sure, it sounds kind of nice in an Italianate way, but if that's what you want, why not just speak Italian?


> if it ever gained popularity it wouldn't stay regular

This. Utility and purity always pull in opposite directions.

I see those boutique little (programming) languages written by amateur language designers with exotic type systems or “everything is an X” philosophies, and my reaction is to assume that they are useless for anything past toys and experiments.

I know useful language features have been born in that world and then eventually bolted onto mutt languages like Java and Python, but that suits me just fine.


It would stay regular if there was a strict governing body for it that wasn't a Webster-style "whatever people are speaking is the new definition of correct".

English really is a disaster of a language. There was a(nother) great XKCD about it just a few days ago. https://xkcd.com/2907/


> if there was a strict governing body for it that wasn't a Webster-style "whatever people are speaking is the new definition of correct".

There is no way it can work.

People don't care about governing bodies when they speak a language.


They kind of do. In my language there exists a central "governing" body that decides what is correct, and some "incorrect" regionalisms are disappearing because of it.


When that happens, it’s likely more about politics and social status than the governing body.

Ie, the governing body decrees that the regionalisms from the dominant region is the definitive version of the language. But it might be considered cool to speak that way even without a governing body.


Are you referring to French? Because, if anything, French in France has an insane quantity of slang and an extremely emergent vocabulary. Much more so than any English-speaking country I can think of. Quebec isn't really influenced by the Académie française yet has a much more "correct" usage of the language, generally speaking.

Maybe it's a totally different language but still it goes to show that even a very prestigious central authority doesn't make a language better or less prone to diverge. Regardless of the reason, French is evolving much more quickly than English.


But maybe that's beside the point. If someone wants to "learn French" they can learn by the official rules and communicate with other french-speaking people regardless of how many slang variants exist in France. They can also probably watch French television and understand it.

The point of Esperanto was to make it easier to learn. French is regular, but extremely complicated. English is complicated and has a million special cases. Both languages are hard enough to master that society starts to judge a person's intelligence by how well they know the rules and special cases.


Correction: it's a disaster of at least 4 languages, and this is probably why English is so hard to dethrone — it has no strict ownership, so everyone is kind of equal in speaking it incorrectly.

Sometimes lack of rigidity is actually a feature that allows things to sort of work that would be politically impossible if they had to be specified formally before being used.


I'm pretty sure the reason English is hard to dethrone is because Britain ~helped~ forced the various colonies to join world commerce using English, so they started teaching it to entire generations as the national second-language, and then because the USA dominated world commerce after that in a sort of "we'll let you in on the game if you speak our language and use our money" sort of way.


> Esperanto is a superior language to English. And English has many flaws.

From at least one perspective, English is superior. That perspective is that you can actually use it in almost any modern situation, because it has been tried and tested globally.


This is the point of my analogy.

English:Esperanto::csv:parquet

(Although I think parquet is much more useful than Esperanto and may eventually end up dethroning csv)


Agree.

Not saying csv doesn’t have its issues, but I don’t think the author made a convincing argument.

A lot of the issues the author brought up didn’t sound that bad and/or it sounds like he never looked at the source data first.

If you’re doing work with large datasets, I think it’s a good practice to at least go and look at the source data briefly to see what to expect.

This will give you a good idea of the format it outputs, data types, some domain context, etc. or some combination thereof and I don’t think it even takes that long.

Also, it reminds me of the arguments against Excel in a way. Most people know what a CSV is, more or less how to open it, and don't need too much context when discussing the file. Someone will quickly understand if you tell them the file isn't delimited properly right away. These are pros that shouldn't be taken for granted.

Again, I'm not saying CSV doesn't have issues or that there aren't better alternatives, simply that I didn't find this particular argument convincing.


IME most people don't know that using Excel to open and save a csv will silently mangle data. In our application leading zeros are significant, so we constantly get screwed by people trying to do quick manual edits and breaking the data. If we're lucky it breaks so badly the import fails. It's worse when the mangling results in structurally valid but wrong data.


I think what you’re saying is accurate, but it’s also important to be practical about stuff.

These are pretty well-known Excel limitations by now.

And really, anyone using excel who is somehow not aware of that limitation is probably not someone yet experienced enough to be working on a larger and/or mission critical dataset to begin with.

Are there exceptions? Sure. You might be tempted to cite the example of the incident where this happened to some biologists not too long ago, but mistakes happen. I've seen people make mistakes building Android or iPhone apps using the right (TM) tools.

What is the exact number of mistakes where you make the decision to jump to a new format?

I’m not sure. This does happen eventually, but the author didn’t make a strong case here imo.


But the point is that you don't have to look at the source data if you have an actual specification and defined format, right?


> all these sources aren’t able to agree on some successful format.

But the same is true for CSV, and CSVs are not readable by everyone, since you don't always know how to read them; there is not enough info for that.

Also, it's not a good reflection on "deep experience" if it leads to reflexive defense of common stupid things people do, with wrong analogies to boot (e.g., FLAC is less efficient, so it's more like CSV).


In my experience csv has the fewest problems. Not that it has zero problems.


For me, the giveaway was:

"You give up human readable files,..."

I was genuinely interested in some alternative suggestions — but the human readability of CSV is what makes it so sticky, imo.


My entire experience with software development has been me bellyaching about how stupidly things are set up, why don't we do it this way instead, etc... only to actually set about fixing these things and realizing it's either way harder than I thought, it makes more sense than I thought, or it just plumb isn't worth the effort.

The effervescent suggestion of a brighter, more logical, even obvious solution is often a clear indicator of domain inexperience or ignorance.


I worked plenty enough with 'diverse data pipelines' and most of them were shit due to other companies just not knowing how to work properly.

CSV created tons of issues regarding encoding, value separation etc.

I started talking to our customers and was able to define interfaces with a better, agreed-upon format. JSON made my life easier.


So...

In some senses, I think internet culture (maybe modern intellectual culture generally) gets stuck in these repetitive conversations.

Re-prosecuting them, seemingly without knowing about all the previous times the conversation has been had.


And it's surprisingly hard for etl departments to export to csv correctly. I mean, if they can't do csv they can't do anything more complicated for sure.


> This article seems written by someone who never had to work with diverse data pipelines

I think that's a little unfair, it sounds like the author does have a decent amount of experience working with real-world CSV files:

> I remember spending hours trying to identify an issue that caused columns to "shift" around 80% into a 40GB CSV file, and let me tell you, that just isn't fun.


Yup, csv is always the best fallback, imo. It's: easily generated, easily parsed, human readable/editable, compact, portable, list goes on.


> Csv that’s zipped is producible and readable by everyone. And that makes is more efficient.

If only CSV were CSV, as opposed to some form that's 80-90% CSV by line count with enough oddities to really make the parser ugly and hard to read.

See, the sweet spot isn't something completely unstructured, because then you feel justified in throwing up (your hands) and declaring defeat. The sweet spot is a file that's sufficiently close to being structured you can almost parse it nicely, but has enough bad lines you can't go in and fix them all by hand in a reasonable timeframe, and you can't tell upstream to get their shit in order because it's only a few lines.


There’s definitely hair to deal with and it’s a little messy, but it’s never a blocker.

But I’d say the error rate is actually very low, maybe .1-1% and nowhere near 10-20% of data being messed up.


> But I’d say the error rate is actually very low, maybe .1-1% and nowhere near 10-20% of data being messed up.

The thing with CSV-related issues is it's usually not a fixed percentage but instead depends on the data.

I work in the VoIP industry so I deal with the FreePBX Asterisk GUI quite often, and it uses CSV as its bulk import/export format. This mostly makes sense as the data is almost entirely (with one notable exception) simple tables that fit nicely in to rows and columns. The issue I run in to most commonly with this is that it doesn't quote numerical fields, and as a result the fields for extension numbers and voicemail PINs can be problematic when they contain one or more leading zeroes. All of the major spreadsheet software I've used defaults to dropping leading zeroes from columns they've decided contain numerical values, and this results in broken data in these cases. It's of course relatively rare for users to choose a voicemail PIN starting with zero and even more rare for extensions to be set up with a leading zero, but both happen regularly enough that I need to remember to manually configure those columns as "Text" when opening an extension export CSV.
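The same pitfall exists when loading these exports programmatically: most loaders infer "0042" as 42 unless told otherwise. A tiny pandas sketch of the workaround (the header names here are my guesses, not FreePBX's actual column names):

  import pandas as pd

  # Force the fragile columns to string dtype so leading zeroes survive.
  df = pd.read_csv("extensions.csv", dtype={"extension": str, "voicemail_pin": str})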

Either way, how often the problem occurs depends entirely on the data being sent through this pipeline. Most sites will never see the problem on the extension column, but one of my sites where the company liked a user's extension to be the last four of their DID when they were initially set up 20 years ago has a dozen of them in a row.


Depends on your tools, I suppose. I'd just like to share this:

https://metacpan.org/pod/Data::TableReader::Decoder::IdiotCS...


Did you actually read the conclusion at the end of the article?

"Of course, we can't conclude that you should never export to CSV. If your users are just going to try to find the quickest way to turn your data into CSV anyway, there's no reason why you shouldn't deliver that. But it's a super fragile file format to use for anything serious like data integration between systems, so stick with something that at the very least has a schema and is more efficient to work with."


It's a premature optimization issue. If you don't have special requirements like IO throughput, or mission critical data accuracy guarantees, be biased towards picking the format that anyone can easily open in a spreadsheet.


You can open it easily, but just as easily it can be wrong. So with this bias you'd still not export CSV, you'd use XLS.


"Of course there are better formats, but all these sources aren’t able to agree on some successful format."

It's the same with csv. They come in all kinds of formats because nobody agreed on the standard. Comma separated, semicolon separated, pipe separated, escaped, not escaped.

Every time I have to deal with CSV I first have to figure out how to parse it in code.

So I think the author is right, we must agree on a better format because that is what friends do.

You are also right, because it's an illusion to think that this is going to change anytime soon. But who knows..


Every integration I’ve ever worked on has started off with high ideas of APIs and nice data standards. And has eventually devolved into “can we just put a CSV file on an FTP site…”. With the inevitable, “it’s not really CSV…”


... And what's more, you'll be an Engineer my son.



