Furthermore, no thread on CSV files can be complete without mentioning this infamous bit of fact-trolling: the ASCII standard has had unit- and record-delimiters baked into it from the beginning.
They don’t really get used IMO because if you’re storing ASCII or Unicode text, you have to be prepared for those characters to be in your data. And if you’re storing binary data, delimiters don’t really cut it in the first place.
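To make the idea concrete, here's a minimal sketch of serializing tabular text with the ASCII separators instead of commas and newlines (names and data here are made up for illustration):

```python
# Sketch: use ASCII unit separator (0x1F) between fields and
# record separator (0x1E) between rows, instead of commas/newlines.
US = "\x1f"  # unit separator
RS = "\x1e"  # record separator

def encode(rows):
    # Assumes no field contains US or RS -- exactly the caveat above.
    return RS.join(US.join(fields) for fields in rows)

def decode(blob):
    return [record.split(US) for record in blob.split(RS)]

rows = [["id", "name"], ["1", "Doe, Jane"]]
assert decode(encode(rows)) == rows  # commas need no quoting at all
```

The upside is that no quoting or escaping is needed for ordinary text; the downside is exactly the one raised above: the scheme breaks the moment a field can itself contain those control characters.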
No it's fine. Just base64 encode the binary data. It's common and streams, or you can capture and convert the entire field at once if you need to seek around in the binary data.
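A quick sketch of that approach, using a made-up record with one numeric field and one binary field:

```python
import base64

US = "\x1f"  # unit separator between fields

# Raw bytes that deliberately include the separator codes themselves.
payload = bytes([0x1e, 0x1f, 0x00, 0xff])

# base64 output is pure printable ASCII, so the encoded field can
# never collide with the separator characters.
record = US.join(["42", base64.b64encode(payload).decode("ascii")])

fields = record.split(US)
assert base64.b64decode(fields[1]) == payload
```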
That only works if the thing consuming your CSV can be configured to decode whatever encoding you picked. Uploading CSVs of data to various SQL servers is an example case where that is often not possible.
I didn't mean for CSV. There are plenty of use cases for tabular data that aren't hand-coded that this could be used for. Plenty of times I've just wanted to STGDD and not have to mess around with a serialization library, or have to select delimiters, encapsulators, do escaping, etc.
You can type it at a terminal the same way you can write other low-ascii codes. For example record separator, 30, is ctrl-^ - you can type it in the terminal by the usual literal control char way, ctrl-v[1].
So for example at a Python prompt:
>>> ord('^^') # typed ctrl-v ctrl-^ here
30
(the low unprintable ascii codes correspond to certain ctrl combinations because ctrl-x is just the ascii code of x with the high bits masked off (i.e. & 0x1f) )
[1] customizable in your terminal settings, see stty lnext ("literal next")
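The masking rule above is easy to check at a Python prompt:

```python
# Ctrl-X is just X's ASCII code with the high bits masked off (& 0x1f).
assert ord('^') & 0x1f == 30  # Ctrl-^ -> RS (record separator)
assert ord('_') & 0x1f == 31  # Ctrl-_ -> US (unit separator)
assert ord('I') & 0x1f == 9   # Ctrl-I -> TAB
```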
They can't be found on the keyboard, but you can enter them by hand by holding down Alt while typing the ASCII code on the numeric keypad. I wonder if they'll survive posting a comment:
28∟
29↔
30▲
31▼
Edit: looks like they do survive. Hmmm... Now I wonder about extended ASCII.
└┐│└└╚╦╝
Also, you can leverage the ISO/IEC 2022 extension and represent them with caret notation. Note: depending on your data, an escaping mechanism is recommended. REF: https://en.wikipedia.org/wiki/C0_and_C1_control_codes#C0_(AS...
An editor such as vim or pager like 'less' will display them. You can also 'cat -v' a file.
I use TSV for that reason. Most of the time you don't need to escape anything, and from code you only need to escape/unescape four characters: tab, newline, NUL and backslash.
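A minimal sketch of that four-character TSV escaping, using backslash escapes (the exact escape sequences are an assumption; conventions vary):

```python
# Escape the four characters that can break a TSV cell:
# tab, newline, NUL, and backslash itself.
_ESC = {"\\": "\\\\", "\t": "\\t", "\n": "\\n", "\0": "\\0"}
_UNESC = {"t": "\t", "n": "\n", "0": "\0", "\\": "\\"}

def tsv_escape(field: str) -> str:
    return "".join(_ESC.get(ch, ch) for ch in field)

def tsv_unescape(field: str) -> str:
    out, it = [], iter(field)
    for ch in it:
        # A backslash always introduces a two-character escape sequence.
        out.append(_UNESC[next(it)] if ch == "\\" else ch)
    return "".join(out)

cell = "a\tb\nc\\d\0e"
assert tsv_unescape(tsv_escape(cell)) == cell
```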
In a terminal, Record Separator is CTRL-^, Unit Separator is CTRL-_. Vi[m] will accept the former literally, and will accept the latter if escaped with a CTRL-V.
Sqlite virtual tables. You can literally query any data source. All you need is a module implementation of interface exposed by sqlite for the data source of interest. I once wrote a module that could query protocol buffers encoded data
Came here to post the same. This can be used in very interesting ways, either for quick ad-hoc queries or for import/export functionality (ETL), where the core of your data is in InnoDB but you keep a CSV version for one of the phases of your processing, to interface with other tools.
This may look like a dirty hack compared to a clean API, but if you have performance considerations or need to interface with some legacy thing that happens to understand CSV, it can be a good choice.
I've never heard of this before, but it reminded me of this other command-line tool for wrangling CSV files. This tool makes use of SQL syntax much more than xsv does, so there isn't 100% overlap here.
You should definitely consider using xsv under the hood. Thanks to its low-level Rust implementation, it's dramatically faster than any other tool of its kind that I've tried.
Another trick along these lines is to cut out the middleman (middleprogram? middleware?) and use SQLite to do it:
$ sqlite3
sqlite> .mode csv
sqlite> .import foo.csv foo
sqlite> SELECT * FROM foo WHERE bar = 'baz'; -- you get the gist...
(a bunch of rows)
Q seems to be much easier to use (and certainly easier to remember), but I've always found it handy to have the full power of SQL at my fingertips when needing to do a bunch of CSV manipulations.
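The same `.mode csv` / `.import` workflow can be scripted with nothing but the standard library, for when the shell isn't convenient. This is a sketch; the `foo.csv` contents and column names here are invented:

```python
import csv
import io
import sqlite3

# Stand-in for reading foo.csv from disk.
data = io.StringIO("bar,qty\nbaz,1\nquux,2\n")

rows = list(csv.reader(data))
header, body = rows[0], rows[1:]

con = sqlite3.connect(":memory:")
# Untyped columns, mirroring what .import does by default.
con.execute(f"CREATE TABLE foo ({', '.join(header)})")
con.executemany(
    f"INSERT INTO foo VALUES ({', '.join('?' * len(header))})", body
)

hits = con.execute("SELECT * FROM foo WHERE bar = 'baz'").fetchall()
assert hits == [("baz", "1")]
```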
All of your “horrors” seem...correct? It's comma delimited, so anything between two commas should be parsed without issue; if it's a string with a comma in it, and unquoted, you simply have a broken CSV file. If it's quoted, then anything until the next (unescaped) quote is fine, including commas.
Unless you're trying to parse CSV files with regexes, none of those should be difficult, or even unexpected, to handle with a PEG parser or any equivalent device.
Ofc if you're accepting ambiguity then it's just arbitrary how you handle it, but none of your examples afaict present any ambiguity (I'm assuming strings are either quoted or unquoted, with the former primarily allowing commas/newlines in strings; escaping exists as well; comma-delimited columns, newline-delimited rows).
If you need to read to import CSV from someone else, there are tons of ambiguities. Do you interpret an empty field as an empty string or a NULL? If you've treated a column of unquoted digits as numbers so far, do you parse the first row with a non-number in that column as a NaN, NULL or string? If string, do you reinterpret all the previous column values as strings?
Many people are not in the position to just return the file to the client/boss and tell them they have a "broken csv file". (They'll tell you they saved it in Excel and it reads back fine, so the problem must be on your end. E.g.: https://stackoverflow.com/questions/43273976/escaping-quotes...)
Yes, it would be valid CSV. I suppose my point is that naive attempts to roll-your-own CSV parsers tend to fail on the points I listed. Hopefully Q does not do that.
There is a lot of inconsistency out there. I have seen CSV files saved in Excel that Access can't import because the latter doesn't handle line breaks in fields correctly. I've seen CSVs exported from various systems, such as SQL Server Management Studio's grid view and Wufoo, that aren't generated correctly. There are many lazy CSV generators out there that just throw line breaks between records and commas between fields and call it a day.
And even if they do wrap all fields in double quotes it is very common to forget to escape double quotes in fields, then it depends on the parser as to whether it can determine the proper structure of the record.
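For contrast, a conforming generator doubles embedded quotes, which Python's csv module does for you; a small sketch with made-up data:

```python
import csv
import io

buf = io.StringIO()
# QUOTE_ALL wraps every field; embedded quotes get doubled.
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow(['He said "hi"', 'a,b'])
line = buf.getvalue()

assert line.strip() == '"He said ""hi""","a,b"'
# A conforming parser recovers the original fields exactly:
assert next(csv.reader(io.StringIO(line))) == ['He said "hi"', 'a,b']
```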
More interesting: any kind of delimiter, including chars from utf8 and windows-1252, and you need to detect encoding too. And CSV embedded in CSV, a result of flattening an XML source. And fixed width files, not CSV but where you see CSV you may need to support. And let's not get into date parsing or other typed data, and type inference over sample files.
Any kind of input/output delimiter is supported (-d <delim> and -D <delim>), and also multiple encodings (-e <encoding>). Also, q performs automatic type inference over the actual data.
Encoding autodetection and fixed width files are not supported though.
The company I work for also does delimiter autodetection, quote character inference (from a limited set), and encoding inference (which is mostly limited to utf8 / windows-1252 / iso-8859-15, though it can't reliably differentiate the latter two).
Without diving into the source code I can only say that Q pops up a couple of times a year, either as posts or in CLI recommendation threads, so I suspect it's at least reasonably robust.
Excel has a lot of this built in as well in the PowerQuery editor [0], which supports CSV, JSON and XML data sources. It isn't SQL, but it lets you do almost everything SQL can, in a GUI.
After you're done preparing your data in PowerQuery you can run PowerPivot on it for aggregations.
Discovered csvkit last week and it was a lifesaver. Had to remove a bunch of columns from a large CSV file with many columns - csvcut was way easier than trying to do it with awk or cut.
We've had this for ages now: the F# type provider gives strongly typed access to CSV files, and the resulting object hierarchies are LINQ compatible. There's even support for applying units of measure.
What? kdb, K and Q are part of a renowned array processing toolkit/environment widely used by data scientists for decades.
Calling this command-line tool Q, when it serves the same purpose as the Q programming language (i.e. querying data), invites confusion and misunderstanding...
I think tools like this don’t exist much because it typically makes more sense to just put it into a database or at least SQLite ... then do whatever you want from there.
Meanwhile for all your streaming, filtering and aggregating need there is awk.
I’ve never been bored enough to write my own little SQL library for awk, but I’d be surprised if it doesn’t exist.
That's exactly what q (the linked tool) and almost all other tools like it do - it loads into an sqlite database and provides some handy wrappers, that's it.
This is effectively a set of wrappers around awk, sort, join, etc that knows about field names, and lets you use the vanilla shell tools in a much nicer way. It's exactly what you want for the 99% use case. If your problem is complicated enough that you feel like sql is what you want, maybe leaving the shell is ok at that point.
Many of the applications at my workplace use flat text files in CSV format for logs and configuration. For new development I've been using Sqlite to replace some of these usages. Sometimes when I need to analyze legacy log files I import the CSV data into Sqlite tables. After doing this a few times I hit upon the idea of why not skip the import (which balloons my otherwise-small db files) and write an Sqlite plugin that allows treating the flat CSV file as a virtual table? It's neat to see there's some existing work here!
If you need to work with CSV or Excel or tab files from the command line I suggest having a look at alasql. Can be installed via npm and gives you a good list of export options.
> Have you ever stared at a text file on the screen, hoping it would have been a database so you could ask anything you want about it?
Not really. I could see this being helpful for debugging, but at that point you can get by with some simple bash string operations. Doing that same operation w/SQL seems like overkill.
Maybe my bash just isn't good enough anymore, but I do this all the time. In fact, I was just doing some work this morning where I needed to load a CSV into SQLite for analysis.
Since I still had the file, I ran some queries on the same file via Q to test it out. And it works great! (With some handholding to get quoting and delimiters right). The major downside was it's pretty slow, presumably because it reloads the file into a SQLite database each time you run a command. So, I'll probably stick with loading CSVs into SQLite myself, but I could see this being a useful tool for running one-off analysis on data from stdin.
The devil you know; I find myself with the opposite problem on occasion. "If only I could just grep and awk out what I wanted instead of using some contrived query and arcane data connectors"
https://twitter.com/neilkod/status/914217352564137984
https://ronaldduncan.wordpress.com/2009/10/31/text-file-form...