Simdjson – Parsing Gigabytes of JSON per Second (github.com/lemire)
598 points by cmsimike on Feb 21, 2019 | 196 comments



This is very cool. Meanwhile, in the xi-editor project, we're struggling with the fact that Swift JSON parsing is very slow. My benchmarking clocked in at 0.00089 GB/s for Swift 4, and things don't seem to have improved much with Swift 5. I'm encouraging people on that issue [1] to do a blog post.

[1]: https://github.com/xi-editor/xi-mac/issues/102


I wrote my own Swift JSON parser quite a while ago: https://github.com/postmates/PMJSON. In my limited benchmarking it parses slower than Foundation's JSONSerialization (by a factor of 2–2.5, IIRC) but encodes faster, and my impression was that most of the time was spent constructing Dictionaries, though I didn't do too much performance work on it. It might be interesting to have someone else take a crack at improving the performance.

That said, it also includes an event-based parser (called JSONDecoder), so if you want to handle events in order to decode into your own data structure and skip the intermediate JSON data structure, you might be able to get faster than JSONSerialization that way.


Why does Xi use JSON in the first place? It would be easier and faster to use a binary format, e.g. Protobuf or FlatBuffers, or, if the semantics of JSON are needed, CBOR.


From “Design Decisions”[1]:

> JSON. The protocol for front-end / back-end communication, as well as between the back-end and plug-ins, is based on simple JSON messages. I considered binary formats, but the actual improvement in performance would be completely in the noise. Using JSON considerably lowers friction for developing plug-ins, as it’s available out of the box for most modern languages, and there are plenty of the libraries available for the other ones.

1: https://github.com/xi-editor/xi-editor/blob/master/README.md...


So is it too slow or not?


We actually do get 60fps, but JSON parsing on the Swift side takes more than its share of total CPU load, affecting power consumption among other things. So (partly to address the trolls elsewhere in the thread), the choice of JSON does not preclude fast implementation (as the existence of simdjson proves), but it does make it dependent on the language having a performant JSON implementation. I made the assumption that this would be the case, and for Swift it isn't.


At some point though, isn't it maybe easier just to use an inherently more efficient format than trying to rely on clever implementations to save you?

I totally get json for public internet services where you want to have lots of consumers and using a more efficient format would be significant friction, but writing an editor frontend is a very large endeavor -- it seems like the extra work of adopting something more efficient than json (like flatbuffers or whatever) would really be in the noise.


It's a complicated tradeoff. It's not just performance, the main thing is clear code. Another factor was support across a wide variety of languages, which was thinner for things like flatbuffers at the time we adopted JSON. Also, "clever implementations" like simdjson don't have a high cost, if they're nice open source libraries.


The problem with clever implementations isn't that they can't be reused or that they have abnormally high cost for end-users (though this is sometimes the case). It's that they inherently require more work to maintain, author, and debug over time. When you're talking about a cross-language protocol that will have myriad available implementations (each with different constraints), it's not unreasonable to take a look at how much work a third party must engage in to get such a "clever" implementation (or, in other words, "how many people could reimplement simdjson?"). And if those existing clever implementations aren't available (or viable) for some use case, then you're out of luck and back at square one. This happens more often than you think.

In this case there's a lot of work already put into fast JSON parsers, but in general JSON is not a very friendly format to work with or write efficient, generalized implementations of. Maybe it's not worth switching to something else. I'm not saying you should, it seems like a fine choice to me. But clever implementations don't come free and representation choice has a big impact on how "clever" you need to be.


Re clear code, to my mind it comes out pretty much the same regardless of the serialization† format: the best approach is to have the protocol written down in some real language (e.g. a flatbufs schema or annotated Rust structs or whatever), and codegen for target languages.

My guess is it's easier to write an efficient FlatBuffers (or similar) serializer+deserializer than an efficient JSON serializer+deserializer. And the top end of performance is definitely higher.

So if you're already reaching the point of needing to write your own json deserializers...

(† Unless you're talking about some hand-written bespoke binary format, but that would almost certainly be crazy.)


One of the other performant libraries in the comparison section of simdjson has a Swift wrapper: https://github.com/chadaustin/sajson. Haven't tried it, but one option would be to bring that up to date. Another option: now that Swift 5 strings use UTF-8 as their native encoding, it may be possible to write a fast JSON parser in native Swift. Likely someone already has or is doing that.


It's not a binary yes/no question.

Given equally-high quality JSON and binary serdes, JSON is sufficiently fast. Raphlinus is saying that Swift's built-in deserialiser is obnoxiously slow.


Any reason not to just use a third party Swift JSON library?


Xi has multiple components written in multiple languages. In the Rust core, JSON de/serialization is not a problem, but Swift lacks a similarly high-performance library.


I'm going off topic at this point but I'd think for a native app the main advantages of a binary format would be the static typing and code generation that come from using an IDL.


Rust (the language the Xi core is developed in) has static typing for JSON, as well as other serialization formats: https://github.com/serde-rs/json


I'm familiar with serde. It's an incredible project, but I wouldn't quite call it "static typing for JSON". You still have to unwrap the parse at some point. However, I will concede the point that if you have Rust on both sides then you'll get most of the benefits.


You can have a binary format that's self-describing. It's important to understand all of the independent parts that go into a format.


[flagged]


You misread the rationale. He is arguing that, with all other conditions the same, the difference between binary formats and JSON would be in the noise. It is often the case that the object construction is more costly than the JSON parsing, and you can't fix that with binary formats.

As a minimal and extremely non-scientific benchmark, I constructed a simple fixed data structure that encodes to JSON (using the Python `json` module) and to a simple binary format (that would be an ideal case for the Python `struct` module). Decoding the same simple value 1,000,000 times in CPython 3.6.4 gave...

    Format  Size (B)  Iters./sec  Speed
    ------  --------  ----------  ----------
    JSON          28     205,000   5.75 MB/s
    Struct         6   2,400,000  14.4  MB/s
Of course YMMV, but even the `struct` module was only 2--12 times faster (depending on what you care about) than the `json` module in this particular case. And this is really the minimal case; you need (slow) interpreted code for more complex binary formats. Right, you can use PyPy for JIT compilation or binary modules to sidestep the interpreter overhead! The point is that it matters, of course, but not with quite the drastic improvements you'd imagine.


> "It is often the case that the object construction is more costly than the JSON parsing, and you can't fix that with binary formats."

What.

  typedef struct _some_struct_t
  {
      unsigned long some_long;
      unsigned long some_other_long;
  } some_struct_t;
  
  ...

  {
      some_struct_t foo = { 0 };

      foo.some_long = 1;
      foo.some_other_long = 2;
  }
Is somehow comparable to using JSON?


C is one of the extreme cases; that's why Cap'n Proto works pretty well in C++ and its cousins, for example (it amortizes the decoding cost into accessors, and accessors are really cheap in those languages). There are many languages and implementations where the decoding cost is not as significant.


> "C is one of extreme cases"

I would say it's the other way around.

We've had the knowledge and tools to build performant, scalable and highly maintainable systems for a while now. The learning curve is there, but that's part of the trade. We've been too occupied with reducing the entry barrier, though - the end result being people shoving JSON into places it should never have been.

JSON can absolutely be a part of a text editor's architecture - with areas that don't necessarily require near real time performance (think configuration, metrics). Anything beyond that - C structs would be a great way to go, and I don't see why there's a debate here.


Because the idea of Xi is that it can support different frontends for different platforms, and that probably wouldn't work out too well if they all had to be in C.


The Xi backend is already written in Rust, a relatively low-level language with a somewhat C-like FFI/ABI. The choice to use JSON in time-critical code, when more performant alternatives are available, seems to me like a mistake.


The whole point is that JSON is not in the time-critical path.


This is a super flawed argument. Clearly FlatBuffers and even Protocol Buffers are faster to serialize and deserialize than JSON, regardless of what you benchmark in Python.


And for the number of messages being sent, the speed difference is irrelevant.

This is the same conclusion sqlite developers came to. They tested turning JSON column types to binary and the speed difference was not large enough to warrant maintaining that code so they kept the data in JSON.


If the speed difference is irrelevant, why are they struggling with it?


Because most implementations are reasonably efficient. Swift's default one apparently is not.


Python might be the one language that isn't true for. In my Python experience, the Google protobuf library is frustratingly slower than the built-in json module for any data structures I've cared about, which is why things like pyrobuf exist to solve that performance problem: https://github.com/appnexus/pyrobuf


So you claim that decoding FlatBuffers and protobuf is faster than decoding with `struct`? I'm pretty much aware of various flaws and even stated some, but I hardly buy that claim without a separate benchmark (which I really welcome, by the way).

At least I fully understand what the `struct` module actually does under the hood---it sorta compiles the format to a list of fields and "interprets" it with a dead-simple VM in C. Oh, and of course I've used the precompiled `struct.Struct` for that reason (but it was only 20% faster). Anyway, this arrangement is typical for most schematic serialization formats in any language: a bunch of function calls for gluing the desired format together, plus a set of well-optimized core functions (not necessarily written in C :-). Hence my justification that this is close to a "bare-bones" serialization format.


> with all other conditions the same, the difference between binary formats and JSON would be in the noise.

But, seemingly, in this case the conditions aren't the same.


Are you using the slowness of Python's `struct` module to prove that binary formats in fast languages are slow?

I've benchmarked Cap'n Proto vs. JSON for Modern C++ (in C++), and Cap'n Proto was something like 8 times faster.

If you're struggling with JSON performance, how is moving to a binary format like Cap'n Proto (or FlatBuffers, etc.) not a better solution?


It seems they're getting parsing times 1,000x slower than any other parser, 10,000x slower than simdjson. The complaint is understandable, but ironic :)


These numbers are not quite right for a variety of reasons (performance measurement methodology is hard), but to do something more of an apples-to-apples comparison, it's about 50x slower than serde in Rust. That's still a lot, obviously.


But... how else will people who have never seen a byte array or had to flip endianness be able to write plugins for my text editor?


Because JSON encoding/decoding was not found to be a typical performance bottleneck, and because JSON is supported in virtually every programming language (Xi allows you to write frontends in pretty much any language you want).


After spending most of a year doing deep surgery on systems that used CBOR extensively, I can report that the common CBOR parsers are not faster than common JSON parsers; surprisingly, they are actually slower. CBOR is also not easier; it's much less widely supported, and you need a separate debugging representation. It does have three real advantages over JSON: it supports binary strings, it's a monument to Carsten Bormann's ego, and data encoded in CBOR takes slightly fewer bytes than the same data encoded in JSON. (The second is only an advantage if you're Carsten Bormann.)


There are a few more advantages to CBOR:

1) there's a distinction between integers and floating point values;

2) you can semantically tag values (yes, this is a text string, but treat it as a date; this is a binary string, but treat it as a big number; etc.);

3) you can have maps with non-text keys.
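
To make 2) and 3) concrete, here's a small hand-assembled CBOR item following the major-type rules in RFC 7049 (an illustrative sketch, not output from my library):

  // A one-pair map whose key is an integer (point 3) and whose value is
  // a byte string semantically tagged as a big number (point 2) -- neither
  // is expressible in plain JSON.
  const unsigned char doc[] = {
      0xA1,             // map with 1 key/value pair (major type 5)
      0x01,             // key: unsigned integer 1 (major type 0)
      0xC2,             // tag 2: treat the following byte string as a bignum
      0x43,             // byte string of length 3 (major type 2)
      0x01, 0x00, 0x00  // the big number 0x010000 = 65536
  };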

I'm not sure what Carsten Bormann's ego has to do with CBOR, but I found RFC 7049 one of the better-written specs, with plenty of encoding examples. It made it real easy to write an encoder/decoder [1] and use the examples as test cases.

[1] https://github.com/spc476/CBOR


All three of those could be advantages under some circumstances, but I've more often found them to be disadvantages. What do you do with maps with non-text keys when you're deserializing in JS or Perl? For that matter, what do you do in Python when the key of a map is a map? When you have a date, do you decode it as a datetime object, as a text string, or as some kind of wrapper object that gives you both alternatives?

I agree that having lots of examples in the spec is good.


> What do you do with maps with non-text keys when you're deserializing in JS or Perl?

Um, use another language? I use Lua, which can deal with non-text keys. As for decoding dates (if they're semantically tagged, which you can do with CBOR), I convert them to a datetime object, on the grounds that if I care about tagged dates, I'm going to be using them in some capacity.

But that's not to say you have to use the flexibility of CBOR. For me, though, having distinct integer and floating point values, plus distinct text and binary data, is enough of a win to use it over JSON.


While theoretically true, in practice the actual character parsing tends to be a small-to-negligible part of the overall time. Which leads to the measurable fact that on macOS/iOS, the JSON serialization stuff is actually one of their fastest, faster than their binary stuff.


I ran one of the Codable benchmarks in instruments, and here's what the top functions were:

  19.98 s   swift_getGenericMetadata
  19.15 s   newJSONString
  16.17 s   objc_msgSend
  15.33 s   _swift_release_(swift::HeapObject*)
  14.45 s   tiny_malloc_should_clear
  12.81 s   _swift_retain_(swift::HeapObject*)
  11.28 s   searchInConformanceCache(swift::TargetMetadata<swift::InProcess> const*, swift::TargetProtocolDescriptor<swift::InProcess> const*)
  10.46 s   swift_dynamicCastImpl(swift::OpaqueValue*, swift::OpaqueValue*, swift::TargetMetadata<swift::InProcess> const*, swift::TargetMetadata<swift::InProcess> const*, swift::DynamicCastFlags)
So it looks like a lot of the time is going into memory management or the Swift runtime performing type checking.


Yeah, I've done some analysis, it's creating a ton of objects to conform to the Codable protocol, and a lot of those objects are for codingPath, which is updated for basically every node in the tree. It's not a mystery, we just don't know the best way to fix it.


Is there a reason you need to use Codable? Sorry if this sounds uninformed, I haven't taken that much time to look at what you're doing exactly (I just ran https://github.com/jeremywiebe/json-performance).


That's one of the things we're considering. But it is by far the most idiomatic way to do things in Swift. One of the alternatives we're considering is implementing the line cache (including the update protocol) in Rust, which would be a huge performance jump.


No, I don't think the project needs to use Codable. The point of that benchmark was to evaluate Codable's performance under Swift 5. It was posited that performance was much improved. The benchmark shows that it has improved a little, but not significantly.

Codable is desirable because it encodes/decodes directly to structs vs. manually picking fields out of dicts.


Can you see any differences with different levels of optimization? I recall a presentation at some point where the old Obj-C-style compiled code did a lot of checks before and after calling a method ("does this object listen to this message?"), while with an optimization option enabled (whole-module optimization?) these calls could be optimized out. That is, with Swift they can make the resulting machine code do less, er, "checking for safety", so to speak.


This was done at -O I believe (whatever the default is for "Profiling" in Xcode). This is anecdotal, but the fact that the code isn't littered with _swift_retain/_swift_release calls probably means that most of the standard reference-counting boilerplate has been optimized away.


Yeah, Swift-most-everything is pretty slow, but particularly parsing/generating. Pre-Swift Foundation serialisation code was already...majestic, and in the Swift conversion they've typically managed to slow things down even further. Which didn't seem possible, but they managed.

I have given a bunch of talks[1] on this topic, there's also a chapter in my iOS/macOS performance book[2], which I really recommend if you want to understand this particular topic. I did really fast XML[3][4], CSV[5] and binary plist parsers[6] for Cocoa and also a fast JSON serialiser[7]. All of these are usually around an order of magnitude faster than their Apple equivalents.

Sadly, I haven't gotten around to doing a JSON parser. One reason for this is that parsing the JSON at character level is actually the smaller problem, performance-wise, same as for XML. Performance tends to be largely determined by what you create as a result. If you create generic Foundation/Swift dictionaries/arrays/etc. you have already lost. The overhead of these generic data structures completely overwhelms the cost of scanning a few bytes.

So you need something more akin to a streaming interface, and if you create objects you must create them directly, without generic temporary objects. This is where XML is easier, because it has an opening tag that you can use to determine which object to create. With JSON, you get "{", so basically you have to know what structure level corresponds to what objects.

Maybe I should write that parser...

[1] https://www.google.com/search?hl=en&q=marcel%20weiher%20perf...

[2] https://www.amazon.com/gp/product/0321842847/

[3] https://github.com/mpw/Objective-XML

[4] https://blog.metaobject.com/2010/05/xml-performance-revisite...

[5] https://github.com/mpw/MPWFoundation/blob/master/Collections...

[6] https://github.com/mpw/MPWFoundation/blob/master/Collections...

[7] https://github.com/mpw/MPWFoundation/blob/master/Streams.sub...
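
For illustration, a minimal C++ sketch of such a streaming interface, with invented names (the parser emits events in document order and the consumer builds its target objects directly, so no generic temporary dictionaries or arrays exist):

  #include <string_view>

  // Hypothetical event interface; a consumer implements it and constructs
  // its own objects as events arrive, skipping intermediate collections.
  struct JsonEvents {
      virtual void start_object() = 0;
      virtual void end_object() = 0;
      virtual void start_array() = 0;
      virtual void end_array() = 0;
      virtual void key(std::string_view name) = 0;
      virtual void string_value(std::string_view value) = 0;
      virtual void number_value(double value) = 0;
      virtual void bool_value(bool value) = 0;
      virtual void null_value() = 0;
      virtual ~JsonEvents() = default;
  };

As noted above, the consumer only sees "{" when start_object fires, so it has to track which structural level corresponds to which of its own types.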


That resonates well with my conclusions that led to the Replicated Object Notation project [1]. If the parser creates an AST or some number of dictionaries or some other bullshit... "now you have two problems", that's it.

I settled on a tabular-log format, which is streamed and immediately consumed most of the time, no intermediate object structures.

Then, that "text vs binary" distinction became mostly moot. The binary is slightly more efficient, but grossly less readable, so no big gain, unless at grand scale.

[1] http://replicated.cc


What are you using? Have you tried NSJSONSerialization? It’s quite fast (am very curious how it shows in these benchmarks), but I don’t think it does the fancy Codable stuff.


You might want to check out the benchmark I wrote to compare exactly that.

https://github.com/jeremywiebe/json-performance


Swift has JSONEncoder and JSONDecoder types to do Codable, though internally they have to encode to/decode from the Foundation objects that JSONSerialization produces.


Hey Raph, have you seen https://github.com/bmkor/gason? Seems like a low-cost bridge to a high-performance C++ implementation.


Hadn't seen that particular wrapper, but if we're going to take on an FFI solution, we're more likely to use Rust for this, and implement more logic than just JSON parsing.


One of the two authors here. Happy to answer questions.

The intent was to open things up but not publicize them at this stage, but Hacker News seems to find stuff. Wouldn't surprise me if plenty of folks follow Daniel Lemire on GitHub, as his stuff is always interesting.


I see that you are using MMX intrinsics directly, like _mm_sub_pi8, but you are never calling _mm_empty (https://software.intel.com/sites/landingpage/IntrinsicsGuide...) as required by the SysV AMD64 ABI (and pretty much all other ABIs out there).

I think the behavior of all the code that touches this is undefined (it breaks the calling convention of the ABI), and while this often results in corrupted floating-point values in registers, maybe you won't see much if you are not using the FPU. Still, since the function is inline, the chances that this gets inlined somewhere where it could cause trouble seem high.

You might want to look into that.
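
For reference, a minimal sketch of the rule in question, assuming MMX instructions really are being emitted:

  #include <cstring>
  #include <mmintrin.h>

  // After any MMX instruction the x87 register stack is in MMX state;
  // _mm_empty() (the EMMS instruction) must execute before ordinary
  // floating-point code runs, or FP results are garbage.
  void sub_bytes(const void *a, const void *b, void *out) {
      __m64 va, vb;
      std::memcpy(&va, a, 8);
      std::memcpy(&vb, b, 8);
      __m64 r = _mm_sub_pi8(va, vb);  // MMX: puts the FPU into MMX state
      std::memcpy(out, &r, 8);
      _mm_empty();                    // leave MMX state before returning
  }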

Also, I wish this would all be written in Rust, there is great portable SIMD support over there. Might make your life easier trying to target other platforms.

EDIT: as burntsushi mentions below, that's not available in stable Rust, but if you want to squeeze out the last ounce of performance from the Rust compiler, chances are you won't be using that anyway.


I would be extremely surprised if we were somehow accidentally using MMX; it's not our intention. It is my belief that we are using only AVX2, which, like the 19-year old SSE/SSE2 extension, has its own registers that are independent of the x87 floating point set.

If you review our codebase and verify that we are not inadvertently using a 22-year-old SIMD extension but still have undefined behavior, please file an issue on GitHub.

I'm admiring Rust from a distance at this stage. I am comfortable enough with writing bare intrinsics and slapping a giant #ifdef around stuff.


> there is great portable SIMD support over there

It's not stable yet. The only stable SIMD stuff Rust supports is access to the raw x86 vendor intrinsics.


If they want to squeeze out the last ounce of performance out of the Rust toolchain it probably wouldn't make sense to use stable Rust anyways, so I don't think that's a big downside.

Also, they are already relying on "unstable" (non-standard conforming) C++ features (e.g. the code uses non-standard attributes behind macros, etc.). Using nightly Rust isn't worse than that per se.

Using Rust does have downsides. For the type of code they are writing, the main downside would probably be losing an alternative GCC backend, which might or might not be better than LLVM for their application.

Still, they would win portable SIMD and being able to target not only x86_64 but also ARM, Power, RISCV, WASM, etc., which is always cool to show in research papers.

I'm not suggesting that Rust is a perfect trade-off, only that it's an interesting one depending on what they want to do.


Sure. I'm just trying to be careful that we aren't going around advertising features that aren't stable yet without specifically saying that they aren't stable. It leads to a disappointing expectations mismatch.

I do think stable Rust is perfectly capable though. I don't generally target nightly Rust and am happy with how much I can squeeze out of it. :-) (Check out the benchmarks for the memchr crate, which use SIMD internally and should be competitive with glibc's x86_64 implementation that's in Assembly.)


Most programming languages don't have a stable/unstable distinction at all, and unless one is "in the Rust loop", stating something like "_unstable_ Rust can do X" probably won't mean what the reader thinks it means.

Unstable Rust sounds very dangerous, like something that breaks every day. Definitely more dangerous than stable Rust.

Yet if one is in the Rust loop, one knows that this is often not the case. I've been using some unstable features on nightly, like const fn, specialization, function traits, etc., for years (3 years?), and I've never had a CI build job fail due to a change in the implementation of these features.

Yet some features in stable Rust, like Rust 2018 uniform_paths or stable SIMD, have caused many build-job breaks and undefined behavior due to bugs in the compiler over the last months.

So whatever stability means, it does not mean "using this feature won't result in your code breaking". It also doesn't mean "you have to use a nightly toolchain to use the feature".

An unstable Rust feature is more like a "compiler extension" in C / C++. It is just something that hasn't fully gone through the process of standardization.

I don't think it is a fair characterization that code that uses these extensions is not Rust. Pretty much all C++ code uses compiler extensions, and nobody says that this code is not C++ just because it uses one of them.

Explaining all of this when telling someone "Rust is a technology that allows you to solve problem X nicely" isn't helpful.

Many people vocal about Rust seem to think that Rust is an end in itself. The goal isn't solving a problem, but using Rust to solve it. I see many of these people argue that unstable Rust isn't Rust, and that people should be using stable Rust, etc. For most people, using Rust isn't the goal; solving their problem is. Whether one or many compiler extensions have to be enabled for that is pretty much irrelevant to them. Sure, it would be nice if one didn't need to do that, but it isn't a big deal either. The big embedded community is living proof of that. Only a small minority of this community cares about the language enough to participate in its evolution. Most people don't care enough about that; they have more interesting problems to solve.


You happened to pick some features where there hasn’t been much development work, since other things were prioritized. And one feature that’s not mostly compiler internal.

This is not the general case for unstable features. And promoting the use of them too heavily can cause a lot of problems. It undermines trust in the language, especially given rust’s pre-1.0 reputation (which was well deserved at the time.)

Stuff that’s unstable isn’t in Rust; that’s why it can be changed or even wholesale removed at any time. The distinction is very important.


> Stuff that’s unstable isn’t in Rust; that’s why it can be changed or even wholesale removed at any time. The distinction is very important.

I've seen you talk about "writing an OS kernel in Rust", but never heard you phrase that as "writing an OS kernel in _unstable_ Rust". I've never seen you stating: "correction: what you are using for embedded development, networking, etc. is not Rust, _but unstable Rust_" on any of the many blog posts, announcements, news, etc. about these topics over the past couple of years. I've seen you reply with that argument every now and then, when someone like me downplays the importance of the distinction, but I've never seen you address the source of that behavior.

If the distinction between Rust, and unstable Rust, is important. Why are the people at the top not making it? If you are working on the compiler, servo, etc. you are actually not programming in Rust, but in _unstable_ Rust all of the time. Are they hypocrites? I don't think so.

If I reflect on why I feel that this distinction is not important, the first thing I realize is that I do think the distinction is important. But this distinction is not binary _to me_, as opposed to how you and burntsushi are putting it.

As you mentioned, some unstable features change more than others. There is a wide range in how much continued breakage using certain unstable features causes downstream users. Some features break every day; some features haven't broken anything in 3 years.

Are unstable features that haven't broken anything in 3 years stable? No, by definition, they aren't.

Are they practical to use? The answer isn't yes or no; the answer is "it depends on how much breakage you are willing to accept". We upgrade our C++ compiler ~twice per year, and even though we only write 100% standard-compliant code, we always have to fix breakage due to the upgrade. Yet I wouldn't say that standard-compliant C++ is an unstable programming language.

So, if I consider twice-yearly breakage stable enough for our professional C++ projects in practice, why would I judge Rust's stable/unstable features using a different bar? This does not mean that I believe that using unstable (or only stable) features will never cause breakage, since that is impossible.

I've had stable Rust CI jobs break because the standard library added some new trait method, and that caused an ambiguity that broke my stable Rust code. The answer was: your code was correct, but we are allowed to break it in this way.

In my opinion, it is not "stable vs. unstable", but rather what degree of stability your project needs, where choosing more stability than it needs puts it at a technical disadvantage. It doesn't matter whether one is talking here about Rust unstable features, or about using the super-unstable next-gen stable-Rust web framework.

The stability line does not lie where I or anybody else decides to arbitrarily put it. It lies exactly on the amount of stability that a particular project can tolerate, and it is up to the judgement of the developers of that particular project to find out where that is.

Telling someone that a particular project is not Rust because the stability line for that project does not fall where your line does feels just wrong. Particularly when those doing it don't make that distinction about themselves and the projects they work on.


You are making a mountain out of a molehill. We're on HN, not some Rust community space. Context is paramount. If saying, "portable SIMD is available on nightly Rust" or "portable SIMD is available as an experimental extension in Rust" feels better to you than "portable SIMD is available on unstable Rust," then go for it.

There's nothing binary about my position. My only point is to mitigate an expectation mismatch. People get pissed off when they're led to believe that a feature is baked and ready to use, when it actually isn't. Honestly, you've turned a simple correction into a ranty spiraling sub-thread. It's obnoxious.

You're also getting way too hung up on what stability means. "stability" in Rust, in the context of API availability, is a statement of intent and commitment, not a statement of how often a build will break. Of course, there may be a strong correlation between them!


I'm writing an IoT library for devices with tiny microprocessors and have been sending data as JSON or BSON (binary JSON). On the backend, I've been storing reports from IoT devices into a database (MariaDB on AWS). How crazy would it be to just store all the data as JSON files on disk (or in an S3 bucket) and then batch-process them when I need to perform data analysis? If a million devices send dozens of status reports per day, that's going to be a crapton of files... but that might be faster to process than querying the database.

If you or anyone else has some opinions on this, please let me know! I'd really like to learn how people do this type of analysis at scale.


Reading lots of small files on S3 or local filesystems is tricky. A million devices with a dozen files each: let's say 12 million files.

One thing locally is that each file takes up a full block. So even if you only need 500 bytes of data in a file, and a block is 4 KB, you've wasted 3.5 KB of space and I/O. Multiply that by a million and you're wasting gigabytes of space.

In S3, listing 12 million files takes 12 thousand HTTP requests (the max return is 1000 items). So that would take two minutes if you assume it's 10 ms per round trip. Let's say you wanted to read each file, and again each read takes 10 ms... you're looking at 1.4 days. Obviously this can be parallelized, but when you look at the raw byte size this is a huge overhead, and this is just to read one day of data.

If you concatenate the files together to get a reasonable size and number of files, raw JSON on S3 is really powerful. Point Athena at it, and you just write SQL and it handles the rest, serverless. But it does make single-row lookups more expensive (supplementing with DynamoDB could keep it serverless if single-row lookups are frequent).

Lots of optimizations will get improvements, like the Parquet format that tobilg mentioned (binary and columnar), but anything with a decent file size will work.


Yeah, this is what Kinesis Firehose is for. Send all of your messages there and it will batch them to S3.


You may enjoy this:

The best way to not lose messages is to minimize the work done by your log receiver. So we did. It receives the uploaded log file chunk and appends it to a file, and that's it. The "file" is actually in a cloud storage system that's more-or-less like S3. When I explained this to someone, they asked why we didn't put it in a Bigtable-like thing or some other database, because isn't a filesystem kinda cheesy? No, it's not cheesy, it's simple. Simple things don't break.

https://apenwarr.ca/log/20190216


We're using AWS Kinesis delivery streams to batch incoming JSON messages from IoT devices into Parquet files in S3. Those can directly be read by different AWS services like Redshift, EMR or Athena...


We use Athena for all our robotics data, which we ETL into JSON. It's fantastic for queries that are simple time-slice queries, as most are because sensor data is inherently time-series. When more complicated joins are necessary, the performance is there across terabytes, and the cost is very very low, $5 per terabyte scanned (storage costs are another thing).


What bothers me about Kinesis is that it is prohibitively expensive at scale if you don't compress your data before putting it into Kinesis.

But if you want to use the nice features like parquet conversion your data can't be compressed.

If it could handle compressed data at the same price I would use a lot more of it.


You’ve kinda just described AWS Athena.


This comment needs to be higher up; Amazon has a service for doing just this: dumping "dumb" files (like JSON, CSV, etc.) into S3 buckets and performing SQL queries on them. No need to think about how to store things for future querying.


I've used Athena really effectively to solve similar problems. If your data storage is relatively small and/or your queries relatively infrequent, JSON can be a good fit. As one of those dimensions expands, you can decrease costs/increase performance by converting to Parquet and compressing.


I am replying to you as an engineer at an IoT company that provides SaaS in AWS for the data our devices produce. To solve this problem, we transmit our data in a proprietary "raw" binary format that then gets parsed into a protobuf. All data for a given UTC day is appended to this protobuf file and hosted in S3. Retrieving data requires downloading the protobuf file from S3, unmarshalling the protobuf, and finding the entry you care about.


If you are considering using plain files instead of a DB server, you could try a compromise and use an embedded key-value store like RocksDB, LevelDB, BadgerDB etc.

It's local storage only, limited query capabilities depending on the DB, but should be extremely fast.


Why not use a timeseries database, like http://btrdb.io?


well if you need indexed lookups, then use a database

if you're doing "table scan" processing of entire datasets, sure just-a-bunch-of-files would work too.

Databases can be surprisingly fast for things like that, since high performance file i/o is full of tricky/annoying stuff that databases have already optimized for.


Depending on your size / budget / needs Snowflake may interest you. https://www.snowflake.com/product/architecture/.

I haven't used it but have been given a presentation by them on it, and it was very very good.

They store data in S3 and use FoundationDB for indexes. You can feed it JSON and it'll index it and let you query it on a massive scale shockingly fast.

Obviously they are not aimed at small hobby projects but if your project has money / serious product depending on your needs it's well worth looking at.

On the S3 cheaper / smaller end you can batch up data daily / weekly etc. So the landing bucket acts as a queue that gets processed creating daily batch files from the small files aggregated together. You can then take the daily batches to create weekly batches etc etc, essentially partitioning. This will reduce the total number of files needed to query. If you use deterministic names based on how you plan to query this can also reduce the number of files you need to list / parse. When batching / re-partitioning the data you can also use the Apache Parquet format to compress a little better + also import in some of the querying tools out there.


I've written my fair share of performant code over the years, but this is some next-level shit. I've been reading it for the last few hours. The only question I have is: what is the term for that place considered two degrees past black magic? Since you live there, I have to assume you know the name.


It's not magic. The things that enable writing this kind of code are essentially practice and specialization. Most people have to write code that works on all architectures and where performance is probably less critical than having a simple, workable codebase - so the opportunities to practice writing SIMD code are rare under those constraints.

Unfortunately, the fragmentation of SIMD standards and various pitfalls in implementation (the much ballyhoo'ed "running AVX will make your processor clock to half its speed or something" exaggerations, for example) make a lot of people nervous about putting in the time to commit to developing expertise, which is a shame.


Not really a question, but if you ever get to the point of wondering what a good next challenging project would be, consider generalizing some of these techniques into a next generation Yacc / Bison replacement.

Something that can take generic grammar rules and turn them into a high-performance parsing engine.

It wouldn't have to support every possible grammar or option. JSON isn't that complex a language, but even a limited set of grammar options in exchange for a performant parser could be of benefit for a very large set of problems.


It's on the list as a research project. It's not obvious to me at this stage that the bottlenecks for more advanced parsers are necessarily going to be in the same place as they are for JSON. It might make more sense to look at a state-of-the-art parser and see if we can contribute a few tricks instead.


That sounds interesting. Where is the best place to follow your future work? Your & Daniel Lemire's Github, or elsewhere?


I might go so far as to post to branchfree.org, and Daniel posts at https://lemire.me/blog/ so either of those, plus github, ought to cover it.


I'm just starting to look at Tree-sitter; that might qualify as a state of the art parser that could use a few tricks.


Oh now that would be an interesting tool


Any chance to have a similar thing for s-expressions? I parse GBs of them and Common Lisp reader is very slow.


Probably not too hard. It would come down to how easy it is to detect quoting conventions so you don't accidentally parse () chars in strings. JSON is medium-easy. I don't know where the canonical definition of the s-expressions you're using comes from (is it just Common Lisp?), so I don't know how this works.

We'd like to have some more examples of formats people care about - I'm interested in generalizing this work. So if you want to follow up with more detail, please do.


As a Clojure user, I care about EDN, but it's probably too niche to spend your time on.

https://github.com/edn-format/edn


Yes!!! A generalization for other kinds of simple grammars would be awesome.

On another note: as a JS programmer who deals with a ton of JSON, I would love V8 to adopt some of these tricks in its JSON parser.


Any technical blog articles you have that explain how you were able to achieve these incredible performance gains?

Kudos on some incredible work! :)


Thank you. More description of the work will be forthcoming but please be patient (for non-sinister reasons).


Jsmn is already pretty fast and simple. How the hell can this be a lot faster than that? I'm very curious.


The big difference between RapidJSON and sajson is surprising to me. When I benchmarked them, their performance was comparable: https://github.com/project-gemmi/benchmarking-json . Did you use RapidJSON in full-precision mode?

By the way, nativejson-benchmark (from RapidJson) has a nice conformance checker that tries various corner cases. But you probably know it.


More performance details beyond what's on the site will follow (in a while).

We use RapidJSON in the high-performance mode not the funky mode that minimizes FP error (which is some astounding work - I had no idea that strtof was so involved!). Number conversion is not our #1 focus - doing it well is nice, but all implementations have access to the same FP tricks, so you don't really learn much by going wild on this aspect.

At least, you don't unless FP conversion is your focus, in which case you should share your FP conversion code with everyone!


You should take a look at std::from_chars. IIRC it can completely destroy the other parsers in the stdlib, because it's not required to take locale into consideration.

https://en.cppreference.com/w/cpp/utility/from_chars
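
A minimal usage sketch (C++17; note that the floating-point overloads shipped late in some standard libraries):

  #include <charconv>
  #include <cstdio>
  #include <system_error>

  int main() {
      const char text[] = "3.14159";
      double value = 0.0;
      // from_chars is locale-independent and non-allocating, unlike strtod.
      auto result = std::from_chars(text, text + sizeof(text) - 1, value);
      if (result.ec == std::errc()) {
          std::printf("parsed %f\n", value);
      }
  }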


I recently saw people using GPUs to parse CSV files. There are also articles on using GPUs to parse JSON. Do you think GPUs can perform well on this type of task?


I'm not aware of an article that covers an actual implementation or that has a benchmark of performance. As for GPGPU: it's possible. Our first stage of matching is very parallel. But Amdahl's law would, of course, suggest that the serial parsing step would dominate.

I'm interested in this: some aspects of our very serial 'stage 2' (the parsing step) could be made parallel. This would be very interesting. Unfortunately I personally cannot be made parallel, so working on this needs to go into a big queue with a lot of other work.
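
To give a flavor of why that first stage parallelizes so well, here is an illustrative AVX2 fragment (a sketch of the general technique, not simdjson's actual code): compare 32 input bytes against one character at once and keep the matches as a bitmask.

  #include <cstdint>
  #include <immintrin.h>

  // Compare 32 input bytes against a single character in one shot and
  // return a 32-bit mask with one bit per matching byte position.
  std::uint32_t match_mask(const std::uint8_t *in, char c) {
      __m256i chunk = _mm256_loadu_si256(reinterpret_cast<const __m256i *>(in));
      __m256i eq = _mm256_cmpeq_epi8(chunk, _mm256_set1_epi8(c));
      return static_cast<std::uint32_t>(_mm256_movemask_epi8(eq));
  }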


How hard would it be to extend the parser to handle arbitrary-precision numbers? Strictly speaking the JSON spec does not require numbers to fit into 64-bit ints / doubles.


Daniel Lemire did most of the work on the number handling, but our general approach was to try to do work that's similar to what the bulk of other libraries do. I believe pretty much everyone throws oversize numbers on the floor.

I don't think it would be hard at all; it would just be extra effort that wasn't needed to run obvious comparisons.


Jackson benchmarks? I've heard it's twice as fast as rapidjson.


Why did you decide to write this? What was the motivation?


Honestly? I was trolled into it. :-) Unemployed people do weird things.

I can't speak for Daniel's motivation.


Any plans for an Android wrapper?


This would imply an ARM port, I guess, as x86 android isn't much of a thing anymore AFAIK.

I don't think either of us know much about android - not enough to do that. But an ARM port is very interesting.

Since I'm no longer an Intel employee I don't see why I shouldn't skill up and do a Neon port (I got interested in SVE, but since ARM doesn't seem to want to bother releasing cores that run SVE, I'm not going to go too far down that path right now). Neon, on the other hand, is in tons of places. As far as I know all the required permutes, carryless multiplies and various other SIMD bits and pieces are there on Neon. So it's a simple matter of porting.


If you're working with JSON objects with sizes on the higher end, quite often you're not going to need the entirety of them, just a small part. If that is the workload, the thing to do is simply parse as little data as possible: skip the validation, locate the relevant bits, and only then do the full parsing, validation and all. In this case, optimizing the JSON scanner/lexer gives a much greater improvement than optimizing the parser.

Though this job is trickier than it may look. The logic to extract the "relevant" bits is often dynamic or tied to user input, but for the scanner/lexer to be ultrafast it has to be tightly compiled. You can try jitting, but libllvm is probably too heavyweight for parsing JSON.


Jitting is a common tool that people seem to reach for whenever they are parsing or lexing anything at any time. It's really not necessary; there are plenty of fast search methods out there.

JIT approaches make a lot of sense for lex/yacc and their numerous descendants, as these typically need to put a lot of extra logic into the process of parsing. You don't need to JIT just to look up some strings and/or parse a fairly simple hierarchical structure.


Parsing itself doesn't need jitting, but as soon as you start to use the parsed data to interface with some typed containers, the data plumbing consumes much more time than parsing does and drags down all the optimization. For parsing to interact well with static languages, jitting is a possible solution to look at.


I agree that's a good strategy for big JSON. Do you know of any such "lazy" parsers?

I think the problem is that to extract arbitrary keys, you really need to parse the whole thing, although you don't need to materialize nodes for the whole thing.

But if you have big JSON with a given schema, you may be able to skip things lexically. You basically need to count {} and [], while taking into account " and \ within quoted strings.

That doesn't seem too hard. I think a tiny bit of http://re2c.org/ could do a good job of it.
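
A hand-rolled sketch of that skip (instead of re2c, just to show the idea):

  #include <cstddef>
  #include <string_view>

  // Advance past one complete JSON object or array starting at `pos`,
  // tracking nesting depth and ignoring braces inside quoted strings.
  std::size_t skip_value(std::string_view s, std::size_t pos) {
      int depth = 0;
      bool in_string = false;
      for (; pos < s.size(); ++pos) {
          char c = s[pos];
          if (in_string) {
              if (c == '\\') ++pos;                // skip escaped character
              else if (c == '"') in_string = false;
          } else if (c == '"') {
              in_string = true;
          } else if (c == '{' || c == '[') {
              ++depth;
          } else if (c == '}' || c == ']') {
              if (--depth == 0) return pos + 1;    // end of this value
          }
      }
      return pos;  // truncated input
  }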


For node.js, I wrote a lib that can selectively parse JSON subtrees:

https://gitlab.com/philbooth/bfj

The specific function of interest here is `bfj.match`, which takes a readable stream and a selector as arguments:

https://gitlab.com/philbooth/bfj#how-do-i-selectively-parse-...

It still walks the full tree like a regular parser, but just avoids creating any data items unless the selector matches. Though there is an outstanding issue to support JSONPath in the selector, currently it only matches individual keys and values.


It’s not exactly the lazy parser you describe, but Sparser[1] builds filters to exclude json lines/files that can’t contain what you’re looking for, and only parses those that might.

The Morning Paper’s writeup[2] from last year provides a good summary

[1]: http://www.vldb.org/pvldb/vol11/p1576-palkar.pdf [2]: https://blog.acolyer.org/2018/08/20/filter-before-you-parse-...


This work is somewhat orthogonal to ours as it assumes that you can locate JSON records without doing parsing; if I remember correctly, it groups JSON records as lines. If your JSON has been formatted to conform to this, I suppose it would be quite effective.


That's what our first stage does, pretty much. I would imagine we do it way faster than re2c would do it.

Parsing the entire document lock, stock and barrel is an easier thing to write about and benchmark. The problem with skipping around and pulling out bits of JSON in a benchmarking framework is that attempting to present such data often amounts to "hey, we asked ourselves a question and then we got a really good answer for it!". It's hard to picture what a 'typical' query for some field over a JSON document would look like. Conversely, it's pretty easy to know when you've finished parsing the Whole Thing.


> It's hard to picture what a 'typical' query for some field over a JSON document would look like.

Exactly. A "query" would have to define not only the path, type of the field in the source data but also the type/interface of where you want to put that data. Combining dynamic queries and typed data you get a fairly tricky problem, which is why I said this is tricky. I worked on a similar thing for protobuf and jitting was a solution I looked into (in that project libllvm was too unwieldy to use).


I'm not sure what you mean by arbitrary. Parsing in this case means, e.g., turning a string of digits into an IEEE 754 float in memory. I think this project is meant to accelerate that part with SIMD, but a greater improvement can be obtained by simply not doing it for as much data as possible. If the actually materialized data constitutes a small part of the original, there should be ways to do minimal work for the rest.


In JVM-land, circe-fs2 [1] is a streaming parser.

[1]: https://github.com/circe/circe-fs2/blob/master/README.md


Depending on the use case, the JSON Lines format can make this a pretty simple task! Obviously it has to fit in with one's data structure, though.


Number handling looks like it would be a problem. There are test suites for JSON parsers, and lots of parsers fail a lot of these tests. Check e.g. https://github.com/nst/JSONTestSuite which checks compliance against RFC 8259.

Publishing results against this could be useful, both for assessing how good this parser is and for establishing and documenting any known issues. If correctness is not a goal, this can still be fine, but finding out your parser of choice doesn't handle common JSON emitted by other systems can be annoying.

Regarding the numbers, I've run into a few cases where Jackson being able to parse BigIntegers and BigDecimals was very useful to me. Silently rounding to doubles or floats can be lossy, and failing on some documents just because a value exceeds max long/int can be an issue as well.


> We store strings as NULL terminated C strings. Thus we implicitly assume that you do not include a NULL character within your string, which is allowed technically speaking if you escape it (\u0000).

I've lost count of the broken JSON parsers that all fail on that.


Yeah, this is unforgivable, and for me makes the whole speed argument void.

Edit: to be fair, they handle a couple of other things which many similar libraries ignore. I particularly like the support for full 64-bit integers. And at least they document their limitation on NULL bytes.


"Unforgivable" is a bit strong. I don't think this is something which invalidates our entire approach - nothing in the algorithm depends on this behavior as the \0 chars don't appear until quite late. Even then, we are not dependent on sighting a \0 in our string normalization and as such we can probably just store a offset+length in our 'tape' structure rather than assuming we have null terminated strings.

Please add an issue on Github.

Edit: I went ahead and added an issue. Seems like something we should fix.
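
A sketch of the offset+length idea (illustrative, not the current tape layout), which would let escaped NULs survive:

  #include <cstdint>
  #include <string_view>

  // Instead of NUL-terminated copies, record where each string lives in
  // the parser's string buffer; an explicit length makes '\0' bytes legal
  // content.
  struct StringRef {
      std::uint32_t offset;  // byte offset into the string buffer
      std::uint32_t length;  // length in bytes, no terminator required
  };

  inline std::string_view view(const char *buf, StringRef s) {
      return std::string_view(buf + s.offset, s.length);
  }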


I feel like if you need to parse Gigabytes per second of JSON, you should probably think about using a more efficient serialization format than JSON. Binary formats are not much harder to generate and can save a lot of bandwidth and CPU time.


I have in the past parsed terabytes of JSON. The specific use case was analysing archived Reddit comments. The Reddit API uses JSON, and somebody [1] runs a server that just dumps them in a file, one line of JSON per comment, and offers them for download (compressed, obviously). So now you end up with Gigabytes of small JSONs per month, and anything you do will be quickly dominated by JSON parsing time.

You could store them in some binary format, but the API response format changed over the years with various fields being added and removed, and either your binary format ends up not much better than JSON or you end up reencoding old comments because the API changed.

1: http://files.pushshift.io/reddit/


The parsed format in tape.md is quite close to the FlatBuffers format. FlatBuffers can encode any JSON file just fine. The parse time is immediate and requires no extra memory.

It's a great way to store big JSON files where you only want to access a subset of the data very quickly and not load the whole file into memory.

https://google.github.io/flatbuffers/


> either your binary format ends up not much better than JSON or you end up reencoding old comments because the API changed

There are other options too, e.g., storing the schema separately from the records (then batching records with identical schemas in compact binary files) and defining migration rules between different schemas (e.g., if schema A has required field "foo" while schema B has required field "foo" and optional field "bar", then data which follows schema A can be trivially migrated to schema B at read time without needing to re-encode on disk).

https://avro.apache.org/docs/current/


Maybe they want to convert incoming JSON to a binary serialization format to save bandwidth, storage and CPU time on the rest of the pipeline ;)


That’s a nice sentiment but we don’t always get to choose.


I agree. But binary serialization is very complicated for very little gain. It would make it impossible to do things like opening the JSON file in an editor to change some property names. So watch out for premature optimization.


What if you're ingesting thousands or millions of small feeds? You might not have much control or desire to dictate format to your clients


Yeah, not everyone is in control of the input data format when using parsing libraries; I'd even say the majority of people aren't.


For storing stuff yourself, sure, but as a web developer, most data I consume is JSON served by some third-party REST API and the format they serve me is definitely not under my control. Anecdotally, most developers I know or have spoken to are in similar situations for a large portion of their data-processing needs (at least, for stuff that's not in a database, although even in DB's, JSON is increasingly popular for a number of reasons).

Even for output, there is the common case where your clients expect JSON because its the de facto standard and is super accessible (every language has parsers for it), so you have little choice but to serve your data as JSON.


The readme specifies that it’s not optimized for reading a large number of small files.


This would be an easy extension if you wanted to concatenate the files. The plumbing and API aren't there right now, but it isn't hard to see how to do it.


I guess the question is: what do you parse it into? I'm guessing definitely not turning objects into std::unordered_map and arrays into std::vector or some such. So how easy is it to use the "parsed" data structure? How easy is it to add an element to some deeply nested array, for example?


The ParsedJson type is immutable and is accessed via mutating iterators (up and down the tree, forward and backward through members and indices).

My immediate thought is to compare it to rapidjson, which I've used before. The paradigm of mutating iterators seems awkward at first but should be just as powerful as rapidjson's Value. For example, both approaches end up doing a linear scan to find an object member by name.

The fact that rapidjson supports mutation of Values and simdjson does not has huge implications (as mentioned in the simdjson README scope section), I suspect this tradeoff explains most of the performance differences as I know rapidjson also uses simd internally.


Is there a reason these fast JSON libraries seem to favor doing a linear scan for object representation?


Faster to build than a hash map, less code (which is also better for icache), etc.

JSON Objects tend to have few enough values that it doesn't matter a ton anyway.
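
Roughly, the representation in question looks like this (hypothetical types, not any particular library's):

  #include <cstddef>
  #include <string_view>
  #include <vector>

  struct Member {
      std::string_view key;    // points into the parsed input
      std::size_t value_index; // index of the member's value node
  };

  // Linear scan: no hash table to build, no extra allocation, and
  // cache-friendly for the small objects that dominate real JSON.
  const Member *find_member(const std::vector<Member> &obj,
                            std::string_view key) {
      for (const Member &m : obj)
          if (m.key == key) return &m;
      return nullptr;
  }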


The data is put into a "ParsedJson" object: https://github.com/lemire/simdjson/blob/master/include/simdj...


That header mentions a tape.md describing the format. It's really interesting:

https://github.com/lemire/simdjson/blob/master/tape.md
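
My rough paraphrase of the idea (the authoritative layout is in tape.md, so take the exact shape with a grain of salt): the parse result is a flat array of 64-bit words, each carrying a type tag in its high byte and a payload in the remaining bits, which is what lets a reader jump over whole subtrees without visiting them.

    #include <cstdint>

    struct TapeWord {
        uint64_t raw;
        // High byte: a type character such as '{', '[', '"', 'l' (integer), ...
        char type() const { return static_cast<char>(raw >> 56); }
        // Low 56 bits: e.g. a string-buffer offset, or for '{' the tape index
        // past the matching '}', so a whole object can be skipped in O(1).
        uint64_t payload() const { return raw & ((1ULL << 56) - 1); }
    };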


I can't speak for this project, but my own for CSV files ( https://github.com/dw/csvmonkey ) provides a high-level interface that allows the tokenized data to be manipulated in place without full decoding. The interface exported in Python is that of a plain old dictionary with one added magical semantic (lazy decode on element access). The internal representation of the parse result is a simple fixed array of (ptr, size) pairs.

Methods like this are useful for batch search / summation where only a fraction of the parsed data is actually relevant during any particular run. You'll find similar approaches used in, e.g., the row-format parser of a database like MongoDB or Postgres.
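
A minimal sketch of that representation (my illustration, not csvmonkey's actual internals): each cell is a view into the original buffer, and nothing is copied or unescaped until the cell is actually accessed.

    #include <cstddef>
    #include <string>
    #include <vector>

    struct Cell {
        const char* ptr;   // points into the original CSV buffer
        std::size_t size;  // cell length in bytes
        // Lazy decode: pay for materialization only on access.
        std::string decode() const { return std::string(ptr, size); }
    };

    using Row = std::vector<Cell>;  // a parsed row: views only, no copies

Skipping irrelevant columns is then nearly free, which is exactly the batch search / summation case above.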


into a token stream?


Isn't that just lexing?


> Requirements: […] A processor with AVX2 (i.e., Intel processors starting with the Haswell microarchitecture released 2013, and processors from AMD starting with Ryzen)


Also noteworthy that on Intel at least, using AVX/AVX2 reduces the frequency of the CPU for a while. It can even go below base clock.


iirc, it's complicated. Some instructions don't reduce the frequency; some reduce it a little; some reduce it a lot.

I'm not sure AVX2 is as ubiquitous as the README says: "We assume AVX2 support which is available in all recent mainstream x86 processors produced by AMD and Intel."

I guess "mainstream" is somewhat subjective, but some recent Chromebooks have Celeron processors with no AVX2:

https://us-store.acer.com/chromebook-14-cb3-431-c5fm

https://ark.intel.com/products/91831/Intel-Celeron-Processor...


Because someone wanting 2.2GB/s JSON parsing is deploying to a chromebook...


It doesn't seem that laughable to me to want faster JSON parsing on a Chromebook, given how heavily JSON is used to communicate between webservers and client-side Javascript.

"Faster" meaning faster than Chromebooks do now; 2.2 GB/s may simply be unachievable hardware-wise with these cheap processors. They're kinda slow, so any speed increase would be welcome.


AVX2 also incurs some pretty large penalties for switching between SSE and AVX2. Depending on the amount of time taken in the library between calls, it could be problematic.

This looks mostly applicable to server scenarios where the runtime environment is highly controlled.


There is no real penalty for switching between SSE and AVX2, unless you do it wrong. What are you referring to specifically?

Are you talking about state transition penalties that can occur if you forget a vzeroupper? That's the only thing I'm aware of which kind of matches that.
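
For readers unfamiliar with the issue, a minimal sketch of "doing it right" (compilers targeting AVX generally emit vzeroupper at function boundaries for you, so the explicit intrinsic here is mostly illustrative):

    #include <immintrin.h>

    void add_floats_avx2(float* dst, const float* a, const float* b, int n) {
        int i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
        }
        for (; i < n; ++i) dst[i] = a[i] + b[i];  // scalar tail
        // Clear the upper YMM halves so subsequent legacy-SSE code
        // doesn't incur a state-transition penalty.
        _mm256_zeroupper();
    }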


I wonder how this compares to fast.json: "Fastest JSON parser in the world is a D project?" (https://news.ycombinator.com/item?id=10430951), both in an implementation/approach sense and in terms of performance.


Will this work on JSON files that are larger than the available system memory?

Firebase backups are huge JSON files and we haven’t found a good way to deal with them.

There are some “streaming JSON parsers” that we have wrestled with but they are buggy.


Sadly it will not. Arguably we could 'stream' things, but we don't have an API or a use case for it. If you could capture your requirements and put them on an issue on Github, it would be helpful. We're not against the streaming use case, we just don't understand it very well.


Probably not. It requires a memory allocation the size of the file for parsing.

However, they do have the ability to build a tape out of the JSON and find the interesting marks. Perhaps it can be adapted into a fast parser that only parses the relevant parts but zooms through the large file in blocks.


Any chance of something similar for CSV? (full RFC-4180 including quotes, escaping etc).

Terabytes of "big data" get passed around as CSV.


CSV is on our list; this is a simpler task than JSON due to the absence of arbitrary nesting.


I doubt someone using CSV for big data is going to follow that rule...


What do you mean? It's not a rule, it's just not possible in the CSV format to have arbitrary nesting.


It's probably relevant to mention https://github.com/BurntSushi/rust-csv. It uses a state machine (which seems to be the author's expertise) to parse CSVs really fast. Based on some other work, you can do better if you use some of the new SIMD instructions.
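
A toy version of the state-machine approach (my sketch, not rust-csv's actual tables), covering the RFC 4180 quoting rules with one state per context:

    #include <cstdio>

    enum State { FIELD, QUOTED, QUOTE_IN_QUOTED };

    // Prints '|' at field boundaries and a newline at record boundaries.
    void scan(const char* s) {
        State st = FIELD;
        for (; *s; ++s) {
            switch (st) {
            case FIELD:
                if (*s == '"') st = QUOTED;
                else if (*s == ',') printf("|");
                else if (*s == '\n') printf("\n");
                else putchar(*s);
                break;
            case QUOTED:
                if (*s == '"') st = QUOTE_IN_QUOTED;  // closing quote, or half of ""
                else putchar(*s);                     // delimiters are literal here
                break;
            case QUOTE_IN_QUOTED:
                if (*s == '"') { putchar('"'); st = QUOTED; }  // "" -> literal quote
                else { st = FIELD; --s; }             // quote closed; reprocess byte
                break;
            }
        }
    }

    int main() { scan("a,\"b,\"\"c\"\"\",d\n"); }  // prints: a|b,"c"|d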


I've developed a fully RFC-compliant CSV parser with Python bindings, supporting SSE4 through AVX-512 instruction sets; however, I'm currently struggling to get it open-sourced through my management hierarchy.

But the goal of my message is not to tease you with unavailable code. It's just to say that writing a CSV parser is a lot simpler than writing a JSON parser.

So do not hesitate to write one yourself! It's easy, and a nice way to introduce yourself to SIMD instructions.
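
As a starting point, here is roughly what the first step can look like with plain SSE2 (my sketch, not the library described above): classify 16 bytes at a time into a bitmask of delimiter candidates, then do the quote-state bookkeeping on masks instead of byte by byte.

    #include <emmintrin.h>  // SSE2
    #include <cstdint>
    #include <cstdio>

    // Returns a 16-bit mask: bit i is set iff buf[i] is ',', '"' or '\n'.
    uint32_t delimiter_mask(const char* buf) {
        __m128i chunk  = _mm_loadu_si128(reinterpret_cast<const __m128i*>(buf));
        __m128i commas = _mm_cmpeq_epi8(chunk, _mm_set1_epi8(','));
        __m128i quotes = _mm_cmpeq_epi8(chunk, _mm_set1_epi8('"'));
        __m128i nl     = _mm_cmpeq_epi8(chunk, _mm_set1_epi8('\n'));
        __m128i any    = _mm_or_si128(_mm_or_si128(commas, quotes), nl);
        return static_cast<uint32_t>(_mm_movemask_epi8(any));
    }

    int main() {
        const char row[17] = "a,\"b,c\",d\ne,f,g ";  // exactly 16 bytes + NUL
        printf("mask = 0x%04x\n", delimiter_mask(row));
    }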


What happens to the parsed data? Do the benchmarks account for the time to access that data after parsing?


Perhaps I'm misunderstanding or don't have a good enough grasp of this, but in what circumstances would you need to parse gigabytes of JSON? I've only seen it used in config files, so...


What usually happens is someone creates an API, one which did not initially have to handle much data, and then it just grew over time. (I guess it's similar to how a lot of the Internet's early application-layer protocols like HTTP, SMTP, etc. are text-based --- the text format was initially more "convenient" for a variety of reasons, but obviously is not very efficient at scale.)

Or, perhaps a more common scenario today, it was designed by people who simply had no knowledge of binary protocols or efficiency at all --- not too long ago I had to deal with an API which returned a binary file, but instead of simply sending the bytes directly, it decided to send a JSON object containing one array, whose elements were strings, and each string was... a hex digit. Instead of sending "Hello world" it would send '{"data":["4","8"," ","6","5"," ","6","C"," " ... '


Log files? More and more places are switching to easily machine-parsable logs to run statistics and checks over, and JSON is a common format (e.g., because it's still somewhat human-readable and works over logging infrastructure set up to transport lines of text).


There are some quite big JSON files out there; you might also be interested in parsing megabytes but not spending more than 1ms to get through it.


If this kind of work is interesting to you, you might like Daniel Lemire's blog (https://lemire.me/blog/).

He's a professor, but his work is highly applied and immediately usable. He manages to find and demonstrate a lot of code where we assume big-O behavior tells the whole story, but the realities of modern processors and caching (etc.) mean very different performance in practice.


Thanks for posting. I've been working with lidar/robotic data more recently and it's nice to work with JSON directly, when the performance is good enough.


> All JSON is JavaScript, but not all JavaScript is JSON

Really? I thought the specifications diverged long enough ago (though using those extras could be discouraged in some cases).


The JSON spec [1] never had any updates, so it couldn't have diverged.

Kudos to Douglas Crockford for keeping it simple. I wish more standards committees would take a cue from him. (Looking at ECMAScript [2] and C++.)

There's been a tremendous amount of growth and value around JSON precisely because it's so simple and easy to implement.

People complain about the lack of comments and trailing commas, but I think those would really be expanding on the initial use case of JSON, and the benefit isn't worth the cost of change. JSON does some things super well, other things marginally well, and some not at all, and that's working as intended.

You can always make something separate to cover those use cases, and that seems to have happened with TOML and so forth.

(I recall there was an RFC that cleaned up ambiguities in Crockford's web page, but it just clarified things. No new features were added. So JSON is still as much of a subset of JavaScript as it ever was. On the other hand, JavaScript itself has grown wildly out of control.)

[1] http://json.org/

[2] https://news.ycombinator.com/item?id=18766361


https://en.wikipedia.org/wiki/JSON#Data_portability_issues :

> Although Douglas Crockford originally asserted that JSON is a strict subset of JavaScript, his specification actually allows valid JSON documents that are invalid JavaScript. Specifically, JSON allows the Unicode line terminators U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR to appear unescaped in quoted strings, while ECMAScript 2018 and older does not.


That bit of incompatibility will be going away when this proposal is implemented, however:

https://github.com/tc39/proposal-json-superset


It is already implemented in the current Firefox, Chrome and Safari 12.


Yeah I remember that quirk, and that's why I said it's "as much of a subset as it ever was". :) Because of this issue, it was technically never a subset.

But almost all real JSON documents are valid JavaScript, unless they happen to contain those characters.

And the salient point is that if JSON never changes, then no further divergence from JavaScript is possible.


But remember that your comment wasn't actually addressing avmich's objection to the assertion "All JSON is JavaScript, but not all JavaScript is JSON".

That assertion is indeed incorrect.

avmich then wrote "I thought they diverged specifications".

That is also correct. JSON was meant to be a perfect subset of JavaScript. Instead, and by accident, it diverged from the relevant specification.

Your comment instead was mostly focused on opposition to changing the existing JSON specification, which is a different topic.


> JSON allows the Unicode line terminators U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR to appear unescaped in quoted strings, while ECMAScript 2018 and older does not.

My code has parsed a lot of JSON, and that is news to me. Thank you for that!

Do you know the historical reasoning for this particular deviation? Are there any infamous bugs or common use cases this departure impacts?


Agree.

This is another useful resource, discussed here already - http://seriot.ch/parsing_json.php - which lists the relevant standards. But "the" standard is static, so any divergence is between other standards (different from json.org) and evolving JavaScript.


> People complain about the lack of comments and trailing commas,

Yeah, I don't think JSON should include those things. I think the lack of comments makes JSON a poor format for config files, but that just means you should use another format for config files. JSON is good for machine-to-machine communication.


Basically it's saying any well-formed JSON is valid JS as well. But JSON doesn't have any programming features (or the nice things like unquoted keys and trailing commas).


This is a dangerous assumption to make, and one that bit us a while ago when using trigger.io for an app.

We had a lot of user-supplied data in the strings of our API responses, some of it copied from Word documents and riddled with U+2028 and U+2029 whitespace. It turns out that on iOS, the trigger.io library makes the all-too-popular assumption that any well-formed JSON can be interpreted as JS, and parses the responses with "eval", thus turning all those Unicode characters _within JSON strings_ into newlines!


What's the current state of the art in doing this on GPU?


To my knowledge, it is limited to posting "Towards JSON Parsing on a GPU" type articles. Writing that sort of article is easy and fun, without the tedious burden of implementing things.


I'm curious how fast the sqlite json extension is for validation and manipulation of json data when compared to this library.


OT, but I notice it can be used by #include-ing the simdjson.cpp file. How common is this in C++ projects?


It seems like there are quite a few single-header C++ libraries: https://github.com/nothings/single_file_libs

The people complaining about dependency management in Python should try doing it in C++; there seem to be half a dozen competing package managers. And three times as many build systems.


Honestly, this is a cool hack. But it's not the best way to shuttle that much data around.

It's a hammer on rocket fuel.


Would it be possible to make a native module out of this for node?


Here are the Node bindings for RapidJSON; I'm assuming it would be similar.

https://github.com/matthewpalmer/node-rapidjson


Thank you!

Though from the readme on that module the dev says "it turns out that you’re better off using the normal Node.js/V8 implementation unless you’re operating on huge JSON.

... the bridging from V8 to C++ is a bit too costly at this stage."


That was two years ago, though; I'm not sure what improvements N-API brings in newer versions of Node.js.


This is faster than the browser's native parsing speed, I assume?


Will this work on an Arduino?


This code in particular won't, since it relies on a particular extension of the x86 instruction set. I don't believe Arduino-compatible chips have SIMD instructions, but if they do, a similar approach could be taken.


I'm not aware of any SIMD-capable Arduino chips; even when Quark was a thing, it didn't support SIMD.

It's possible to do SWAR (SIMD Within A Register) tricks to try to substitute, but on a 32-bit processor (or even a 64-bit processor) I doubt our techniques would look good. In Hyperscan, my regex project, we used SWAR for simple things (character scans) but I doubt that simdjson would work well if you tried to make it into swarjson. :-)
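
For the curious, a minimal SWAR example (my illustration, not Hyperscan's code): the classic "haszero" trick finds a given byte anywhere in a 64-bit word with a handful of ALU operations.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // True iff any byte of x equals c: XOR zeroes out matching bytes, then the
    // subtract/AND dance detects which bytes are zero.
    bool contains_byte(uint64_t x, uint8_t c) {
        uint64_t ones = 0x0101010101010101ULL;
        uint64_t v = x ^ (ones * c);  // broadcast c, zero out matches
        return ((v - ones) & ~v & 0x8080808080808080ULL) != 0;
    }

    int main() {
        uint64_t word;
        std::memcpy(&word, "a,b,c,d,", 8);
        printf("%d\n", contains_byte(word, ','));  // prints 1
    }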


I wonder if it's possible to do something with bitslicing?



