
> This approach would simplify compile-time code generation, but it doesn't help interpreted languages that have no compile step at all.

Well, it does if they have macro systems like Scheme :) Of course, mainstream interpreted languages don't.

> I don't know what this means, but it sounds really interesting, do you have a reference with more info?

There isn't an official writeup that I'm aware of, but I can briefly explain what it is. In a precise GC, you need a way for the collector to traverse all the GC'd objects that a particular object points to. This is a problem in languages like C++ that have little or no reflection. Traditionally the solution has been for everyone to manually implement "trace hooks" or "visitor hooks" for all objects that participate in garbage collection: C++ methods that enumerate all of the objects a given object points to. This is what Gecko (with the cycle collector) and Blink (with Oilpan) do. But it's tedious and error-prone, and especially scary when you consider that errors can lead to hard-to-diagnose use-after-free vulnerabilities.
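
To make that concrete, here is a minimal Rust sketch of the hand-written trace-hook pattern being described. The names (`Trace`, `Tracer`, `Node`) are invented for illustration, not Servo's, Gecko's, or Blink's actual API:

    struct Tracer { edges_seen: usize }

    trait Trace {
        fn trace(&self, tracer: &mut Tracer);
    }

    struct Node {
        left: Option<Box<Node>>,
        right: Option<Box<Node>>,
    }

    impl Trace for Node {
        // Written by hand for every GC'd type; forgetting `right` here
        // would silently hide an edge from the collector.
        fn trace(&self, tracer: &mut Tracer) {
            if let Some(l) = &self.left {
                tracer.edges_seen += 1;
                l.trace(tracer);
            }
            if let Some(r) = &self.right {
                tracer.edges_seen += 1;
                r.trace(tracer);
            }
        }
    }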

We observed that this is a very similar problem to serialization; in serialization you want to call a method on every object that a given object points to (to serialize it), while in tracing you also want to call a method on every object that a given object points to (to trace it). So we decided to reuse the same compiler infrastructure. This has worked pretty well, despite the weirdness of saying `#[deriving(Encodable)]` for GC stuff. (We'll probably want to rename it.)
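
A rough sketch of the idea, using invented names rather than the real libserialize API: the compiler derives one generic "walk my fields" method per type, and a serializer and a GC tracer are just two different visitors plugged into that same generated code:

    // Simplified stand-in for the Encoder trait; not the real API.
    trait Visitor {
        fn visit_u32(&mut self, v: u32);
        fn visit_child<T: Walk>(&mut self, child: &T);
    }

    trait Walk {
        fn walk<V: Visitor>(&self, v: &mut V);
    }

    struct Node {
        value: u32,
        next: Option<Box<Node>>,
    }

    // Roughly what a derive would generate for `Node`.
    impl Walk for Node {
        fn walk<V: Visitor>(&self, v: &mut V) {
            v.visit_u32(self.value);
            if let Some(n) = &self.next {
                v.visit_child(&**n);
            }
        }
    }

    // One visitor serializes...
    struct DebugEncoder { out: String }
    impl Visitor for DebugEncoder {
        fn visit_u32(&mut self, v: u32) {
            self.out.push_str(&v.to_string());
            self.out.push(' ');
        }
        fn visit_child<T: Walk>(&mut self, child: &T) {
            child.walk(self);
        }
    }

    // ...the other marks GC edges, reusing the same generated `walk`.
    struct GcTracer { edges: usize }
    impl Visitor for GcTracer {
        fn visit_u32(&mut self, _v: u32) {} // primitives hold no GC pointers
        fn visit_child<T: Walk>(&mut self, child: &T) {
            self.edges += 1;
            child.walk(self);
        }
    }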




> Well, it does if they have macro systems like Scheme :)

I wouldn't generally count that because generated code in dynamic languages is (in my experience) an order of magnitude slower anyway.

For example, generated protobuf-parsing code in Python is something crazy like 100x slower than "the same" generated code in C++. Python might not be the best example since it's a lot slower than other dynamic languages like JavaScript or Lua (don't know about Scheme). But in general my experience is that generated code in dynamic languages isn't in the same ballpark as generated code in a low-level language like C/C++ (and probably Rust).

> So we decided to reuse the same compiler infrastructure.

Very interesting. What is the function signature of the generated functions? Are you saying that the functions you generate for serialization are the same (and have the same signature) as the functions you generate for GC tracing?


> Very interesting. What is the function signature of the generated functions? Are you saying that the functions you generate for serialization are the same (and have the same signature) as the functions you generate for GC tracing?

Yes, they're the same. They take the actual serializer (JSON, or YAML, or the GC, etc) as a type parameter so that you can just write `#[deriving(Encodable)]` and have it be used with different serializers. Type parameters are expanded away at compile time, so this leads to zero overhead at runtime.
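
For concreteness, the shape being described looks roughly like this (simplified, not the exact trait definitions from the docs; among other things the real Encoder has many more emit_* methods):

    trait Encoder {
        fn emit_u32(&mut self, v: u32);
        // ... one emit_* method per primitive/aggregate kind ...
    }

    trait Encodable {
        // Generic over the encoder: the JSON encoder, the YAML encoder,
        // and the GC tracer each get their own monomorphized copy of the
        // derived `encode`, so there is no dynamic dispatch at runtime.
        fn encode<S: Encoder>(&self, s: &mut S);
    }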


Got it, so it looks like "Encoder" is this trait/interface: http://static.rust-lang.org/doc/master/serialize/trait.Encod...

I think of what you call "Encoder" as a "Visitor" (just in case you're looking for renaming input :)

So the function that you are generating (at compile time) is similar to a template function in C++, templated on the specific serializer (Encoder/Visitor/etc).

One thing that this approach does not currently support (which admittedly is probably not required for most users, and is probably not in line with the overall design of Rust) is the ability to resume. Suppose you are serializing to a socket and the socket is not write-ready. You would need to block until the socket is write-ready (or allocate a buffer to hold the serialized data). This interface doesn't provide a way of suspending the visit/encode and resuming it later.
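
A sketch of the buffering workaround mentioned above, with names invented here: encode the whole value eagerly into an in-memory buffer, then drain that buffer to the socket whenever it reports write-readiness:

    use std::io::{self, Write};

    // Hypothetical helper, not part of any real serialization library.
    struct PendingWrite {
        buf: Vec<u8>,
        written: usize,
    }

    impl PendingWrite {
        fn new(encoded: Vec<u8>) -> Self {
            PendingWrite { buf: encoded, written: 0 }
        }

        // Call when the socket is write-ready; returns Ok(true) once the
        // whole encoded value has been flushed.
        fn pump<W: Write>(&mut self, sink: &mut W) -> io::Result<bool> {
            while self.written < self.buf.len() {
                match sink.write(&self.buf[self.written..]) {
                    Ok(0) => break,
                    Ok(n) => self.written += n,
                    Err(e) if e.kind() == io::ErrorKind::WouldBlock => break,
                    Err(e) => return Err(e),
                }
            }
            Ok(self.written == self.buf.len())
        }
    }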

This also doesn't seem to have a way of identifying the fields -- is this true? Like if you were trying to encode as JSON but wanted to know the name of each field?


`encode_field` has the name in it—was there something else you were thinking of? `#[deriving(Encodable)]` should be able to read attributes on fields and provide the name accordingly.

And yes, you can't resume with this interface. You can implement traits for types outside the module they were defined in though, so a "resumable serialization" library could provide a new "ResumableEncoder" type if it wanted to.


The Encoder trait I linked to doesn't seem to list an "encode_field" function -- am I looking in the wrong place?


emit_struct_field, sorry.
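
To show how the name flows through, a simplified sketch (the real method's signature differs in details): the derived code hands each field's name and index to the encoder, so a JSON encoder can emit keys while a GC tracer can simply ignore them:

    trait Encoder {
        fn emit_u32(&mut self, v: u32);
        fn emit_struct_field(&mut self, name: &str, idx: usize);
    }

    // An encoder that only cares about field names.
    struct FieldLister { fields: Vec<String> }

    impl Encoder for FieldLister {
        fn emit_u32(&mut self, _v: u32) {}
        fn emit_struct_field(&mut self, name: &str, _idx: usize) {
            self.fields.push(name.to_string());
        }
    }

    struct Point { x: u32, y: u32 }

    impl Point {
        // Roughly what the derived code does: announce the field's name,
        // then encode the field's value.
        fn encode<S: Encoder>(&self, s: &mut S) {
            s.emit_struct_field("x", 0);
            s.emit_u32(self.x);
            s.emit_struct_field("y", 1);
            s.emit_u32(self.y);
        }
    }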



