WebAssembly Troubles Part 1: WebAssembly Is Not a Stack Machine (troubles.md)
175 points by panic on Feb 4, 2019 | 59 comments



[one of the original Wasm designers here]

Responding to the OP, since there is no comment section on the site.

First off, this rant gets the history of Wasm wrong and the facts of Wasm wrong. I generally wouldn't unload on a random person on the internet, but I would like to point out a sentence like:

> Not only that, but for the most part the WebAssembly specification team were flying blind.

It's an ad hominem. This really just impugns people and invites an argument. It might be cathartic, but generally it doesn't advance the conversation to cast aspersions like this.

And it's not true. I can tell you from first hand experience that a baseline compiler was absolutely on our minds, and Mozilla already had a baseline compiler in development throughout design. The Liftoff design that V8 shipped didn't look too different from the picture in our collective heads at the time. And all of us had considerable experience with JIT designs of all kinds.

As for the history. The history is wrong. The first iteration of Wasm was in fact a pre-order encoded AST. No stack. The second iteration was a post-order encoded AST, which, we found through microbenchmarks, actually decoded considerably faster. The rub was how to support multi-value returns of function calls, since multi-value local constructs can be flattened by a producer. We considered a number of alternatives that preserved the AST-like structure before settling on a structured stack machine as the best design solution, since it allowed the straightforward extension to multi-values that is there now (and will ship by default when we reach the two-engine implementation status).

As for the present. Wasm blocks and loops absolutely can take parameters; it's part of the multi-value extension which V8 implemented already a year ago. Block and loop parameters subsume SSA form and make locals wholly unnecessary (if that's your thing). Locals make no difference to an optimizing compiler like TurboFan or IonMonkey. And SSA form as an intermediate representation is not as compact as the stack machine with block and loop parameters which is the current design, as those extra moves take space and add an additional verification burden.
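
For readers less familiar with the text format, here is a minimal sketch of a block parameter under the multi-value proposal (the function and label names are purely illustrative): the block consumes a value from the operand stack on entry and yields one on exit, so no local is needed to thread the value through, and a branch out of the block simply leaves the result on the stack.

    (func $inc_unless (param $x i32) (param $c i32) (result i32)
      local.get $x                       ;; push x onto the operand stack
      block $b (param i32) (result i32)  ;; the block consumes x and yields one i32
        local.get $c
        i32.eqz
        br_if $b                         ;; if c == 0, exit early; x is already the result
        i32.const 1
        i32.add                          ;; otherwise the result is x + 1
      end)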

A final point. Calling Wasm "not a stack machine" is just a misunderstanding. All operators that work on values operate on the implicit operand stack. This is the very definition of a stack machine. The fact that there is additional mutable local storage doesn't make it not a stack machine. Similarly, the JVM has mutable typed locals and yet is a stack machine as well. The JVM (prior to 6) allowed completely unstructured control flow and use of the stack, leading to a number of problems, including a potentially cubic verification time. We fixed that.

All that said, there might be a design mistake in Wasm bytecode. Personally, I think we should have implicitly loaded arguments to a function onto the operand stack, which would have made inlining even more like syntactic substitution and further shortened the bodies of very tiny functions. But this is a small thing and we didn't think about it at the time.
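
A rough sketch of the difference (the second form is hypothetical and not valid Wasm; it is just the alternative described above):

    ;; today: parameters are locals and must be fetched explicitly
    (func $add3 (param i32 i32 i32) (result i32)
      local.get 0
      local.get 1
      i32.add
      local.get 2
      i32.add)

    ;; hypothetical variant: the arguments arrive already on the operand stack,
    ;; so inlining a call is just splicing these two instructions in place of the `call`
    (func $add3 (param i32 i32 i32) (result i32)
      i32.add
      i32.add)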

[edit: Perhaps "ad hominem" is a bit strong. It feels different to be on the receiving end of a comment like "flying blind"--it doesn't mean the same thing to the sender and receiver--especially when this was really not the case, as I state here.]


> It's an ad hominem. This really just impugns people and invites an argument. It might be cathartic, but generally it doesn't advance the conversation to cast aspersion like this.

Ignoring any factual incorrectness, I cannot see how the author could have made his point in a more respectful way. He clearly has great enthusiasm for WASM and respect for its authors; I am struggling to see how anyone could have interpreted it as cathartic...

The paragraph in which your excerpt originated makes this pretty clear:

> The developers of the WebAssembly spec aren’t dumb. For the most part it’s an extremely well-designed specification [...] I considered WebAssembly’s design to be utterly rock-solid, and in fact I still strongly believe that most of the decisions made were the right ones. Although it has problems, it’s incredible how much the WebAssembly working group got right considering it was such relatively unknown territory at the time of the specification’s writing.


I think this is a really great example of how even a simple phrase can detract from a whole argument[1]. While reading the paragraph in full, which I included below[2], makes it clear that the blog author does respect the WebAssembly team, it is important to remember to be very careful with words when being critical of work. We all inject a bit of ourselves into our work, so criticism is often taken very personally. So err on the side of grace and assume the creator knows as much as you do, if not more.

[1]: I understand that the comment author has other concerns besides the phrasing, but I'm only focusing on the phrasing right now.

[2]: The developers of the WebAssembly spec aren’t dumb. For the most part, it’s an extremely well-designed specification. However, they are weighed down by WebAssembly’s legacy. WebAssembly started out not as a bytecode, but more like a simplified binary representation for asm.js. Essentially it was originally designed to be source code, like JavaScript. It would be a more-efficient representation thereof but it still wasn’t a proper virtual machine instruction set. Then, it became a register machine, and only at the last minute did it switch to stack-based encoding for the operators. At that point, concepts like locals were quite entrenched in the spec. Not only that, but for the most part the WebAssembly specification team were flying blind. No streaming compiler had yet been built, hell, no compiler had yet been built. It wasn’t clear that having locals would be problematic - after all, C gets by just fine using local variables that the compiler constructs the SSA graph for.


In an engineering discipline, asserting that someone is 'flying blind' could very, very easily be taken as offensive. Knowing what's going on and why is so fundamental to 'good engineering' practice that you are basically calling the people ethical failures. 'Impugn' is a perfectly reasonable word for how someone might react to such an aspersion.

Maybe in the future don't accuse engineers of 'flying blind' if you aren't inviting return fire.

From context there was a lot of conjecture going on, but the big challenge with building something new is what order to build the bits in to give you the most useful information fastest. As the number of people goes above 2 the odds that everyone agrees or that 'everyone' is right drop rapidly toward zero. You do the best you can, and hope it's good enough that you still have time to react to the worst of the decisions you made earlier. But it's not 'flying blind'.


Wow, not the kind of engineering room I'd want to be in. You have to be able to make claims that the other party does not have a complete picture of the situation, and an external critic is indeed going to be vulnerable to the same criticism.

Maybe you would have a point if it were a Linus-style "only a fucking idiot would" rant. But responding to a sincere attempt to defend a design decision as if it were an insult is some prima donna behavior.


> It's an ad hominem. This really just impugns people and invites an argument. It might be cathartic, but generally it doesn't advance the conversation to cast aspersion like this.

It is not an ad hominem in any sense! For one thing, this part isn't even an attack - here the author is trying to explain and essentially forgive why the (allegedly) sub-optimal design was chosen: that there wasn't enough information at the time to make a fully informed decision! He's saying "it wasn't their fault they designed it like so, at the time it probably seemed like the best decision".

Second, even if it were some form of attack - it wouldn't be an ad hominem because it is not an _irrelevant_ personal attack. It is directly relevant whether some group made decisions based on sufficient existing information etc. The author might be totally wrong about the facts, but at least he believes and offers evidence regarding the situation at the outset of development.

It hurts to have your work criticized, and I can't comment on the factual accuracy of the timeline and other claims, but the piece does not come off at all badly-intentioned, personal or otherwise unreasonable: it comes off mostly as purely technical criticism.


That's not technical criticism. You can't know whether the devs were 'flying blind', and given how little the author could actually know, they were throwing deprecating words around. It amounts to saying they didn't know what they were doing. Regardless, it's still ad hominem, and unnecessary if the author could show it instead of saying it.


No: an ad hominem is a specific thing, and this is not an example of one. For something to be an ad hominem, it has to make an assertion that—if true—would still be completely irrelevant to the truth-value of the syllogism, and yet will serve to convince the reader anyway, usually by the rhetorical power of the https://en.m.wikipedia.org/wiki/Halo_effect causing them to think that something that has irrelevant faults must also have (possibly unknown) relevant faults.

An example of an ad hominem: saying that someone must have cheated on their taxes, because they have some depraved sexual kink. Whether or not the assertion (“X is a fetishist”) is true, it is obviously irrelevant to ascertaining the truth of whether X cheats on their taxes.

An example of a not-ad-hominem: saying that someone is more likely to have cheated on their taxes, because they are an old rich white man. This might be stereotyping (i.e. inductive reasoning), it might be a “personal attack”, and it might be disallowed in a debate for any number of other reasons, but it’s not an ad hominem: being in the relevant class really does have some correlation (however small) with cheating on one’s taxes (mostly because all groups other than the relevant group consist of people with less access to the resources that would allow them to get away with cheating on their taxes.) Therefore, the truth-value of the assertion is not entirely irrelevant to the syllogism—so it’s not an ad hominem.


> That's not technical criticism.

To be clear that particular phrase wasn't direct technical criticism - but it was embedded in an article that was largely technical and it was in direct support of the technical arguments (essentially "it ended up like that because no compilers, streaming or not, existed yet").

I think you should look carefully at the definition of ad hominem. You say that the author couldn't have known what he was asserting. Maybe, sure! That doesn't relate to it being an ad hominem though (it would make it simply false). You say the words are deprecating. I don't really agree, but even if they were, that doesn't by itself make it an ad hominem.

Ad hominem needs all three factors: _personal_, _irrelevant_ and _an attack_.

I think you can make very good arguments that it was not an attack and certainly that it was not irrelevant. I would even argue it's not personal, since it is not about a personal characteristic of any person, but simply an observation about what point in time an event took place. Like if I said "you were FLYING BLIND because you had to decide whether to take your umbrella before knowing the weather at your destination", it is not even a personal thing: just observing that you had to decide before you had all the information.


>>Not only that, but for the most part the WebAssembly specification team were flying blind.

>It's an ad hominem.

I didn't read that as a criticism at all. He was just saying that the Wasm team didn't have all the information that they would ideally have wanted to have. No idea if that's true or not, but I think you're misreading it if you take it as some kind of ad hominem attack.


I agree that line should be read as "they were doing something novel" as opposed to "they were ignorant".


"Flying blind" is one of those odd expressions that is historically used to mean something somewhat less negative than a literal reading would suggest, which makes it easy for different parties to interpret it differently.

It's generally used to indicate operating without information that's really required, but historically it's used when that information is missing because someone else should have provided it and didn't do so, leaving those doing the work without information they need. Without that context it sounds like someone is choosing to make a poor choice and work without knowledge they should have. The responsibility for the problem in those interpretations falls on different people, which can make the phrase tricky to use without ruffling some feathers, as it seems to have done here.


To resurrect a dead metaphor, from an aviation point of view, "flying blind" (VFR into IMC such as flying into a cloud when not flying on instruments) is a very dangerous situation that everyone is trained to avoid. If the weather is bad enough then you stay on the ground.


I agree, it's not a situation a sane person chooses to be in when given an alternative, but sometimes it's unavoidable, and I think that implication is part of the common context it's supposed to carry (but is easily lost when reading it literally).


It's perfectly clear what was meant if you read the surrounding context.


> Wasm blocks and loops absolutely can take parameters; it's part of the multi-value extension which V8 implemented already a year ago.

If that's an extension, and it's only implemented in V8 but not in some of the other main WebAssembly platforms, then I guess it's fair to call it out as not being part of WebAssembly.


It's at phase 3: https://github.com/WebAssembly/proposals

Phase 4 requires a second implementation, and then we will ship it.


> It feels different to be on the receiving of a comment like "flying blind"

The "flying blind" idiom just means that you were operating on intuitions developed from experience, but without much guidance from directly relevant factors (post mentions a working compiler with which you could run experiments to see what would and would not work).

I don't see how it could be interpreted as a dig in this context.


Author here:

I'm sorry that you felt attacked by that line, I really tried my hardest to phrase it in a way that didn't assign any blame. I wasn't trying to imply that the team wasn't thinking about these issues, just that real-world implementations of this kind of VM didn't exist yet and so many of the practical issues were difficult to see in advance.

Many of your other issues I directly address in the article itself, for example that optimising compilers can recalculate the information lost when using locals (tl;dr: why recalculate this information when you could include it in the format) and that Wasm started as an AST machine. The JVM works similarly to Wasm, true, but it is generally considered a hybrid stack/register machine. I'd define Wasm as a similar hybrid.

As for the multi-value extension, although that improves codegen for streaming compilers it doesn't reduce complexity unless locals are also deprecated. Something that I don't talk about in the article but that seems like it may be a problem going forward is that Wasm seems to have no mechanism in the format for major version bumps/breaking changes. Unnecessary things like locals and structured control flow (see the second article in the series) cannot be removed even when they are subsumed by more-general features.


> > Not only that, but for the most part the WebAssembly specification team were flying blind.

> It's an ad hominem.

Er, no, it's not. It's not an attack on the designers as bad people, nor does it serve the role in an argument it would need to serve to be part of an ad hominem argument even if it was.

It may be inaccurate, misleading, deceptive, uninformed, or a million other kinds of wrong, but it's not ad hominem.


> As for the present. Wasm blocks and loops absolutely can take parameters; it's part of the multi-value extension which V8 implemented already a year ago

Agree with all you wrote. With my own implementation, I'm hesitant to include these extensions until they are "standardized" (I have a one-man toy project; bleeding edge is unreasonable). I understand, and have watched, that going through the phases is quite a slow process (not that that's a bad thing). I think, to avoid fragmentation, it is reasonable for someone targeting WASM at this point to assume the multi-value extension is not available. I know there are proposals about runtime capabilities and the like, but with so few of these extensions reaching full spec adoption yet, I don't think they're viable features to leverage even if implemented in the most popular runtimes.


Does the meaning of the statement change if you interpret “the WebAssembly specification team” as referring not to the formal group that got together once it was decided that WebAssembly was “a thing”, but rather to the group of people designing the binary representation of asm.js back before WebAssembly existed? I would think that they were indeed “flying blind”, in the sense that the only “compiler” they were working against was the JS interpreter, and in the sense that they weren’t yet conceiving of there being a formal abstract machine separate from the JS interpreter (this formal abstract machine being what “WebAssembly” essentially refers to) but rather just compiling to “a subset of JS” in a way that seems to make the code run efficiently.

I.e. just change the one word “WebAssembly” to “asm.js” in the sentence you quoted. Consider it a typo. Does the history now read correctly?


It's unfortunate that you felt attacked, because I didn't read the article that way at all. I'm neither a fan of JS nor particularly WASM, and I read the authors article as quite supportive and appreciative, and pointing out a potential area for improvement / documenting experiences -- and I'd love your thoughts on it!

To me, it broke down as:

(1) SSA form would allow for substantially simpler compilers. Adding block/loop parameters while maintaining support for locals as an extension doesn't address that, as compilers would then still have to implement full support for both modes of operation.

(2) The magnitude of the performance impact of this design decision.

I have some limited experience in compiler design but not enough to really have an understanding of the implications.


I would love to work on Wasm (compiler or tooling). Do you know a way to get such a job?


The only companies that I know (outside of the big players) hiring people to work on WebAssembly are Perlin Network and Parity Technologies, both blockchain companies. I work for Parity on Wasm stuff.


Thanks for the pointer!


This is slightly OT, but would you be able to give an estimate of when tail calls will be supported? https://github.com/WebAssembly/proposals says it is at Stage 3 but that doesn't really tell me how much longer it will take until I can use them.


> But this is a small thing and we didn't think about it at the time.

Inlining is likely important for performance. Any chance this will be corrected?


Inlining happens in optimizing compilers, on optimizing compiler IR, so it doesn't really make much difference here, IMO. Not sure how we could correct it in a backwards-compatible fashion.


Inlining is pretty heavy, it's probably best done as an offline optimization rather than in the endpoint.


Recomputing liveness is not really a big deal. Can be quite cheap, especially over a register based IR.

I think that this article overstates the impact of all of this.


Yes. The article is obsessed with the code quality generated by streaming compilers, which is probably the wrong thing to focus on. A real high-performance backend has no trouble reconstructing SSA form and using it for optimizations. But forcing frontends to emit SSA would be a burden on them. (LLVM bitcode formally requires SSA form as well, but this can be worked around by using allocas.)

It might, however, make sense to have another standard "SSA WebAssembly" program representation. There could then be standard tooling to compile vanilla WebAssembly to the SSA form, frontends could choose which variant they want to emit, and backends preferring SSA as input could still be made happy.


Author here:

I'm obsessed with the quality of streaming compiler-emitted code for a few reasons. Firstly, I'm working on an optimising streaming compiler. Secondly, I work for a blockchain company and we can realistically only allow linear-time compilation; this doesn't necessarily mean streaming compilation, but we might as well make it both (I explain why we need linear-time compilation in a different article http://troubles.md/posts/why-wasm/). Thirdly, anything that gives streaming compilers more information also means that non-streaming compilers have to reconstruct less information. And lastly, in this particular case there is no reason (except for backwards-compatibility constraints) why we can't preserve more of the information from the front-end and have streaming compilers emit better code.


Years ago, I thought a single-tier design of an (offline) compiler was best (OVM). And then I thought a heavy offline compiler and two tiers of the same JIT was best (Jikes). Then I thought an interpreter with an optimizing JIT was best (HotSpot server). Then I thought a slightly less optimizing JIT was best (HotSpot client). Then I thought that two JITs were best (V8 w/ Crankshaft). Then two JITs with a super heavy optimizing JIT was best (V8 w/ TurboFan). Then I thought a single tier for Wasm was best (V8 w/ TurboFan). Then I thought an interpreter and a heavy optimizing JIT was best (V8 w/ Ignition and TurboFan). Then I thought two JITs for Wasm was best (V8 w/ Liftoff and TurboFan).

I've seen a single baseline compiler go through a metamorphosis from essentially streaming (HotSpot client V1) to full SSA-based with register allocation (HotSpot client today).

In other words, prepare for change. A single tier is probably not going to be your final design.


A streaming compiler can emit really great code even without liveness. It’s not clear to me what optimizations you’re hoping to get from this. To do most SSA optimizations you need a backend that can lower from SSA, which is not linear afaik. Register allocation might be helped a bit by liveness, but you can get block-local liveness information in linear time already - so for your thing to be better you’d have to prove that there is something sweet about having a non-SSA compiler that does register allocation using imperfect liveness information, which was provided by an adversary. Then you’d have to prove that this is ok - that an adversary can’t force you to do more work than you want by lying about liveness. It’s probably not ok; for worst case perf you’re almost certainly better off not trusting provided liveness info and reconstructing it yourself on a block-local basis.

Anyway. I could tell you a lot more about how to design compilers but I have to take my kid to school.


A statically-typed stack machine like Wasm is homomorphic to SSA form with liveness, and it's impossible to lie about liveness in this format. Most of the complexity in the streaming compiler that I'm working on is around producing good code for locals when we have no liveness information for them. I explain why this is in the article.


It’s not guaranteed that using the liveness implicit in the SSA that falls out of a stack language is going to give you better code in less time than a block-local register allocation with locals live at block boundaries spilled to the stack.


Indeed, and for good spilling decisions you'll want to have next-use distance information for values. While a stack machine gives you an approximation of this (deeper in the stack is farther in the future), for best results I imagine you'll want to do two passes anyway, so locals are no worse for this, other than at block boundaries, where, if you lack liveness, you have to spill them.


SSA is a really strange form to send over a wire. It’s got poor space efficiency. It’s also annoying to interpret and not super cheap to turn into machine code.

So, I don’t see the point of sending SSA over the wire.


Author here:

I'm not advocating for an SSA register machine like LLVM, I'm just advocating for a format that makes it trivial to reconstruct SSA form on-the-fly. A pure stack machine with a statically-determinable stack depth and type at any given place in the program would give you the same information as SSA form in a more-compact way.
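
Roughly speaking, each value pushed onto the operand stack corresponds to one SSA definition and is consumed exactly once, so its live range is implicit in the code itself. A small sketch of the correspondence (the SSA names in the comments are only illustrative):

    local.get $x    ;; v1 = x
    i32.const 1     ;; v2 = 1
    i32.add         ;; v3 = add v1, v2   (v1 and v2 are consumed here, i.e. dead)
    local.get $x    ;; v4 = x
    i32.mul         ;; v5 = mul v3, v4   (computes x * (x + 1))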


I keep forgetting that some people actually like the reducible control flow constraint. To me, if you have that constraint then it’s not really ssa. You literally can’t represent everything that llvm IR or B3 IR could. I think you should make sure you add that caveat when making equivalence claims.

That said, the key reason for my push back is the suitability of SSA for fast backends. If you can afford to run some coalescing then SSA is at least perf-neutral. If you can’t then compiling from SSA will result in crappy code. As in, probably worse than block local RA.

So your best bet is to somehow avoid having to coalesce. But that probably means using SSA only for extracting liveness and then running the world’s dumbest linear scan. Even that may not be as good as block local RA.


Agree. SSA requires deconstruction, which would slow down a streaming compiler.


As someone in the midst of building a game in C for 7 platforms, with WebAssembly being one of them, my main disappointment is with the lack of coroutines (or lack of control over the stack to implement them.) It hinders how wide my engine can go since I'm limited to a fork-and-join model for splitting work across threads. Poor code generation is also another pain point, but I fully expect that to improve drastically over the coming year.

Overall, I'm pretty excited for WASM and the implications of it, but it does feel like the web has regressed in the ability to deliver games.


Most common coroutine implementations, such as JavaScript's and Python's, are delimited or "symmetric". This means the most obvious implementation is in terms of compiler transformations in the source language. It seems out of WASM's scope to do this.

Undelimited "asymmetric" coroutines, like Lua's, could be an interesting addition. That still seems to me to be too high level a feature for a "portable assembly language" specification though.


I think you might be conflating characteristics regarding delimited and symmetric coroutines. But let's step back.

JavaScript's and Python's choice of coroutine styles was constrained and effectively dictated by runtime limitations. CPython, V8, and similar implementations mix their C and assembly callstacks with their logical language callstacks. Because the host runtimes didn't readily support multiple stacks without a complete rewrite, this bled into the language runtime. There was a path dependency whereby early implementation choices directed the evolution of the language semantics.

WASM is recapitulating the same cycle. Which is understandable because time is limited and you can't make the perfect the enemy of the good, but you still have to recognize it for what it is--a vicious cycle of short sightedness. If WASM doesn't provide multiple stacks as a primitive resource, then things like stackful coroutines, fibers, etc, will have to be emulated (at incredible cost, given WASM's other constraints regarding control flow). And if they have to be emulated they'll be slow, which means languages will continue avoiding them.


I agree that CPython and V8 omitting the ability to juggle multiple stacks is a mistake. For higher-level languages, undelimited coroutines or continuations allow for very useful abstractions like Go's and Erlang's transparently non-blocking IO.

However, it doesn't seem to be _entirely_ an implementation detail. Some developers just don't seem to like the semantics of called functions being able to cooperatively yield without the caller explicitly opting into it with a keyword like `await`. I disagree with them, but it's a legitimate complaint I've heard a few times.

It reminds me of arguments in the Lisp community about delimited continuations and undelimited, i.e. Common Lisp and Scheme. A lot of the arguments there are really about semantics and not implementation details, and come to the same point: should cooperative scheduling require explicit notation at each level of the call stack?

My view on this is that systems languages like C and Rust should require explicit notation for it whereas application languages should not. This seems to be a point in favour of Go and Erlang over Java and C#.

However WASM, similar to C or Rust, seems to target a level in the tech stack at which it should concern itself only with abstractions that have relatively direct translations to the instruction set of the underlying hardware. Support for multiple stacks doesn't fit into this from what I can see. (A similar argument can be made for WASM not supporting garbage collection too, although it looks like that'll be added at some point to make interoperability with JS smoother.)

With the JVM supposedly adding fibres soon, it poses a question for WASM: is it trying to be a portable assembly language, a portable high-level language runtime, or something in between?


The inability to implement coroutines is addressed in the second article in the series here http://troubles.md/posts/why-do-we-need-the-relooper-algorit...


The author states:

>"This means that you have overhead associated with compilation - knowing the liveness of variables is extremely important for generating efficient assembly, but instead of the liveness being calculated when creating the IR and stored as a part of it you have to recalculate this data every time."

Can someone say what is involved in calculating "liveliness"? What is the procedure for doing so?


You iterate over the program (every individual function, really) backwards. A use of a variable means that it is "live" before that point; a definition of (i.e., a write into) a variable means that it is "dead" before that point. That is, at any point, a variable being "live" means that its value at that point may be used in the future. Liveness is especially important for register allocation: if two variables are both live at some program point (and cannot be proved to have the same value), the compiler must place them in different registers or stack slots.
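
A tiny worked example over Wasm locals (the function is made up purely for illustration), reading the body bottom-up:

    (func $f (param $a i32) (param $b i32) (result i32)
      local.get $a    ;; use of a
      i32.const 1
      i32.add
      local.set $b    ;; def of b: b's incoming value is dead here (it is never read)
      local.get $a    ;; last use of a: a is live from function entry down to this point
      local.get $b    ;; use of b: b is live only from its def above to this point
      i32.mul)        ;; computes a * (a + 1); nothing is live after the final instruction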

As an aside, liveness is also useful for some other things. For example, a variable that is live at the start of a function is one that may be used without being initialized, and the compiler can emit a warning for it.

https://en.wikipedia.org/wiki/Live_variable_analysis

Edit: BTW, it really is "liveness", not "liveliness".


One thing I dislike about how criticisms of WebAssembly are formulated is that they often refer to the MVP as the final product. Tail calls are important but not essential; many functional languages have a C runtime. The point is whether they (or an equivalent alternative) can be added properly, or whether the standard is not flexible enough.


The issue is that if the VM doesn't support tail calls or alternatives like unstructured goto then you're going to end up with two layers of emulation for such languages, rather than one. That's incredibly slow. Which is all fine and dandy, but people need to realize that WASM will not be nearly as performant as claimed, particularly when hosting other language runtimes.


Me too; it is as if people have stopped learning how compilers are implemented.


In Part 1, the author claims that WebAssembly is not a stack machine.

In Part 2, when discussing `goto`, he says, "WebAssembly is a stack machine."

Part 2 contains no explanation about the contradiction with Part 1.


Huh? WebAssembly is a stack machine, and locals do not pose a problem whatsoever.

Yes, it just means more work for the compiler author. But it's perfectly doable, although it adds complexity. Like everything in the world of compilers. There are no free lunches.


Yes, compilers are hard, but why make them harder? There is literally no reason to require rebuilding this information. The compiler emitting Wasm has that information and already uses it; having locals and disallowing blocks from taking/returning values actually means more complexity in both the compilers generating Wasm and the runtimes generating native code from Wasm. That's the entire premise of the article.


Even if carrying that information was the thing that you needed (I don’t think it is but there’s a separate thread about that), it’s definitely not the thing that other implementations need.


"For the most part it’s an extremely well-designed specification. However, they are weighed down by WebAssembly’s legacy"

Really?

Wikipedia: "In March 2017, the design of the minimum viable product was declared to be finished and the preview phase ended." https://en.m.wikipedia.org/wiki/WebAssembly

It was only announced in 2015.

I'm not taking sides here, but either it's well designed, or it's getting weighed down by legacy after only 2 (or 4) years.


Immediately following that 'legacy' sentence is an explanation of what they mean by it:

> WebAssembly started out not as a bytecode, but more like a simplified binary representation for asm.js. Essentially it was originally designed to be source code, like JavaScript.

WebAssembly itself is relatively new, but it wasn't a completely blank sheet of paper that they were starting with when they designed it.


Yes and the next sentence finishes:

"and only at the last minute did it switch to stack-based encoding for the operators"

Which kind of counts against it being well designed.

To me, "weighed down" by legacy suggests some deep problem that shouldn't be manifesting in something so young. You could argue that 2 years is a long time in tech, I wouldn't say it's a long time in language development though.

Maybe I'm just arguing semantics here? Is a library for a particular language weighed down by legacy because it's designed to run on one particular language?



