Migrating a JavaScript Library from JavaScript to WebAssembly (q42.nl)
210 points by jasperk on Feb 10, 2021 | 66 comments



So this is great but it might be worth pointing out that the library went from Canvas2D (slow) and ThreeJS (very general purpose) to pure WebGL calls tailored to the application, which alone probably would have been the significant driver behind the performance improvements. It’s hard to see exactly how much WASM has helped on top of that, and I wonder if a pure JS+WebGL version would perform about as well at a fraction of the file size, complexity and parse time.

Usually I wouldn’t recommend going WASM unless you have very hot CPU code paths, like physics across thousands of 3D bodies and collisions that is eating up your frame time. In the case of an image viewer I’m not sure that code is ‘hot’ enough in CPU terms to really warrant all this.

(I’d love to be proved wrong with benchmarks and find more general uses of wasm tho !)

Just my 2c. Really great write up and work nonetheless!


Thanks for the comment! I agree that the change to a custom tailored engine probably made the biggest difference performance wise. At the end of the article I also briefly mention this.

However, having a monolithic compiled Wasm module which contains all of (and only) the rendering logic is really nice on a codebase-level.

Also, the Wasm part of Micrio is being used for a lot of realtime matrix4 calculations. While I didn't use this specific library, here is a bench page which shows that Wasm really kills JS performance here: https://maierfelix.github.io/glmw/mat4/ .
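
For a sense of what that per-frame work looks like, here is a minimal plain-JS 4x4 multiply over typed arrays (illustrative only, not Micrio's or glmw's actual code); it's this kind of tight, allocation-free numeric loop where Wasm tends to pull ahead:

  // Multiply two column-major 4x4 matrices (a * b) into out.
  function mat4Multiply(out, a, b) {
    for (let col = 0; col < 4; col++) {
      for (let row = 0; row < 4; row++) {
        let sum = 0;
        for (let k = 0; k < 4; k++) {
          sum += a[k * 4 + row] * b[col * 4 + k];
        }
        out[col * 4 + row] = sum;
      }
    }
    return out;
  }

  // view and model would be Float64Array(16) matrices updated each frame (hypothetical names).
  const mvp = mat4Multiply(new Float64Array(16), view, model);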

So it definitely makes a difference. If I had the time I would port the Wasm engine back to JS and do more benchmarking. But that's a big if :)


I hadn't seen glmw, thanks for sharing.

I can see how WASM definitely improves performance in those benchmarks (where we're talking about thousands of matrix operations). But I imagine your per-frame camera & matrix math is probably not taking up much time (within a 16ms budget), so a lot of these WASM optimizations may not have any real influence on the rendering performance. But they could introduce more upfront parsing/loading time for your library (i.e. base64 decoding, WASM instantiation), and obviously bring with them a lot more complexity (in authoring the module and supporting legacy devices).

Anyways, it's pretty cool that you can call GL functions directly from WASM; I hadn't realized that before, and it probably would make for an epic game engine to have almost everything called from WASM land.


Regarding the "emscripten-imposed file size problem", I've written a couple of alternative cross-platform headers which might help in this case (e.g. avoiding SDL). These enable WASM apps in the "dozens of KBytes" range, and you get native-platform support "for free":

https://github.com/floooh/sokol

Check out these samples and have a look at the sizes in the browser devtools, which are IMHO quite sensible:

https://floooh.github.io/sokol-html5/

https://floooh.github.io/tiny8bit/

PS: really great writeup btw :)


The Binaryen [1] toolkit comes with a wasm2js tool; you could translate the wasm back to JS and see how performance compares ;)

It's possible that performance isn't all that different, because asm.js-style Javascript can be surprisingly fast (compared to "idiomatic" human-written Javascript).

Otherwise it's a completely pointless exercise of course, unless you need to support browsers without WASM support (which don't exist anymore AFAIK).

[1] https://github.com/WebAssembly/binaryen


It's unnecessary with AssemblyScript, since AS is just a subset of TypeScript and can transpile to ordinary idiomatic JavaScript. Also, AS already uses Binaryen under the hood and can produce asm.js via the --jsFile, -j command-line option.


You could (fairly) easily build a reduced test case to verify this. I'd be interested in seeing the results.


Impressive work, and a very nice speedup.

Just a PSA: if you're including a .wasm binary in your app, especially if it is large, be sure to request it separately as a .wasm (with MIME type "application/wasm"), as your browser will cache the compiled code along with the original bytes in its HTTP cache, so you'll get lightning-fast startup.


Pair that with ‘instantiateStreaming’ and browsers will even start interpreting and JITing the wasm as it downloads.
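
For reference, the streaming path looks roughly like this; note that instantiateStreaming requires the server to send the application/wasm MIME type mentioned above (the file path, import object and export name below are hypothetical):

  // Starts compiling while the bytes are still downloading; rejects if the
  // response is not served with Content-Type: application/wasm.
  WebAssembly.instantiateStreaming(fetch('/micrio.wasm'), { env: {} })
    .then(({ instance }) => instance.exports.run()); // import object must match the module's imports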


Nice article. Lots of useful ideas.

It would be nice to know how much WebAssembly actually optimized here vs just writing WebGL in JavaScript. There are too many changes here to know if the speed-ups and size gains are related to wasm, or just to dumping three.js and switching all the previous Canvas2D rendering to WebGL rendering.

In fact (maybe I missed it), if they're using AssemblyScript is it possible to just run it through TypeScript and check?


Thank you!

Someone else in this post asked the same question and I answered briefly there. Indeed the engine rewrite itself from general purpose to specific usage probably made the biggest difference performance-wise.

However, I do use matrix4 calculations for the 360 viewer, for which Wasm definitely is better.

Interesting idea to do a followup benchmark using the AssemblyScript as TS! I might actually have to do that :D


I apologise for making a meta point, but this is a fantastic example of how a story can take a while to "click" on HN. This is its sixth submission in 4 months (none by me, before you ask) and the first to actually get anywhere :-) There is so much gold flowing through HN /newest and I strongly encourage more people to check it out if possible.


Wow, this is so awesome. I would be highly interested in how much the energy consumption drops by using WebAssembly. If you consider that 5 billion people use smartphones and browse the web, even just 1% less CPU time on average would have a significant impact on worldwide energy consumption...


Thank you very much!

Really interesting question. While I don't have power-specific benchmarks, the general rule of thumb for power consumption is indeed the amount of CPU + GPU usage. It's an eternal battle to stay feature-rich and keep texture downloads smooth, but use as little power as possible. And of course, don't draw any unnecessary frames.

What I managed to do with the entire Wasm rewrite is definitely get the number of CPU cycles down by a lot, and shorten the pathways between raw CPU and GPU operations.

Since this article I've already been able to do a lot more optimization, because the resulting new code architecture is so much clearer than before. Funny how more minimal structures allow for better optimizing.


Seconded: I had a blast reading this article. And I'd also be very curious for stats about power profile instead of raw speed (I'd expect them to be more or less correlated, but that might be a flawed assumption)


This is a very nice and thorough write-up. Frankly, I would've dropped IE support altogether, that browser has been dead a long long time. No need to futz with gzip and base64, just fetch the wasm, engines optimize for the common path.


IE11 usage will probably taper off quite dramatically this fall, after Microsoft drops support for it in their 365 product suite...

https://techcommunity.microsoft.com/t5/microsoft-365-blog/mi...


I think the bigger sandbag is the single-file requirement. I understand why, for this type of project and the way it is often deployed by its users, that seems like a necessity, but at the same time we're just about at the point where you can assume that ES2015+ browsers also support HTTP/2+, so individual file requests are no longer quite the same bottleneck they were in HTTP/1.x.

In which case you could get away with a "loader" possibly as simple as one <script type="module" src="path/to/modern-lib"></script> and one <script>/* older browser fallback */</script>.


Internet Explorer 11 will probably be around for 25+ years, knowing Microsoft's track record in Windows.

Though I'm surprised the author continued to support IE 10. IE 10 and below are quite dead.

https://www.w3counter.com/trends

https://analytics.usa.gov/

https://analytics.usa.gov/data/live/ie.json

Edit: Amazing to see there's still IE5 usage on US Gov analytics :)


Yeah, I still encounter IE10 and older sometimes -- looking at the wide audiences Micrio projects are used for, this ranges from grandparents' Windows XP machines to OS-locked government IE10 PCs.

Can't be helped, so an automatic fallback to the previous Micrio version is the least I can do.

I do really hope that there could be a _final_ update from MS to IE10 to at least support ES6. That would also make such a big difference.

One can wish..


That's like dropping support for Firefox or Edge or Safari.

Chrome 69.28%, Edge 7.75%, Firefox 7.48%, Internet Explorer 5.21%, Safari 3.73%, QQ 1.96%, Sogou Explorer 1.73%, Opera 1.12%, Yandex 0.90%, UC Browser 0.37%


IE isn't even in the top 6 in any of the regions you can click on this website: https://gs.statcounter.com/browser-market-share/all/europe.

I feel sorry for the people who continue working on IE because they think it's just as important as Edge/Safari/Firefox. :(


I have it at #4 in the desktop category, basically on par with Edge/Firefox and almost double Safari.

https://netmarketshare.com/browser-market-share.aspx?options...


Important or not, if your big corporate customers are willing to pay you big bucks to support it, how can you refuse? But yes, it does feel very counterproductive, and even more so over time, when you see that more and more of the popular libs drop support for ES5.


Wheelchair users are not even close to 1% of the population. Would you tell a person in a wheelchair to just use their legs when at your establishment?


It's not dead, it's alive and well on millions of machines still perfectly functional and usable.

There are many developers, and thus services, who are not too lazy to support it. I've been able to achieve compatibility down to IE3, and hoping to go down all the way to 1.0, as a lone developer writing a relatively complex project.

Retro-computing is growing at remarkable rates. But aside from that, there are also people out there using older devices not for the retro-computing cool, but because that's what they have.

Telling them to upgrade is like telling a wheelchair user that they need to upgrade to legs. After all, wheelchair users are probably less than 1% of the population, right?

If I were you, as a developer, I would be just a little bit ashamed and embarrassed of the cop-out attitude displayed in your comment.


> I've been able to achieve compatibility down to IE3, and hoping to go down all the way to 1.0, as a lone developer writing a relatively complex project.

You are perfectly free to spend your time as you see fit; however, you might notice that many projects are dropping support for old browsers. This frees up developer resources, helps with writing cleaner code (e.g. CSS grid as opposed to tables or floats; JS modules rather than huge JS files or complex bundlers; web components instead of imperative handling of all the update logic) and opens up new possibilities in the browser, including WebAssembly. This is clearly a win for developers, but it also ends up being a win for users.


Comparing IE usage to wheelchair users is disingenuous and gross. No one is physically unable to move on from Internet explorer 10. Times change, and standards do too.


>No one is physically unable to move on from Internet explorer 10.

Please rethink what you wrote here. You made an ableist and ignorant statement.

There are many, many people out there who are trying to access information resources who, for one reason or another, have no control over what browser they are using.

Corporate users. People with older devices they cannot upgrade. People who don't want to upgrade. People using public computers in libraries, shelters, and other assistance centers. People borrowing someone else's device or with a hand-me-down. People with devices they are emotionally attached to for whatever reason. The list goes on and on.

I don't know about you, but as a developer who likes to think of himself as conscientious, as a developer trying to conquer laziness and over-complexity, as a developer who thinks about users and greater good, I'm not going to consciously write all these users off just because Internet Explorer presents some challenges in writing compatible code.

If you are going down that path, please go to the bathroom, if you are privileged enough to access one, and have a long hard look at yourself in the mirror. Is that really the best you can do?


You're the one making weird ableist statements. You also list a bunch of types of people who aren't anything like disabled people like corporate users or people who don't want to upgrade.

Your holier than thou attitude really isn't welcome at Hacker News. I would prefer it if you didn't comment at all than see a reply like this. I'm so glad I don't have to work with someone like you.


It'd be interesting to get a true measure of the performance opportunities from WASM.

Every app has performance opportunities, but they're usually not related to the raw performance of the language; usually it's loading and backend latency.

People don't realize that V8 is really, really fast and always getting faster, and that WASM was never really super fast.

As V8 gets faster the delta opportunity for WASM is reduced at least in terms of raw, algorithmic performance.


I worked on V8, on Wasm, and on Wasm in V8. The two have separate goals. V8's priority for JavaScript is to run the trillions of lines of JavaScript code found in the wild well and to support the language's evolution. It is no longer primarily about top-level throughput performance on computational kernels. Wasm has the goal of high, predictable performance for numeric-heavy, legacy, and low-level code. Wasm is also focused on bringing more languages to the web than could reasonably be accommodated by a compile-to-JavaScript approach.


V8 is not the only implementation of WASM. Safari and Mozilla also provide implementations. Though Apple is probably dragging their feet here, Mozilla has been quite active with WASM development, supporting it in their browser, making sure Rust (which they created) can use it, and doing lots of developer support and outreach for WASM.

The word legacy is a bit biased here. From where I'm sitting, Javascript as implemented by browsers is increasingly the legacy language. Even most javascript webapps are actually transpiled to it. It's a compilation target more often than something people write natively. And as such it is used because until WASM came along, it was the only viable way to run anything on the web.

WASM provides developers a better compilation target that can ultimately support just about anything and already is used to run a wide variety of languages that have the ability to target it (in various stages of completion and maturity). Most mainstream languages at this point basically.

And of course WASM is not limited to the web. It's also showing up in edge computing solutions pushed by e.g. Digital Ocean, Cloudflare, etc. Many node.js applications can leverage WASM as well. It's probably used in a fair number of desktop applications built on Electron, like VS Code for example. People are even experimenting with embedding wasm in operating system kernels and firmware. It turns out that performance and sandboxing are very desirable properties in lots of places.

So, it's a general purpose runtime intended to sandbox and efficiently run just about any code. Including javascript interpreters ultimately. My prediction is that the latter will happen before the end of this decade and that Javascript will lose its status as a special language that is exclusively supported in browsers in addition to WASM. It will allow browser creators to focus on optimizing and securing WASM instead of WASM + Javascript.


I wholeheartedly agree with this. V8 is definitely incredibly impressive and keeps improving. However, Micrio uses a lot of low-level Matrix4 maths, which is substantially faster in Wasm.

The second point is also great; having the base Micrio engine as a Wasm module now really makes me itch to use it in different environments. Perhaps I can use it in a serverside rendering flow, or even on an embedded device with a touch screen? Perhaps even as a native mobile app component in the future.

Also from the developer point of view it offers improvements over JS. Type- and memory-wise it gives more control. It's great to be able to use (u)int, f32 and f64 types, whereas in JS this is impossible from a coding perspective. The buffers passed from Wasm to WebGL are all Float32Arrays, cast from f64s to f32s manually, giving me that 100% control instead of the black box that V8 gives.
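
A rough sketch of that handoff (the `memory` export name, offset and count are assumptions, not Micrio's actual code):

  // View a slice of the Wasm module's linear memory as 32-bit floats and
  // hand it straight to WebGL, with no per-frame copies or intermediate objects.
  const vertices = new Float32Array(
    instance.exports.memory.buffer, // exported linear memory (assumed export name)
    vertexByteOffset,               // hypothetical byte offset written by the Wasm side
    vertexCount * 2                 // hypothetical element count (x,y pairs)
  );
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.DYNAMIC_DRAW);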


Just for the record, the Wasm implementation in Chrome is a subsystem of V8. It's integrated into the main codebase and shares a lot of runtime services with the JavaScript support, primarily the optimizing compiler tier. So improvements to the (backend of the) optimizing compiler in V8 benefit both JavaScript and Wasm. There is also a baseline JIT compiler built solely for faster Wasm startup, which is not used by JS.

The team (my former team) that maintains Wasm is a subteam of V8 and they all work together. This is unlike (P)NaCl which was both a separate team and a separate subsystem within Chrome. All of the former NaCl people work on Wasm now, primarily tooling and standards.


That's true. But Wasm also required some extra codegen and optimizations, for example for i64 types (also SIMD and atomics). In addition, there are CFG optimizations specific to WebAssembly, for example https://bugs.chromium.org/p/v8/issues/detail?id=11298


"predictable performance for numeric-heavy, legacy"

You mean all that 'legacy' C++ meant for the web?

Yes, I see what someone might mean, i.e. AutoCAD being able to re-use a lot of code, but those are small use cases.

And as for 'predictability'... is it really predictable in application, given the amount of quasi-compilation that has to happen to get to WASM? And, more importantly, does it matter if it's predictable if it's slow, or rather, if V8 is 'fast enough' in comparison?

The fact is, WASM is a Zombie Technology. It's odd that it continues to exist; like Bitcoin, it has a few vested interests, and the 'dream' seems real, but there are just very few material use cases, and the practical reality of doing anything functional with WASM is very small.

It's an intellectual concept, created without any specific customers/users in mind, and as such has very little adoption.

Because V8 is 'so good' and 'fit for purpose', and increasingly so (whatever the stated objectives are), the real opportunity for WASM just fades.

Porting old C++ is a narrow case, and writing new C++ with a janky toolchain, and very limited ways to interact with networking, and UI ... all for what reason again?


I think this line from the parent is spot on:

> Wasm has the goal of high, predictable performance for numeric-heavy, legacy-, and low-level code

The OP was porting an application that is in the "numeric-heavy" class, so he got a good profit from his endeavor, as this is the right case for it.

In the low-level class there are great cases like porting FFmpeg and using it straight from the browser. Of course it will not be as usable as a native application that links to FFmpeg, but it will make a lot of cool things possible.

Legacy: now, for instance, Adobe could port Flash to WebAssembly with probably the same result. Or we could have that sweet Atari emulator right in the browser.

This is not hype like some other points of view that see WASM as a universal IR that will take over the world.

I doubt a lot of JavaScript and TypeScript code will see major benefits from porting to WebAssembly if it doesn't fall into one of those three categories, so most people don't need to see it as a menace, as JavaScript is still being treated as a first-class citizen in the VMs.


V8 JS is equally fast or even faster for the same code after 3-4 runs. Cold paths, however, are usually 2 orders of magnitude slower than what you can achieve with WASM, for the same code. So it kind of depends on what you're building. Do you need a <5ms response at all times or can you live with an ~150ms response sometimes? 150ms randomly blocking code can be unacceptable for some cases and WASM doesn't suffer from that.

That said, you can't simply swap certain functions for WASM. There is an overhead for calling WASM functions, though very tiny - so if you're really going for performance then your app should be entirely living in WASM and interfacing directly with native browser APIs.


So that's a good point - however, in some recent benchmarks I've run comparing V8 against the GraalVM JS module, it seems that V8 does move to that optimal state fairly quickly, and 3-4 runs for a specific line of code is usually a small price to pay, given that the alternative would be to use an entirely different stack which, even a decade in, barely integrates with the host environment - and one must account for the inevitable 'bridging' costs between the WASM and JS domains.

For example, some kind of 'pre optimized' 'JS engine byte code' that a regular JS engine could load as a module - which could be used by any other, regular JS code, would be considerably more optimal in terms of adapting to real world needs.

Oddly, that could probably be done right now if the world just happened to agree on a JS implementation like V8. I'm not suggesting we all do that, but at least we should be aware of the price we are paying.

I give WASM a 50/50 chance of becoming relevant, some of the new APIs may make a big difference, it remains to be seen if they do.


I agree, I'd love to be able to explicitly annotate a certain function for pre-optimization, even if there was some initialization cost, as long as it guaranteed its performance throughout the app's lifetime. I'm not sure if that can be achieved with the dynamic nature of JS. Anecdotally, in my benchmarks what I observe is that you get optimizations only for things that run consecutively, i.e. in a loop. When you switch to other paths and return, it treats it like a cold path all over again. So when you're switching paths very often you may never get any optimization benefits, and then the only option, at least for now, is to use WASM.

Also, all of this without ever mentioning GC, which is extremely hard to avoid in vanilla JS and will certainly cause hiccups.

I also agree that tooling around WASM could be improved, but we should give it some time given that it's a fairly new thing (I think it's less than a decade old even if you include asm.js). I believe it will become relevant not necessarily because it can run on the web, but because it's a sandboxed target environment that is extremely portable. WASM runs almost everywhere already and I think that's its main winning point, not so much the browser side.


I'm curious to know more about their statement that loading textures in a WebWorker caused additional performance overhead versus loading them in the main thread. I have my own WebWorker based texture loading system and it is significantly better than not using it.

At the risk of stating the obvious, as OP seems pretty well on top of the Web API game, did they remember to mark the texture data ArrayBuffer or ImageBitmap as transferable?

Though, I suppose my metric is "number of dropped frames", not "total time to load", and I haven't actually measured the latter. My project is a WebXR project, so the user is wearing a VR headset, wherein dropped frames are the devil.


I have to admit that I didn't dive 100% into what made this difference exactly. It just worked better without webworkers.

I just dived into the git history, and it turns out I didn't use the buffers as transferable. Perhaps that was it! I'll definitely check that out later, thank you for pointing this out!


A follow-up on this one. Now textures are downloaded by WebWorkers using the transferable buffers.

Over multiple benchmark runs this results in another 32% CPU performance gain (WebWorkers vs single thread texture downloads)!

Thank you for pointing this out to me!


No problem. I just happen to be working on a somewhat similar project. It's strictly 360 photos and it's built around our company's foreign-language instruction services, but I've had to deal with (and am currently dealing with) several similar issues. Indeed, the next major tech change I had planned was to also implement my own WebGL code and get rid of Three.js. I'd like to see if I could get the WebGL code running in a WebWorker with OffscreenCanvas, but I'm as yet unsure if that will work with WebXR.


> it turns out I didn't use the buffers as transferable

Not using transferable has HUGE performance hits, it can take even seconds to transfer larger buffers.
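
For anyone running into this: the transfer list is just the second argument to postMessage. A sketch (the worker file and handler names are made up):

  // worker.js: fetch a tile and hand the bytes back by transfer, not by copy.
  self.onmessage = async (e) => {
    const buffer = await (await fetch(e.data.url)).arrayBuffer();
    // Listing `buffer` in the transfer list moves ownership to the main
    // thread instead of structured-cloning (copying) the whole thing.
    self.postMessage({ url: e.data.url, buffer }, [buffer]);
  };

  // Main thread:
  worker.onmessage = (e) => uploadTexture(e.data.buffer); // uploadTexture is hypothetical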


Great achievement. One remark though: it sounds like the author makes a lot of assumptions that something is better without actually benchmarking before and after. On this kind of project, one should measure the impact of each step. Maybe the new version is only faster because it uses WebGL, maybe the WASM code is actually slower... Or is it the opposite?

In my youth, I did a lot of x86 assembly programming. It's very easy to end up with code slower than compiled high-level languages. Here's an example: aligning memory buffers made a piece of code 50% faster (the bottleneck was memory bandwidth). That's the sort of optimization a compiler might (or might not) do for you. With ASM languages you have the control, so you're responsible for doing it.

Michael Abrash's Black Book is a bible in terms of approach to software optimization. It's old but a nice read. It's out of print, but a free ebook is maintained here: https://github.com/jagregory/abrash-black-book


I agree, a lot of assumptions to why things are faster. As far as I know WASM is not necessarily faster than native JS if you write the JS code properly (typed arrays, don't generate garbage, object pooling, etc.).


Nicely written article and very interesting read!

> To be sure that there was as little background noise as possible, I ran all tests after a fresh reboot, making sure no unnecessary background processes, such as Windows Update, were running.

This made me laugh =)

On a more relevant note, it's probably worth pointing out to use production builds and to run the browser in safe mode or with a clean profile (i.e. without any interfering extensions).


True fun fact: the author, Marcel Duin, got married yesterday!


Amazing write-up, thank you!

For the initial benchmark:

> Over the 2 minute measured timespan, there was only 14% less CPU usage than with 2.9, tested over a number of trials.

> Also, the red dots in the timeline at the top indicate blocked frames, or frame skips.

In terms of power usage, 14% less CPU while instead using the GPU for rendering probably leads to higher actual power usage. Also, if the frame was skipped/blocked while waiting for the GPU, this can also explain the lower CPU usage (the CPU was idle while it was waiting for the GPU). My point is, this was not necessarily an improvement, especially considering that frame skips are the worst thing that can happen.

I think a more fair comparison would have been to compare WebGL in JS vs WebGL through AssemblyScript, not Canvas2D vs WebGL in AssemblyScript, as now parts of the improvements come from moving computation from CPU to GPU, not necessarily from using WebAssembly. This is mentioned in the conclusion: "Are the performance gains of the new version fully attributable to WebAssembly? Most likely not.".

> 65% less CPU used than in 2.9

This is amazing. It would be good to also check the GPU usage delta.

> Don't use unnecessary WebWorkers

The thing about WebWorkers is that you have to be really careful with memory transfers, and use transferable objects to avoid memory copying.

It would also be interesting to see if using OffscreenCanvas can lead to better performance for this high-res image use case.
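
(For reference, the OffscreenCanvas handoff being asked about looks roughly like this; a sketch assuming `canvas` and `worker` already exist:)

  // Main thread: transfer control of the canvas to a worker.
  const offscreen = canvas.transferControlToOffscreen();
  worker.postMessage({ canvas: offscreen }, [offscreen]);

  // worker.js: render with WebGL off the main thread.
  self.onmessage = (e) => {
    const gl = e.data.canvas.getContext('webgl');
    // ...render loop...
  };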

> I could gzip them before base64-encoding them

This was a clever solution but doesn't the extra JS parsing (for the new library) and CPU usage (for decoding the file) take more time than the original solution?


Thank you for your points!

As I said in the article about the benchmarking-- I definitely did it the "quick and dirty" way, testing the whole application 2.9 vs 3.0 on just my device-- not testing specifically for differences in power usage, GPU usage, etc. I would love to have enough time/resources to do more microtests as you describe.

Micrio used OffscreenCanvas for a long time. Turns out (apart from occasional flickering screens in Safari) while it did great for Canvas2D operations, it didn't seem to make a difference for WebGL. It actually adds more rendering steps, since you're basically rendering to a framebuffer first.

As someone else pointed out below, with loading the textures using WebWorkers, I indeed didn't use the transferable objects, so it was basically copying a lot of buffers, explaining the performance hits. I'll definitely be experimenting with that.

About the gzipping solution; I've just run a benchmark. It takes 14ms to gunzip & parse the JS base64 to ES6 string, and 59ms to run it inside the `new Function()`-parser. Comparing that to 65% CPU saved per frame drawn (not to mention the general V8 ES6 optimizations compared to the previous ES5 JS-version), I think the cost is worth it :)
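
(For readers unfamiliar with that pattern, the decode path being timed is roughly the following; a sketch that assumes pako for the gunzip step, since the library used isn't named here:)

  // base64 string -> bytes -> gunzip -> JS source string -> evaluate
  const bytes  = Uint8Array.from(atob(payload), (c) => c.charCodeAt(0)); // payload: the embedded base64
  const source = pako.ungzip(bytes, { to: 'string' });                   // the ~14ms step (pako assumed)
  new Function(source)();                                                // the ~59ms step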


This article is different from other WebAssembly articles in that it also talks about the gotchas. I was going to comment "use WebGL" but that also seems to be covered.


I'm looking forward to the day when WebAssembly modules have DOM access and we can import them via script tags


Tangential question, what does the !! operator do in:

  const supported = !! self.WebAssembly;


It's just the not "!" operator twice.

The first NOT operator (from right to left) casts the following operand to the boolean primitive type and negates the value.

The second NOT operator reverses the truth value of the boolean, in order to regain the original truthiness.

!self.WebAssembly -> false if self.WebAssembly is truthy (e.g. the object exists), true otherwise

!!self.WebAssembly = !(!self.WebAssembly) -> true if WebAssembly is defined, false otherwise.
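
In other words, it's a compact boolean cast:

  const supported = !!self.WebAssembly;        // true when WebAssembly exists
  const sameThing = Boolean(self.WebAssembly); // equivalent, more explicit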


coerces the object to a boolean


Of course! :)


! is the operator, but it is used twice.


> the Wasm-functions you call will return immediately

That can't be right. Doesn't JS block on the call, like any old function? The wasm function returns a value, not a promise/thenable. So if the wasm function takes a long time/forever, it will not "return immediately".


I think by "immediately" he specifically meant "synchronously". He contrasted with web workers which run in separate threads, which means that they send messages back in an asynchronous promise.


They're expressing surprise that, unlike their experience with WebWorkers, you get to interface with Wasm/JS directly rather than through an async boundary.
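
Side by side, the contrast being described (function, export and handler names here are hypothetical):

  // Wasm export: an ordinary synchronous call, result available immediately.
  const zoom = instance.exports.computeZoom(x, y);

  // Web Worker: results only ever come back asynchronously, via messages.
  worker.postMessage({ x, y });
  worker.onmessage = (e) => applyZoom(e.data);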


Great write up of the issues in migration to wasm.

It should give insights to anyone contemplating replacing JavaScript with X technology.

JavaScript is the most popular and deployed full stack platform. It simply works with web workflows and development patterns. All the issues with JavaScript are trivial and new features and frameworks are available to overcome most issues.


> All the issues with JavaScript are trivial

If the problems with JavaScript are trivial, why are the solutions such as webpack, babel, and typescript so complicated? Many companies have whole teams of people dedicated to maintaining solutions to these problems.

> and new features and frameworks are available to overcome most issues.

Producing JavaScript apps that are both fast to run and easy to maintain does not feel like a problem that has been solved yet.


> If the problems with JavaScript are trivial, why are the solutions such as webpack, babel, and typescript so complicated? Many companies have whole teams of people dedicated to maintaining solutions to these problems.

Because building user-facing clients that run on a bunch of moving-target user machines instead of one big server has fundamental differences and challenges that afflict all client development, no matter which language you're using or which platform you're targeting. Nor have centralized proprietary platforms that work on a single line of phones (Android, iOS) fared much better than the opposite, disjointed, organic approach of volunteers (the web).

Why? Because it's hard, and hard for everyone.

> Producing JavaScript apps that are both fast to run and easy to maintain does not feel like a problem that has been solved yet.

You say that like you think anyone has solved it. How to build UIs in a world of constantly changing consumer tech is still an open question because it's hard, and it will always be. Apple is still saddled with spaghetti-code KVO systems from the '70s, and their new-fangled SwiftUI platform is still so buggy that the last three places I held an iOS contract with were clinging to UIKit.


> Producing JavaScript apps that are both fast to run and easy to maintain does not feel like a problem that has been solved yet.

If the language is not part of the solution, maybe it's part of the problem?



