Hacker News

This is awesome.

Thank you for sharing.

My ancient Ruby and Node.js code all broke because I didn't pin dependencies. As a result I've got software that is unrunnable.

More software will become unrunnable as time goes on. I don't know of many trends that prevent software from becoming unbuildable and unrunnable due to change, except maybe reproducible builds and hermetic builds.

Given the complexity of platform toolchains, libc versioning, and static vs. dynamic linking, I suspect preserving software is very difficult.

It seems the difficult part of computing is not doing + - × ÷ on numbers, but arranging information into the right places in order to do it.

Logistics and package management are difficult to get right.

I think Java got something right. Bytecode is long-lasting: the JVM can be rewritten for new platforms and architectures.

I am writing my own language, implemented as an assembly-style interpreter plus a compiler targeting that interpreter. This speeds up development.

The hard thing I find more interesting than bytecode or virtual machines is INTEROP.

Think of the AMD64 SysV ABI, which defines the registers used for the C calling convention, and the Linux system call interface.

Mozilla abandoned XPCOM extensions; part of the reason was the performance of the interop between JavaScript and C++.

If I could run a virtual machine and interop with modern code, that would mean the software was usable for longer.




The JVM is an interesting use case.

Indeed, at the compiled level, bytecode is mostly long-lasting.

I haven’t fired up a 10-year-old jar file recently, but it would not surprise me if it Just Worked.

The success of that is twofold. First is simply that whatever changes are being made to the JVM, they’re mostly forward looking and don’t deprecate running code.

The other is that the conventional packaging mechanic is, essentially, a static binary. A “fat jar” is the term of art, with all of the dependencies bundled in.

But there are still potential problems. They’ve been removing large subsystems from the JDK of late; XML binding (JAXB), web services, and JavaFX are poster examples. So legacy binaries depending on those will fail outright.

These can be added back to the Java runtime as ordinary dependencies, but it’s still “one more thing”.
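For instance, code that used JAXB (dropped from the JDK in Java 11 by JEP 320) can pull it back in as a regular dependency. A sketch using the Jakarta coordinates; the version shown is illustrative, use whatever is current:

```xml
<!-- pom.xml: restore the XML-binding API the JDK no longer ships -->
<dependency>
    <groupId>jakarta.xml.bind</groupId>
    <artifactId>jakarta.xml.bind-api</artifactId>
    <version>4.0.0</version>
</dependency>
```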

Of course, on the source code side, Java suffers dependency hell and code rot along with the best of them. Network-based dependencies up and vanish. Long-standing projects may not publish 10-year-old jar files any more.

Also, Java has had other clods dropped into its churn. Oracle shutting down the java.net website left a huge, sudden black hole in the community consciousness of Java. Overnight, thousands of articles, blog posts, forum entries, and other artifacts vanished like Keyser Söze, leaving behind a debris field of dead links across the internet.

So, to be fair, the JVM is a boon. I really like Java, and it’s still going strong. The VM architecture and the comprehensive nature of the Java runtime have made moving running code across systems much easier. As someone with an enterprise Java background, used to deploying WAR and EAR files, I got to mostly avoid Docker and that entire family of infrastructure. Install a JDK, install an app server, all fairly trivial to isolate, and the system is ready to go.

But in the large, it takes more than a VM to get things accomplished. There’s always an eco-system at play.

And one primary characteristic of eco-systems is they evolve. Time marches on, and waits for no one.


Thank you for your thoughtful and interesting comment.

I too like Java a lot and think it's a great technology.

I can see how a runtime for a compiler (such as the JVM) doesn't need to change much over time: if the compiler produces the right code in 2000, then the code is probably still right in 2020, it just could use more features of the ISA that were introduced since then.

On the other hand, the design of platform, library, and framework code is an ecosystem that is bound to transform over time as better approaches to solving technical and business problems are found.


> My ancient Ruby and Node.js code all broke because I didn't pin dependencies. As a result I've got software that is unrunnable.

You can simply remove the ^ symbol from the versions listed in package.json, and npm will use exactly the version you originally added.
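Concretely (the package name here is illustrative): `"^1.3.0"` lets npm install any 1.x release, while the bare version pins that dependency exactly:

```json
{
  "dependencies": {
    "some-lib": "1.3.0"
  }
}
```

Note this pins only direct dependencies; transitive dependencies can still float unless a package-lock.json is committed alongside.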


Besides the issue of whether you pin or don't pin your dependencies, the problem is that node packages can depend on external native code. You can have several more layers of dependencies in there. If, for any reason, those native packages won't install/run on your machine, your dependencies can still break under you, even if you pin them. Python and Ruby have the same vulnerability when it comes to dependencies breaking.


That’s one of the many advantages of building for the web. It won’t ever be unsupported (for some reasonable definition of never).

You get text rendering, canvas, audio, webgl… it’s a pretty wide platform.


Except that's just not true. It's already broken/incompatible in many places across browsers. You may not notice if you're just doing basic HTML/CSS, but if you do anything slightly more dynamic, you're going to notice. An ever-expanding set of complex APIs also makes it more and more likely that bugs will go unseen and unfixed. See two examples I detailed in the blog post.


Sorry, I was reading the comments before reading your article.

Getting around pointer events inconsistencies is a lot easier than building your own cross-platform VM, of course. But the project looks awesome and seems like a great initiative.

I imagine there will also be differences in the way macOS, Linux and Windows handle graphics, IO, audio, etc, that will eventually leak to UVM, it's just the nature of the challenge.


> Getting around pointer events inconsistencies is a lot easier than building your own cross-platform VM

For sure, and I did. I wrote some code that makes an invisible fullscreen div appear just at the right time to prevent pointer events being triggered when they shouldn't be #cleancode. It's just frustrating that things like that need to be done, and how often they might need to be done.

> I imagine there will also be differences in the way macOS, Linux and Windows handle graphics, IO, audio, etc, that will eventually leak to UVM, it's just the nature of the challenge.

At the moment you can create a window with one function call, and another function call copies one frame's worth of pixels into the window. The pixel format is BGRA byte order, 32 bits per pixel, and that's the only option. I'm going with really basic, low-level APIs like that because they're harder to get wrong.

Audio is going to be equally simple. There could be cross-platform differences in things like the amount of latency to write audio, but I'll do my best to make the APIs extremely portable and hard to get wrong.



