> I think I will continue using it with WASM polyfills

This is the answer. You can also build your own application-specific codecs this way.

I've been exploring a variation of JPEG/MPEG that uses arithmetic coding and larger block sizes. Virtually all of the fun patented stuff we couldn't use in the early '00s is now in the public domain.
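
For a sense of how such a polyfill hangs together, here's a minimal sketch. The codec.wasm module and its alloc/decode/width/height exports are placeholders for whatever your codec actually exposes, not a real library:

    // Hypothetical example: decode a custom image format with a WASM
    // module and paint the result into a <canvas>. The module name and
    // all of its exports are assumptions, not a real library.
    async function renderCustomImage(url, canvas) {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch('codec.wasm'));
      const bytes = new Uint8Array(await (await fetch(url)).arrayBuffer());

      // Copy the compressed bytes into WASM linear memory
      // (assumes the module exports alloc() and memory).
      const ptr = instance.exports.alloc(bytes.length);
      new Uint8Array(instance.exports.memory.buffer, ptr, bytes.length)
        .set(bytes);

      // Assume decode() writes RGBA pixels and returns a pointer;
      // width()/height() are likewise assumed exports.
      const outPtr = instance.exports.decode(ptr, bytes.length);
      const w = instance.exports.width(), h = instance.exports.height();
      const pixels = new Uint8ClampedArray(
        instance.exports.memory.buffer, outPtr, w * h * 4);

      canvas.width = w; canvas.height = h;
      canvas.getContext('2d').putImageData(new ImageData(pixels, w, h), 0, 0);
    }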




The only thing worse than an image that won't display without JavaScript is an image that doesn't even exist without WebAssembly. I get that Google has removed support, so you're looking for other options, but maybe consider putting up an announcement on your site(s) that Google Chrome is not supported, instead of making it worse for all other browsers.


> I get that Google has removed support, so you're looking for other options, but maybe consider putting up an announcement on your site(s) that Google Chrome is not supported, instead of making it worse for all other browsers.

I have no horse in the JPEG XL race. I am not even necessarily focused on images. I see value in using WASM (and/or JS) for application-specific codecs. That is all.

None of my decisions have any ability to make things "worse" for other browsers, especially when those browser vendors never intended to support my application-specific codec to begin with.


> making it worse for all other browsers.

What other browsers? Desktop Firefox users who changed a flag in about:config? That's practically nobody.


Also, Chrome(/ium) has something like 70% market share. Not supporting it is pretty much the equivalent of shooting yourself in the foot.


If you're only making websites for profit, yes. If you're a human person making websites for other humans, then only targeting the standards-breaking Chrome is bad for you and for them. Gotta be the change you want to see in the world, even if you know everyone else is doing the wrong thing for profit.


One of the main benefits of JPEG XL is its superior progressive decoding, and polyfills will never be able to replicate that feature.


Why do you think that polyfills can't implement progressive decoding?

It's simply a matter of using <canvas> for progressive rendering.


A canvas has to retain all of its pixels unconditionally, unlike native images, which can be unloaded from memory as needed. It is technically possible to implement all the other features (using service workers or CSS Houdini), but tons of limitations apply, and they can easily outstrip the supposed benefits.


Can't you just read the stream and emit the image(s) as they get downloaded, to be rendered in a canvas, exactly as a native implementation would do?
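
Roughly, the idea would look like the sketch below, with a hypothetical decodePartial() standing in for a real incremental decoder:

    // Sketch: feed bytes to a progressive decoder as they arrive and
    // repaint the canvas after each chunk. decodePartial() is a
    // hypothetical stand-in for a real incremental JXL decoder.
    async function progressiveRender(url, canvas) {
      const ctx = canvas.getContext('2d');
      const reader = (await fetch(url)).body.getReader();
      const chunks = [];
      for (;;) {
        const { value, done } = await reader.read();
        if (done) break;
        chunks.push(value);
        // Hypothetical: returns the best ImageData reconstructable
        // from the bytes seen so far, or null if nothing decodes yet.
        const frame = decodePartial(chunks);
        if (frame) {
          canvas.width = frame.width;
          canvas.height = frame.height;
          ctx.putImageData(frame, 0, 0);
        }
      }
    }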


My comment was apparently too concise to give you a sense of the complications. I can think of these concrete problems:

- Rendering images from a different origin without CORS. This is a fundamental limitation of any JS or WebAssembly solution and can't be fixed. Thankfully this use case is relatively rare.

- Not all approaches can provide a seamless upgrade. For example, if you replace every `<img src="foo.jxl">` with a canvas, the DOM will be changed and anything expecting the element to be an HTMLImageElement will break. Likewise, the CSS Painting API [1] (a relevant part of CSS Houdini) requires you to explicitly write `paint(foo)` everywhere. The only seamless solution is therefore a service worker, but it can't introduce any new image format; it can only convert to natively supported formats (see the sketch below). And browsers currently don't have a "raw" image format for this purpose. JXL.js [2], for example, had to use JPEG as a delivery format because other formats were too slow, as I've been told.

- It is very hard to check whether a certain image is visible or not, and to react accordingly. This is what I intended to imply by saying that a canvas has to unconditionally retain all pixels: if implementations can't decide whether it's safe to unload images, they can't do so, and memory will be filled with invisible images in the form of canvases. Browsers do have the ground truth, and so can safely unload currently invisible images from memory when memory pressure is high.

[1] https://developer.mozilla.org/en-US/docs/Web/API/CSS_Paintin...

[2] https://github.com/niutech/jxl.js
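
A minimal sketch of the service-worker route described above; transcodeToJpeg() is hypothetical and would in practice wrap a WASM decoder/encoder pair:

    // Sketch: intercept requests for .jxl files and answer with a
    // natively supported format. transcodeToJpeg() is hypothetical.
    self.addEventListener('fetch', (event) => {
      const url = new URL(event.request.url);
      if (!url.pathname.endsWith('.jxl')) return;
      event.respondWith((async () => {
        const response = await fetch(event.request);
        const jxlBytes = await response.arrayBuffer();
        const jpegBytes = await transcodeToJpeg(jxlBytes);  // hypothetical
        return new Response(jpegBytes, {
          headers: { 'Content-Type': 'image/jpeg' },
        });
      })());
    });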


> Not all approaches can provide a seamless upgrade. For example, if you replace every `<img src="foo.jxl">` with a canvas, the DOM will be changed and anything expecting the element to be an HTMLImageElement will break. Likewise, the CSS Painting API [1] (a relevant part of CSS Houdini) requires you to explicitly write `paint(foo)` everywhere. The only seamless solution is therefore a service worker, but it can't introduce any new image format; it can only convert to natively supported formats. And browsers currently don't have a "raw" image format for this purpose. JXL.js [2], for example, had to use JPEG as a delivery format because other formats were too slow, as I've been told.

You can get around many of these compatibility issues by creating a custom element that inherits from HTMLImageElement. This provides API compatibility. For CSS compatibility, the elements you replace in a MutationObserver would keep the same tag name but use a different namespace.
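
A minimal sketch of that approach, assuming a hypothetical decodeJxlToBlobURL() backed by a WASM decoder (note that Safari doesn't support customized built-in elements):

    // Sketch: a customized built-in element that stays an
    // HTMLImageElement, so instanceof checks and the img API keep
    // working. decodeJxlToBlobURL() is hypothetical.
    class JXLImage extends HTMLImageElement {
      static observedAttributes = ['src'];
      attributeChangedCallback(name, oldValue, newValue) {
        if (name === 'src' && newValue.endsWith('.jxl')) {
          decodeJxlToBlobURL(newValue)  // hypothetical
            .then((blobURL) => { this.src = blobURL; });
          // Setting src re-fires the callback, but the blob: URL
          // fails the .jxl check, so there is no recursion.
        }
      }
    }
    customElements.define('jxl-img', JXLImage, { extends: 'img' });
    // Usage: <img is="jxl-img" src="photo.jxl">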

For the CSS compatibility trick, see https://eligrey.com/demos/hotlink.js/ which replaces images with CSS-compatible (not HTMLImageElement-compatible) iframes.

> - It is very hard to check whether a certain image is visible or not, and to react accordingly.

You can use Element.checkVisibility()¹ and the contentvisibilityautostatechanged event²·³ to do this (see the sketch after the footnotes). Browser support is currently limited to Chromium-based browsers.

1. https://drafts.csswg.org/cssom-view/#dom-element-checkvisibi...

2. https://github.com/vmpstr/web-proposals/blob/main/explainers...

3. https://caniuse.com/mdn-api_element_contentvisibilityautosta...
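
A minimal sketch of how those two APIs could drive unloading, assuming a hypothetical redecodeInto() for the reload path:

    // Sketch: drop a decoded canvas's pixels when its container stops
    // being rendered, and re-decode when it is rendered again.
    // Chromium-only for now; redecodeInto() is hypothetical.
    const wrapper = canvas.parentElement;
    wrapper.style.contentVisibility = 'auto';
    wrapper.addEventListener('contentvisibilityautostatechanged', (e) => {
      if (e.skipped) {
        canvas.width = canvas.height = 0;  // free the backing store
      } else if (canvas.checkVisibility()) {
        redecodeInto(canvas);  // hypothetical re-decode step
      }
    });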


Thank you for pointing out contentvisibilityautostatechanged; I was aware of `content-visibility` but didn't know that it has an associated event. I'm less sure about the CSS compatibility: hotlink.js, for example, used an iframe, which opens a whole can of worms.


You don't need to replace the img with a canvas in the DOM; you can capture the canvas output as a data URL.


Can't reply to the reply to your reply, but... Blobs can be used here to avoid the inefficiency of data URL conversions.
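
A sketch of the blob variant, using toBlob() and an object URL instead of the base64 round trip that toDataURL() forces on every update:

    // Sketch: push a canvas frame to an <img> via a Blob and an
    // object URL, revoking the previous URL to avoid leaking memory.
    function pushFrame(canvas, img) {
      canvas.toBlob((blob) => {
        const old = img.src;
        img.src = URL.createObjectURL(blob);
        if (old.startsWith('blob:')) URL.revokeObjectURL(old);
      }, 'image/png');
    }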


Thanks for the tip! (Not for JPEG XL specifically, but I'll definitely check that out and update some code where I use data URLs instead of object URLs accordingly.)


This is exactly what I use in my JXL.js, along with Web Workers and OffscreenCanvas.


You can, but that is hugely inefficient. Every additional draw to the canvas has to generate a new data URL for the image to be progressively decoded.


I've just added a config parameter to JXL.js for choosing the target image type: JPG/PNG/WebP. Keep in mind that PNGs can take a lot of memory!


> It is very hard to check whether a certain image is visible or not

Canvases are rectangles. The viewport is a rectangle. Checking if rectangles overlap is easy.
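
As a sketch, that naive check looks like this (it only tests geometric overlap with the viewport):

    // Sketch: does the canvas's bounding rectangle intersect the
    // viewport rectangle?
    function overlapsViewport(canvas) {
      const r = canvas.getBoundingClientRect();
      return r.bottom > 0 && r.right > 0 &&
             r.top < innerHeight && r.left < innerWidth;
    }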


"The image is visible" != "the canvas and the viewport overlap", and the latter is not even a good enough approximation (the image can be obscured by other layers, for example). Intersection Observer v2 takes us a bit closer, but visibility in its definition (not obscured at all) doesn't fully agree with what we want (some pixels visible, with some false positives allowed).


This is what I did in JXL.js Multithread (https://github.com/niutech/jxl.js#multithread-version), but instead of a <canvas> I am pushing the blob to an <img>.


Do not do this. Your website will become very slow. WASM does not have the hardware acceleration necessary to do codecs efficiently.


> WASM does not have the hardware acceleration necessary to do codecs efficiently.

See: https://jsmpeg.com

and: https://jsmpeg.com/perf.html



