20x Faster Background Removal in the Browser Using ONNX Runtime with WebGPU (img.ly)
165 points by buss_jan 82 days ago | 31 comments



Background Removal can be thought of as Foreground Segmentation, inverted. That is no trivial feat; my undergraduate thesis was on segmentation (but using only “mechanical” approaches, no NNs, etc.), hence my appreciation!

But here’s something I don’t understand (and someone please correct me if I’m wrong!): I do understand that NNs are to software what FPGAs are to hardware, and the ability to pick any node and mess with it (delete, clone, add or remove connections, change link weights, swap out the activation functions, etc.) means they’re perfect for evolutionary algorithms that mutate, spawn, and cull these NNs until they solve some problem (e.g. playing Super Mario on a NES (props to Tom7), or, in this case, photo background segmentation).

…now, assuming the analogy to FPGAs still holds, with NNs being an incredibly inefficient way to encode and execute steps in a data-processing pipeline (but very efficient at evolving that pipeline), doesn’t it then mean both that whatever process is encoded in the NN should be possible to represent in some more efficient form (i.e. computer program code, even if it’s highly parallelised), and that “compiling” it down is essential for performance? And if so, why are models/systems like this being kept in NN form?

(I look forward to revisiting this post a decade from now and musing at my current misconceptions)


For many tasks that neural networks can solve, there are traditional algorithms that are more compact (lines of source code vs. size of neural network parameters), but they are not always faster and often produce results of lower quality. For a fair comparison, you have to weigh the quality of the result against the computation time, which is not straightforward since those are two competing goals. That being said, neural networks perform quite well for two reasons:

1. They can produce approximate solutions which are often good enough in practice and faster than exact algorithmic solutions.

2. Neural networks benefit from billions of dollars of research into how to make them run faster, so even if they technically require more TFLOPs to compute, they are still faster than traditional algorithms that are not extremely well optimized.

Lastly, development time is also important. It is much easier to train a neural network on some large dataset than to come up with an algorithm that works for all kinds of edge cases. To be fair, neural networks might fail catastrophically when they encounter data that they have not been trained on, but maybe it is possible to collect more training data for this specific case.

I have not discussed any methods to compress and simplify already trained models here (model distillation, quantization, pruning, low-rank approximation, and probably many more that I've forgotten), but they all tip the scales in favor of neural networks.
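
To make the quantization idea concrete, here is a toy sketch (illustrative only, not any particular toolkit's API): map float32 weights onto int8 with a scale and zero point, which shrinks the weights roughly 4x and lets integer kernels do the matrix math.

  // Toy post-training quantization: float32 weights -> int8 plus scale/zero-point.
  // Real toolchains (ONNX Runtime, TF, etc.) do this per-tensor or per-channel.
  function quantizeInt8(weights: Float32Array): { q: Int8Array; scale: number; zeroPoint: number } {
    let min = Infinity, max = -Infinity;
    for (const w of weights) { min = Math.min(min, w); max = Math.max(max, w); }
    const scale = (max - min) / 255 || 1;             // spread the float range over 256 buckets
    const zeroPoint = Math.round(-128 - min / scale); // so that `min` maps to -128
    const q = new Int8Array(weights.length);
    for (let i = 0; i < weights.length; i++) {
      q[i] = Math.max(-128, Math.min(127, Math.round(weights[i] / scale + zeroPoint)));
    }
    return { q, scale, zeroPoint };
  }

  function dequantizeInt8(q: Int8Array, scale: number, zeroPoint: number): Float32Array {
    return Float32Array.from(q, (v) => (v - zeroPoint) * scale);
  }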


"Neural networks are the second best way of doing just about anything." ~ John Denker

It's an old quote that, although not 100% accurate anymore, still sums up my feelings quite nicely.


There is some work to convert NNs to decision trees.

https://towardsdatascience.com/neural-networks-as-decision-t...

https://arxiv.org/abs/2210.05189

I haven't reviewed any of it; I only know of it tangentially.

https://www.semanticscholar.org/paper/Converting-A-Trained-N...

Distilling a Neural Network Into a Soft Decision Tree https://arxiv.org/abs/1711.09784

GradTree: Learning Axis-Aligned Decision Trees with Gradient Descent https://arxiv.org/abs/2305.03515


NNs are, in a way, already "compiled". If all you want to do is inference (forward pass), then you mostly do a lot of matrix multiplications. It's the training pass that requires building up extra scaffolding to track gradients and such.
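
To make that concrete, a frozen dense layer at inference time is just a matrix-vector product plus a nonlinearity; a toy sketch:

  // Inference for one dense layer: y = relu(W·x + b). Training would also have to
  // record intermediates for backprop; the forward pass alone does not.
  function denseForward(W: number[][], b: number[], x: number[]): number[] {
    return W.map((row, i) => {
      let acc = b[i];
      for (let j = 0; j < x.length; j++) acc += row[j] * x[j];
      return Math.max(0, acc); // ReLU
    });
  }

  // A "compiled" network is just these layers chained together.
  console.log(denseForward([[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1], [1.0, 2.0]));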

It occurred to me that NNs ("AI") are indeed a bit like crypto, in the sense that both attempt to substitute compute for some human quality. Proof of Work and associated ideas try to substitute compute for trust[0]. Solving problems by feeding tons of data into a DNN is substituting compute for understanding. Specifically, for our understanding of the problem being solved.

It's neat we can just throw compute at a problem to solve it well, but we then end up with a magic black box that's even less comprehensible than the problem at hand.

It also occurs to me that stochastic gradient descent is better than evolutionary programming because it's to evolution what closed-form analytical solutions are to running a simulation of interacting bodies - if you can get away with a formula that gives you what the simulation is trying to approximate, you're better off with the formula. So in this sense, perhaps it's worth trying harder to take a step back and reverse-engineer the problems solved by DNNs, to gain that more theoretical understanding, because as fun as brute-forcing a solution is, analytical solutions are better.

--

[0] - Which I consider bad for reasons discussed many times before; it's not where I want to go with this comment.


Neural networks are not trained with evolutionary algorithms because those are very slow, especially for the millions or billions of parameters that NNs have. Instead, stochastic gradient descent is used for training, which is much more efficient.
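
As a toy illustration of the difference: each SGD step follows an analytically computed gradient instead of guessing random perturbations and keeping the lucky ones.

  // One-parameter toy model y = w*x with squared error; the gradient is exact,
  // so every step moves w in a provably useful direction.
  let w = 0;
  const lr = 0.1;
  const samples: Array<[number, number]> = [[1, 2], [2, 4], [3, 6]]; // targets follow y = 2x

  for (let epoch = 0; epoch < 100; epoch++) {
    for (const [x, y] of samples) {
      const grad = 2 * (w * x - y) * x; // d/dw of (w*x - y)^2
      w -= lr * grad;
    }
  }
  console.log(w); // converges to ~2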


> doesn’t it then mean that whatever process is encoded in the NN, it should both be possible to represent in some more efficient representation...?

Not if NNs are complex systems[1] whose useful behavior is emergent[2] and therefore non-reductive[3]. In fact, my belief is that if NNs and therefore also LLMs aren't these things, they can never be the basis for true AI.[4]

---

[1] https://en.wikipedia.org/wiki/Complex_system

[2] https://en.wikipedia.org/wiki/Emergence

[3] https://en.wikipedia.org/wiki/Reductionism, https://www.encyclopedia.com/humanities/encyclopedias-almana..., https://academic.oup.com/edited-volume/34519/chapter-abstrac...

[4] Though being these things doesn't guarantee that they can be the basis for true AI either. It's a minimum requirement.


Worth noting that background removal is built into Preview on macOS.


It’s also built into Safari and Photos on all their platforms, and it’s available as an API that can be called by any app:

https://developer.apple.com/wwdc23/10176


Huh, I've been copying background-removed subjects out of Preview and didn't realize there's a VisionKit API. Looks like it's quite easy to use too; I put together a quick-and-dirty script in a couple of minutes and it worked wonderfully:

  import AppKit
  import VisionKit
  
  @main
  struct Script {
    static func main() async {
      let image = NSImage(contentsOfFile: "input.heic")!
      let view = ImageAnalysisOverlayView()
      let analyzer = ImageAnalyzer()
      let configuration = ImageAnalyzer.Configuration(.visualLookUp)
      let analysis = try! await analyzer.analyze(image, orientation: .up, configuration: configuration)
      view.analysis = analysis
      let subjects = await view.subjects
      for (index, subject) in subjects.enumerated() {
        let subjectImage = try! await subject.image
        let pngData = NSBitmapImageRep(data: subjectImage.tiffRepresentation!)!.representation(
          using: .png, properties: [:])
        try! pngData?.write(to: URL(fileURLWithPath: "subject-\(index).png"))
        print("subject-\(index).png")
      }
    }
  }


Was searching for an equivalent for Linux, came across rembg.

https://github.com/danielgatis/rembg


"Therefore, the first run of the network will take ~300 ms and consecutive runs will be ~100 ms"

I only skimmed the article, but I don't think they mention the size of the image. 100 ms (about 10 fps) is not that impressive when you consider that you need to be roughly three times as fast for an acceptable video frame rate.
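
For reference, the warm-up cost is usually paid once per session; roughly like this with onnxruntime-web (the model file, input name, and shape below are made up, and the exact import path depends on the package version):

  import * as ort from 'onnxruntime-web/webgpu';

  const session = await ort.InferenceSession.create('segmentation-model.onnx', {
    executionProviders: ['webgpu'],
  });

  // First run compiles shaders and uploads weights (the ~300 ms), so do it once
  // up front with a dummy tensor of the right shape.
  const dummy = new ort.Tensor('float32', new Float32Array(3 * 1024 * 1024), [1, 3, 1024, 1024]);
  await session.run({ input: dummy });

  // Subsequent runs hit the warmed-up pipeline (the ~100 ms).
  console.time('inference');
  await session.run({ input: dummy });
  console.timeEnd('inference');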


> I only skimmed the article, but I don't think they mention the size of the image. 100 ms (about 10 fps) is not that impressive when you consider that you need to be roughly three times as fast for an acceptable video frame rate.

You don't need to be three times as fast for acceptable video frame rates in a video editor; you need a system that lets you cache "rendered" frames, so that when the user makes an edit it renders into this cache, and once that's done the user can play it back in real time.

This is essentially how all video editors handle edits on clips/video today. Some effects/edits can be applied in real time, but the more advanced ones (I'd say background removal is one of them) work with this kind of caching system.
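
A minimal sketch of that caching pattern (the decodeFrame/removeBackground functions here are hypothetical placeholders):

  // Cache expensively rendered frames by index; edits fill the cache slowly once,
  // playback then reads from it in real time.
  const frameCache = new Map<number, ImageBitmap>();

  async function getFrame(
    index: number,
    decodeFrame: (i: number) => Promise<ImageBitmap>,
    removeBackground: (f: ImageBitmap) => Promise<ImageBitmap>,
  ): Promise<ImageBitmap> {
    const cached = frameCache.get(index);
    if (cached) return cached;                                         // real-time playback path
    const rendered = await removeBackground(await decodeFrame(index)); // slow edit path, done once
    frameCache.set(index, rendered);
    return rendered;
  }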


As long as one uses a Chrome distribution.

WebGPU is at least a year away from being usable for cross-browser deployment.


> WebGPU is at least a year away from being usable for cross-browser deployment.

In Firefox it seems to be behind a feature flag, and Safari seems to have it in its "Technology Preview" (some sort of release candidate?), so it seems closer than I had thought.
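
In the meantime, the usual workaround is to feature-detect and fall back to the WASM backend; a rough sketch with onnxruntime-web (provider names and the import path depend on the package version):

  import * as ort from 'onnxruntime-web';

  // Prefer WebGPU where the browser exposes it, otherwise run on the WASM (CPU) backend.
  const providers = 'gpu' in navigator ? ['webgpu', 'wasm'] : ['wasm'];
  const session = await ort.InferenceSession.create('model.onnx', {
    executionProviders: providers,
  });
  console.log('running with', providers[0]);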


Firefox has had it behind a feature flag for at least a year now; Safari just announced the technology preview during the WWDC updates.

WebGL 2.0 took almost a decade to be fully supported, and it still has issues on Safari; don't expect WebGPU to be any faster.

Also note that Google is the culprit for why WebGL Compute did not happen. WebGPU was going to sort out all the problems, and even though they use DirectX on Windows, using Metal Compute on Apple platforms instead of OpenGL was apparently a big issue, and then they ended up improving ANGLE on top of Metal anyway.

Web politics.


ONNX is cool. The other option is TensorFlow.js, which I have found quite nice as a usable matrix lib for JS with shockingly good perf. Would love to know how well they compare.
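
A rough way to get a feel for the comparison is to time the same matmul in both; the tfjs half looks something like this (sizes arbitrary):

  import * as tf from '@tensorflow/tfjs';
  // import '@tensorflow/tfjs-backend-webgpu'; // optional WebGPU backend, if installed

  async function timeMatMul(n: number): Promise<number> {
    const a = tf.randomNormal([n, n]);
    const b = tf.randomNormal([n, n]);
    const t0 = performance.now();
    const c = tf.matMul(a, b);
    await c.data();              // forces the async computation to finish
    const ms = performance.now() - t0;
    tf.dispose([a, b, c]);
    return ms;
  }

  console.log(`1024x1024 matmul: ${(await timeMatMul(1024)).toFixed(1)} ms on ${tf.getBackend()}`);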


Also a shout-out to Taichi and GPU.js as alternatives in this space. I've also had success with Hamster.js, which 'parallelizes' computations using Web Workers instead of the GPU (who knows, in the future the two might be combined?).


"who knows, in the future the two might be combined?"

You can already combine both today, and I experiment with it. The problem is still the high latency of the GPU. It takes ages to get an answer, and the timing is not consistent. That makes all scheduling a nightmare for me when dividing jobs between the CPU and GPU. It would probably require a new hardware architecture to make use of that in a sane way, so that GPU and CPU are more closely connected. (There are some designs aiming for this, as far as I know.)

edit: you probably meant hamsterS.js


They are probably two different use cases. Parallelizing with Web Workers could be faster for algorithms that do a lot of branching (minimax comes to mind), but if you can vectorize (matmuls, for example), then the GPU probably dominates.
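
A toy sketch of the worker side of that trade-off: branch-heavy per-item scoring fanned out to Web Workers, the kind of workload a GPU kernel handles poorly:

  // Inline worker (via a Blob URL) doing branchy per-item work, e.g. minimax-style scoring.
  const workerSource = `
    onmessage = (e) => {
      let best = -Infinity;
      for (const x of e.data) best = Math.max(best, x % 3 === 0 ? x * 2 : -x);
      postMessage(best);
    };
  `;
  const workerUrl = URL.createObjectURL(new Blob([workerSource], { type: 'application/javascript' }));

  function scoreChunk(chunk: number[]): Promise<number> {
    return new Promise((resolve) => {
      const worker = new Worker(workerUrl);
      worker.onmessage = (e) => { resolve(e.data as number); worker.terminate(); };
      worker.postMessage(chunk);
    });
  }

  const data = Array.from({ length: 1_000_000 }, (_, i) => i);
  const halves = [data.slice(0, data.length / 2), data.slice(data.length / 2)];
  console.log(Math.max(...(await Promise.all(halves.map(scoreChunk)))));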


It would be cool to implement some of these in either library to see how they stack up. In the Hamster.js case, I am envisioning each worker having access to a separate GPU on your local machine, and having results come in asynchronously on the main thread. Massive in-browser simulations with access to existing JS visualisation packages would make real-time prototyping more feasible.


Interesting, there’s also a Node version in /packages.


Tried it, and it's absolutely half-baked. It doesn't accept its own typed config param, messes up its own internal URLs, and can't run from a non-project dir.

Although the segmentation quality is much better than that of `rembg`, the interface to it is just underwhelming. Update: nope, it's sharper, but it fails on different images at about the same rate.

gist: https://gist.github.com/sou-long/5c7cfee57f5399918c9072552af... (adapted from a real project, just for reference)


MS Teams does this already, right? (I assume they do, as it didn't work in Firefox until recently.)

Or do they do it server side?


I'm pretty sure they do it client-side. The latency on your video preview is non-existent.


[flagged]


this sounds like an LLM for sure.


Yes, using ONNX Runtime with WebGPU to remove backgrounds in web browsers is a real step forward in web-based image processing. ONNX Runtime is a fast engine for running machine learning models in the ONNX format, and because WebGPU is designed for web-based graphics and compute tasks, it offers performance close to native GPU programming.

It lets complex image processing tasks like background removal happen quickly and in real time. This approach can deliver up to a 20x speed boost compared to traditional CPU methods, which makes a real difference for interactive use.

Developers convert models trained for tasks like semantic segmentation to the ONNX format, and those models then run efficiently in the browser with ONNX Runtime and WebGPU-accelerated computation. This opens up advanced image processing capabilities that were previously limited to native apps, now available with standard web technologies.


If I run it in a browser on my client, why go to a website in the first place?


to resolve a short URL to a piece of software, I guess


This novel concept should have a catchy name…


why should i save software to disk when i can run it anytime i want in the browser? :p



