In addition to this video overview by Andrew Adams from the Stanford course [1], there's also an MSR talk given by Jonathan Ragan-Kelley [2], and there may also be a Google TechTalk somewhere on YouTube (I don't remember for sure, so just search for "Halide" or for one of the authors' names).
And of course there are also the videos of the lectures from the MIT 6.815/6.865 course on computational photography taught by Frédo Durand (those videos are linked from the Halide home page under the Getting Started section [3]).
[2] Decoupling Algorithms from the Organization of Computation for High-Performance Graphics & Imaging (MSR talk given by Jonathan Ragan-Kelley) [video] https://www.youtube.com/watch?v=dnFccCGvT90
The funny thing is that YouTube itself has been using Halide to process all uploaded videos, possibly since as early as when that talk was posted.
And anyone using Google Camera has been running Halide code, from Pixel phones all the way back to Glass, which took decent photos despite hardware that, due to the form factor, was less powerful than the Galaxy Nexus released a year and a half earlier.
Halide seems like it might have immense potential for fairly general applications in scientific computing, but all of the documentation, tutorials and examples seem dominated by image processing. Could someone elaborate on the limitations of Halide with respect to applications like finite difference and finite element methods, lattice Boltzmann simulations, etc.? These are all ultimately reducible to basic linear algebra and stencil kernels, which seem well suited to the design of Halide. Should computational scientists give it a serious look, or am I missing something?
Halide is generally good for pipelines that do math on multidimensional arrays, so quite possibly. It has a property that I think would make it hard to use for some applications in scientific computing though - while you can mutate in-place, you can't mutate an array ("Func" in Halide) after it has been used by another pipeline stage. This means that the dependency graph between your pipeline stages has no cycles except for self-cycles, which lets us do lots of clever stuff like bounds inference to guarantee the program is correct regardless of the schedule. I can imagine things like flow simulation having a hard time with that constraint. In the deep learning context this means that CNNs are ideal, but RNNs can be awkward or impossible.
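To make that concrete, here's a minimal sketch using Halide's C++ API (the Func names are mine, for illustration). Update definitions provide the self-cycles; everything else has to stay a DAG:

    #include "Halide.h"
    #include <cstdio>
    using namespace Halide;

    int main() {
        Var x, y;
        Func producer, consumer;

        producer(x, y) = x + y;
        // Update definition: the legal kind of "self-cycle". producer
        // reads and rewrites its own values in place.
        producer(x, y) = producer(x, y) * 2;

        // consumer reads producer. Once a stage has been consumed like
        // this, giving producer a definition that depends on consumer
        // would create a cycle between stages, which the model forbids.
        consumer(x, y) = producer(x, y) + 1;

        Buffer<int> out = consumer.realize({8, 8});
        printf("out(3, 4) = %d\n", out(3, 4));
        return 0;
    }

The acyclic stage graph is exactly what lets the compiler infer the bounds of every stage from the region of output requested, independent of the schedule.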
Andrew: have you considered adding something akin to Rust's lifetimes here? While you'd have restrictions on the schedule, it seems like it'd still let you express, as tightly as possible, who in the "graph" messes with the data.
Among other things, Halide provides first-class support for latency-hiding schedules (tiling, fusion, vectorization, prefetching) that generic compilers can't derive automatically, which lets you generate highly parallel CPU/GPU kernels whose memory access costs are largely overlapped with computation.
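For a sense of what that scheduling control looks like, here's a sketch of the classic two-stage blur from the Halide tutorials (the stage and variable names are the usual tutorial ones, not anything specific to this thread). The algorithm stays fixed while the schedule changes the memory behavior:

    #include "Halide.h"
    using namespace Halide;

    int main() {
        ImageParam input(Int(32), 2);
        Var x, y, xi, yi;

        Func blur_x, blur_y;
        blur_x(x, y) = (input(x, y) + input(x + 1, y) + input(x + 2, y)) / 3;
        blur_y(x, y) = (blur_x(x, y) + blur_x(x, y + 1) + blur_x(x, y + 2)) / 3;

        // Schedule: compute the output in cache-sized tiles, compute
        // blur_x just inside each tile so its intermediate values never
        // round-trip to main memory, then vectorize and parallelize.
        blur_y.tile(x, y, xi, yi, 256, 32)
              .vectorize(xi, 8)
              .parallel(y);
        blur_x.compute_at(blur_y, x)
              .vectorize(x, 8);

        blur_y.compile_jit();
        return 0;
    }

The point is the last few lines: edit the schedule and the same algorithm can go from memory-bound to compute-bound without touching the math.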
It works but still needs a bit of cleanup. I was planning to get back to it after an upcoming conference deadline. Here's the PR: https://github.com/halide/Halide/pull/3220
Please stop doing this; it is sad to see flame wars being started on threads that belong to Halide. TVM benefited a lot from its Halide ancestry, and we are deeply grateful for that.
I personally made a mistake early on, which I deeply regret: not adding a clear citation to Halide in parts of our codebase where we reused useful hacks introduced in the Halide codebase. We fixed that issue a year ago.
The two projects now have very different design priorities and focus. We as a community should not hijack the thread that belongs to Halide.
Engineering and design involve tradeoffs; whether one choice is better than another is quite subjective and has to be put into context.
We would certainly welcome healthy discussion, and I would be more than happy to talk about the different design choices we made and why some make sense for one project but not the other, given the different goals. That discussion would be better had in issues or forums, though, not an HN thread.
As ever, it's unclear what "substantially more advanced" means, and to cherry-pick just some of the biggest users, there's a large and growing amount of Halide code in Android, Photoshop, and numerous Google services. I'll let billions of devices and tens of thousands of servers speak for themselves.
You again? The only other time your account has posted was to plug TVM in a different post on Halide. If you're not a Tianqi Chen sockpuppet, then you should apologize to him for making him look like an asshole.
For the benefit of others not aware: TVM took Halide, deleted the half of it they didn't understand, then reimplemented it (copy-pasting large chunks without credit, hacks and all) and claimed it was a revolutionary new deep learning IR. People who already knew Halide were left scratching their heads unable to tell how this wasn't just Halide re-marketed in a different community.
Halide doesn't currently have good ways to handle sparse matrices, unless they have some sort of special structure that lets you pack them into rectangular shapes (e.g. a constant number of non-zero entries per row). It would be an interesting language extension. The big thing to add would be a way to reduce over the non-zero entries more efficiently than checking them all.
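For the rectangular-packing case that does work today, here's a rough sketch of a sparse matrix-vector product over an ELLPACK-style layout. The buffer names and the fixed non-zeros-per-row count K are my assumptions for illustration, not anything from the thread:

    #include "Halide.h"
    using namespace Halide;

    int main() {
        const int K = 8;  // assumed fixed number of non-zeros per row

        ImageParam values(Float(32), 2);  // values(k, row): non-zero values
        ImageParam cols(Int(32), 2);      // cols(k, row): column index of each value
        ImageParam vec(Float(32), 1);     // dense input vector
        Param<int> n;                     // length of the dense vector

        Var row;
        RDom k(0, K);

        // Clamp the gathered index so bounds inference can reason about
        // the data-dependent access into vec.
        Expr c = clamp(cols(k, row), 0, n - 1);

        Func spmv;
        spmv(row) = 0.0f;
        spmv(row) += values(k, row) * vec(c);

        spmv.update().parallel(row);
        spmv.compile_jit();
        return 0;
    }

Note the RDom reduction still visits all K slots per row; the extension mentioned above would be about skipping non-zeros more cleverly than checking every slot.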
If the creator of that page is reading this, please fix the color of the code example text. Right now it's a dark shade of gray that is hard to distinguish from the background until syntax highlighting kicks in. While you're at it, maybe figure out why it takes almost ten seconds for the syntax highlighting to kick in.
Sorry, that was a poor choice in the default Bootstrap style we adopted years ago. Under normal conditions the styles kick in within a second or so, but I've fixed it nonetheless.
Here is a relevant video from the creator. It has some fun illustrations to help show what's going on.