Hacker News
Animated AI (animatedai.github.io)
706 points by qwertyforce 10 months ago | hide | past | favorite | 63 comments



Nicely designed. Here is another visualizer for CNNs, from research at Georgia Tech:

https://poloclub.github.io/cnn-explainer/

Another link to various visualization tools: https://github.com/ashishpatel26/Tools-to-Design-or-Visualiz...

Another one: https://playground.tensorflow.org/



Thanks for sharing! I especially like the "Understanding Hyperparameters" widget in the poloclub article. I built something similar that people might find helpful[1].

[1]: https://static.laszlokorte.de/conv2d/


Excellent, thanks for sharing. These can be such good ways to learn and internalize intuition (and of course teach others!).


Thanks for sharing this as well! Love Georgia Tech!


There used to be a video on YouTube, back in the day, that explained each layer of a neural network visually.


Great use of colors. Initially I thought these were AI-made, and that the animations were examples of its output.

The fact that they aren't really shows the effort put into this. These are great animations.

Thanks for sharing.

* And the videos on the YT channel are really worth watching.



Hi! I'm the creator of the site. I'd just like to say thank you for posting this, and to everyone who took the time to visit the site and/or my YouTube channel! Seeing people get value out of my content is what really makes the effort worthwhile.


This is a nice project, but please don't hit people with 100+ MB of GIF images without a warning.


It's a page about animations, so it's obviously going to be more media-heavy than your average page. And the average random website is decently large anyway.

NYT is 11 MB, WaPo is 22, and scrolling down once on Reddit is like 40 MB.

100-and-something MB doesn't warrant a pre-warning for a page specifically promising animations.


If the videos were mp4/webm, the whole page would be about 1/10th the size. The GIF format was never intended for this kind of use. (And yes, thanks for pointing out several other horribly bloated websites; luckily, browsing with JS disabled cuts them down to a few MB at most, like they should be.)


I was worried about my blog entries being too heavy. I kept them under 3MiB each.

Then I made one where I realised it really needed some rather big videos for illustration. It ended up being 43MiB and... After thinking about it for a while I decided that was fine. The few people interested in my work are unlikely to even notice the download.


What are you browsing on, a blackberry bold?


Animation doesn't have to imply huge. The animations on this page could be done with a few kilobytes of GL shader.


It should be tagged "56k killer", like in the good old days!


Hi! I'm the creator of the site. Thanks for pointing this out! The number of animations has grown over time, and I didn't realize how big the size has gotten. Do you have an idea in mind for how you'd like the site to behave? E.g., static images that play when you click/tap on them? Sections that are hidden until you expand them? Other ideas? Thanks!


134 MB to be exact.


How about you Americans stop bending over for monopolies that make you worry about such trivial things in 2023?


I think this might be less of an issue for HN browsing American nerds (who likely have great Internet connections) and more of an issue for people in developing countries or with poor network infrastructure.


I mean, you can stream Netflix in rural Cambodia without worrying about data limits, so I'm not so sure. Of course someone will be able to name somewhere this is a concern, but then you should be running browser extensions to manage media loading anyway.


Actually, it's the reverse nowadays: many third-world countries have cheaper, faster internet than much of the US.


I love the cynicism, but if we're looking at median speed, the US is still among the best: 11th for broadband, 26th for mobile. Punching below its weight class, for sure, but not exactly worse than developing countries. The only exceptions are Thailand, Hungary, and Romania IMO. Props to them!!

https://en.wikipedia.org/wiki/List_of_sovereign_states_by_In...


I'd just like to politely point out that Hungary is absolutely miles away from being a developing country, as is Romania, and these days Thailand doesn't even count either for the most part.


Very fair! I guess I'd consider them somewhat "third world", though obviously only in the "not Western Europe, the Commonwealth, the US, or East Asia" sense. Which isn't exactly a great metric, but it's what a lot of Americans mean by that term lol


I mean, a brunch in Budapest will set you back more than one in Berlin, the coffee is the same price as in London, etc.

It's not like the ex-Eastern-Bloc of the 90s anymore.

Some travel to update your view of the world is in order, my friend.


I might sound like an entitled westerner but I don't think I'd even open a web browser and just click around on the internet if I had a capped or billed internet connection. At least not with auto-downloading content like images and video enabled.


I keep hearing this argument, and still not sure which countries people are referring to. Maybe rural areas of US/Canada or other large countries? Just curious about the real affected number of people.


The only places without great infrastructure in 2023 are those being bombed.


Beautifully done. It reminds me of these wonderful 3D animated explainer videos https://www.youtube.com/@animagraffs.


This is really cool. Thanks! Just subbed.


I made my own animations once upon a time using manim, not as shiny but might be helpful too

https://www.jerpint.io/blog/cnn-cheatsheet/


I'm excited to see attention layers animated like this. I feel like I'm this close to grasping them.


I still haven't found "that one visualisation" that makes the attention concept in Transformers as easily understood as these CNNs.

If someone here on HN has a link to a page that has helped them get to the Eureka-point of fully grasping attention layers, feel free to share!


I found this video helpful for understanding transformers in general, but it covers attention too: https://www.youtube.com/watch?v=kWLed8o5M2Y

The short version (as I understand it) is that you use a neural network to weight pairs of inputs by their importance to each other. That lets you get rid of unimportant information while keeping what actually is important.
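To make that concrete, here's a minimal scaled dot-product attention sketch of my own in numpy (toy shapes, nothing from the linked video): every query is scored against every key, the scores are normalized into weights, and the output is a weighted average of the values.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Score each query against each key, scale by sqrt(dim),
    # normalize per query, then average the values by those weights.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 tokens, 8-dim queries
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out, w = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

The weight matrix `w` is exactly the "importance of each input pair" from the description above: row i says how much token i attends to every other token.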


Hi! I'm the creator of the site. Good news: I'm currently working on animations and an explainer video on transformers and self-attention. The best way to be notified is probably to subscribe to my YouTube channel and hit the bell icon for notifications.


You mean you would be excited to see attention animations? The page presents convolutions not attention.


For interactive articles on specific AI algorithms, check out Amazon's mlu-explain:

https://mlu-explain.github.io/


The later slices remind me of a gifted test I failed, because I rabbit holed on dividing the shapes...


These are great! Would love it if you had a section on RNNs / Transformers. I’d even pay for it.


I've often wished the pandas documentation had animations like this. The groupby/split-apply-combine pipeline could probably be explained in one 10-sec clip.
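For what it's worth, the whole pipeline fits in a few lines (toy data of my own, just to illustrate split-apply-combine):

```python
import pandas as pd

df = pd.DataFrame({
    "team": ["a", "a", "b", "b", "b"],
    "score": [1, 2, 3, 4, 5],
})

# split: rows are partitioned by team
# apply: the mean is computed within each partition
# combine: results come back as one row per group
means = df.groupby("team")["score"].mean()
print(means)  # a -> 1.5, b -> 4.0
```

An animation would presumably show the rows physically separating into per-group piles, collapsing, and reassembling, which is the part the text docs struggle to convey.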


Ah, it's the convolutions part. It's a very nice visual demonstration.

I'll subscribe on YT. Would be neat to see the other parts: attention and so on.


Very cool! Now Conv3D...

I always wonder how to present a realistic Conv3D; with channels it's actually 4D, which makes it an interesting challenge.
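To make the "actually 4D" bookkeeping concrete, here's a naive numpy sketch of my own (a toy, not how any framework implements it): the kernel carries a channel axis on top of its three spatial axes, and each output voxel sums over all four.

```python
import numpy as np

def conv3d_single(x, w):
    """Naive 3D convolution producing one output channel.
    x: input volume of shape (C, D, H, W) -- channels plus 3 spatial dims
    w: kernel of shape (C, kD, kH, kW) -- the "actually 4D" part
    """
    C, D, H, W = x.shape
    _, kd, kh, kw = w.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for d in range(out.shape[0]):
        for h in range(out.shape[1]):
            for i in range(out.shape[2]):
                # Multiply the kernel against the matching patch and
                # sum over channels and all three spatial axes.
                out[d, h, i] = (x[:, d:d+kd, h:h+kh, i:i+kw] * w).sum()
    return out

x = np.random.default_rng(1).standard_normal((3, 6, 6, 6))
w = np.random.default_rng(2).standard_normal((3, 2, 2, 2))
out = conv3d_single(x, w)
print(out.shape)  # (5, 5, 5)
```

With multiple output channels the weights become 5D, which is probably why a "realistic" Conv3D animation is so hard to stage.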


Excellent visuals for ELI5 on complicated subjects. I especially like the use of different colors and 3D animations.


Love visualizations like this, bravo. There is a mental stickiness to animations like this for me personally.


These are sweet - does anyone know how these animations are being generated?


Pretty sure these were animated in Blender. This feels pretty doable with Geometry Nodes, and the particular look of the blurred reflections in the floor trips my “it’s Blender” sense.


You are spot-on. In one of his videos, the author mentions that he used Blender's Geometry Nodes: https://www.youtube.com/watch?v=w4kNHKcBGzA&t=190s


The author says in his videos that the images on the web are wrong, but then his explanation of why is that they're missing details, mostly because they aren't 3D. Isn't that incomplete rather than wrong?


Incomplete is a strict subset of wrong. That doesn't mean it's not useful.


These are excellent! Can't wait to share them with some art folks I've been trying to explain AI processes to, and I can't wait to see more! Bravo :^)


Cool idea and I like the format. Transformer circuits and attention would be a cool thing too.


Congratulations. Great material, great site. Keep up the good work.


This is super good-looking content. I also love his YouTube content.


Pixel Shuffle is so satisfying :)


Very nice illustrations.


Shameless plug for a recent blog post of mine where I try to explain ML convolutions, https://jlebar.com/2023/9/11/convolutions.html


Not very helpful to show blocks of only one color...


Hi there, I realize this is already dumbed down, "without the numbers". Is there an even more basic "without the numbers" prerequisite I should look at first? Thanks!


According to this page, scaling an image down is "AI".


The name of the site is misleading. A more accurate one might be "Animated CNN architecture diagrams". You need to know a bit about neural networks to make sense of these images. Watch the accompanying video, and compare with the images here: https://en.wikipedia.org/wiki/Convolutional_neural_network


Click through to the videos and you'll see that these are supporting animations that form part of explanations of specific algorithms.


I can imagine kindergartens using Legos to teach children: "And this, children, is how multi-head attention works." Matrix algebra as used in AI is a very good fit for geometric visualizations. But in the end, it doesn't explain why it works so well or so human-like. A valuable kindergarten lesson, though.



