GPU workloads are either compute bound (floating point operations) or memory bound (bytes transferred across the memory hierarchy).
Quantization generally helps with the memory bottleneck but does not reduce computational cost, so it's less useful for improving the performance of diffusion models, which tend to be compute bound. That's what it's saying.
Exactly. The smaller bit widths from quantization may marginally decrease the cost of each operation, but they do not reduce the overall number of operations. So quantization generally has a bigger impact on memory use than on compute.
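A back-of-the-envelope sketch of the point above, using a hypothetical matmul layer shape: the operation count is unchanged by precision, while the bytes of weight traffic scale with the bit width.

```python
def matmul_cost(m, n, k, bytes_per_weight):
    """Rough cost model for an (m x k) @ (k x n) matmul."""
    flops = 2 * m * n * k                     # multiply-accumulates; unchanged by precision
    weight_bytes = n * k * bytes_per_weight   # weight traffic scales with bit width
    return flops, weight_bytes

# Hypothetical layer shape, just for illustration.
fp16 = matmul_cost(4096, 4096, 4096, 2)  # 2 bytes per weight
int8 = matmul_cost(4096, 4096, 4096, 1)  # 1 byte per weight

assert fp16[0] == int8[0]        # FLOPs identical
assert int8[1] * 2 == fp16[1]    # weight traffic halved
```

So int8 quantization halves the memory moved but leaves the FLOP count untouched, which is why it mainly helps memory-bound workloads.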
Altair is superb. I have used it a lot and it has become my default visualization library. It works in VSCode and JupyterLab. The author has a great workshop video on YouTube for people interested in Altair. I especially like the ability to connect plots with each other, so that things like selecting a range in one plot change the visualization in the connected plot.
One possible downside is that it embeds the entire chart dataset as JSON in the notebook itself, unless you use server-side data tooling, which is possible with additional data servers. I have not used that, so I can't say how effective it is.
For simple plots it's pretty easy to get started, and you can do pretty sophisticated inter-plot visualizations as you get better with it and understand its nuances.
Awesome to hear that you like Vega-Altair. With the recent VegaFusion integration you don't need to embed the data in the notebook anymore, and I've found Altair to scale quite well. Give it a shot.
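For anyone curious, enabling it is a one-line config change (assuming the optional `vegafusion` package is installed):

```python
import altair as alt

# Assumes `vegafusion` is installed alongside Altair.
# With this transformer enabled, transforms run in VegaFusion instead of
# the full dataset being embedded as JSON in the notebook.
alt.data_transformers.enable("vegafusion")
```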
Yes, on a related note, Neovim just got support for WASM plugins, and according to the author WASM is roughly 100% faster than Lua (Neovim's default plugin language runtime) for this use case. So plugins in any language that can be compiled to WASM are now possible.
I found this very helpful when switching to nvim recently. Kudos to the author for having the nvim config on GitHub and making videos explaining how he set it all up:
I have been using oil-infused ceramic cookware from Calphalon (Classic). The handles are made of stainless steel, which stays cool despite all the cooking. Quite convenient. It works very well as a replacement for the Teflon-based non-stick cookware I used to use. It's not perfect, but using a bit of butter or oil when cooking gives good results.
This is the best solution I have found so far as a long-time vim user. ssh + tmux + vim is all you need, with the least potential for problems. vim runs directly on the server where the "remote development" is being done, so there is very little friction in getting it to do what you'd like, given all the plugins you can add to beef it up.
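The whole workflow fits in a couple of commands; the host and session names here are hypothetical:

```shell
# Connect to the server (hypothetical host name).
ssh dev-box

# Start a session named "work", or reattach if it already exists (-A).
tmux new-session -A -s work

# Run vim, builds, etc. inside the session. If the connection drops,
# everything keeps running; just ssh back in and run the same
# `tmux new-session -A -s work` to pick up exactly where you left off.
```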
I have heard stories from coworkers about work being lost when editing is done remotely instead of locally, so never ventured that way.
I'm definitely team emacs -nw, but I use the same tmux setup. Tmux's ability to reconnect to a session after a disconnect is definitely its killer feature.
For a lot of things such as tables, lists, adding links, etc., Markdown lets you do it "inline" while typing, instead of forcing out-of-band operations.
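For example, a link, a list, and a small table can all be typed inline as you go (standard Markdown syntax):

```
Here is [a link](https://example.com) typed inline.

- a list item
- another item

| col A | col B |
|-------|-------|
| 1     | 2     |
```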
If space colonization leads to mining resources from asteroids and offloading the environmental costs of that mining from Earth, or gets us building materials strong enough for a space elevator, or space factories that send their waste into the sun, you might not be saying this.