arriu's comments | Hacker News

Blender is incredible, harfbuzz equally so, and pretty much all modern browsers do the impossible, so they need to be included as well.


Interesting, where can I read more?


Try (best title ever) “Slow Death by Rubber Duck” by Rick Smith and Bruce Lourie. Excellent book, although they frame their enquiry a little too much like “Supersize Me”, and I'm not sure that was the best approach.


The subtitle "The Secret Danger of Everyday Things" really ties the room together.


Awesome! Thanks for putting this together.

Is there any way to get non-processed image data? Without the nighttime adjustments?


You can grab the source data from EUMETSAT directly - the URLs are all listed in https://github.com/matteason/live-cloud-maps/blob/3be662483c...

The products used are:

"Geostationary Ring IR10.8 μm Image - Multimission" - https://navigator.eumetsat.int/product/EO:EUM:DAT:0330

"Geostationary Ring Dust RGB - Multimission" - https://navigator.eumetsat.int/product/EO:EUM:DAT:0334

"Geostationary Ring Natural Colour RGB - Multimission" - https://navigator.eumetsat.int/product/EO:EUM:DAT:0336


Having done both web and gl ux for a living, I think you might be overestimating the complexity of a gl implementation and underestimating the complexity of meeting the same specs using a web implementation.

While not quite the same thing, if you have the time, dip your toes into some immediate-mode UI, for example imgui. It is enjoyable, not a grind.
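To give a flavour of what "immediate mode" means: widgets are just function calls you make every frame, and your application state lives in ordinary variables. Here's a toy sketch of the pattern in Python - the names (`Frame`, `button`, `text`) are made up for illustration, not imgui's real API:

```python
# Toy illustration of the immediate-mode UI pattern (the idea behind
# libraries like Dear ImGui). There is no retained widget tree to keep
# in sync: widgets are declared fresh each frame as plain function calls.

class Frame:
    """Collects the widgets declared during one simulated frame."""
    def __init__(self, clicked_widget=None):
        self.clicked_widget = clicked_widget  # simulated input event
        self.drawn = []

    def button(self, label):
        # Declare a button and immediately report whether it was clicked.
        self.drawn.append(("button", label))
        return label == self.clicked_widget

    def text(self, s):
        self.drawn.append(("text", s))

# Application state is just a variable - no callbacks, no data binding.
count = 0
for clicked in [None, "increment", "increment", None]:  # four fake frames
    ui = Frame(clicked_widget=clicked)
    ui.text(f"count = {count}")
    if ui.button("increment"):
        count += 1

print(count)  # -> 2
```

The appeal is exactly this directness: the UI is re-declared from current state every frame, so there's no synchronisation problem.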


Oh yeah, sure my friend. You're going to implement

  * text entry (right to left as well as left to right)
  * file browser
  * treeviews that scale to 50k nodes or 10x that
  * menus
in GL in a month. Not believable.


Never did they ever say they would implement the UI of Blender (I assume that's what you were referring to) using OpenGL. I think what they meant was that implementing just a UI in OpenGL isn't as hard as the other guy thought.


Do you think wasm has any chance of replacing the whole html/css stuff with just webgpu in a canvas? I have been playing around a bit with wgpu in rust, and I can compile the same project either as a native binary or as a .js that just renders to the browser. It seems to work pretty well. Photoshop seems to be runnable in the browser now, and I've seen a lot of other cool stuff, but things like fluid simulations seem to still be very laggy.


Rendering everything into a canvas will realistically mean total lack of accessibility features. Also you won't be able to use the DOM inspector. I don't think that would be an improvement at all. If one could work with the DOM via Wasm (plus source maps so you can still use the debugger) it might be something.


> webgpu in a canvas

Internally that's what Chrome/Firefox do: they render much of html/css on the GPU using OpenGL/DirectX/Vulkan/Metal.

> Do you think wasm has any chance of replacing the whole html/css stuff with just

No, you also need the DOM to enable frameworks like React which are used by a large number of sites.


Are amd GPUs still to be avoided or are they workable at this point?


The CUDA happy path is very polished and works reliably. The amdgpu happy path fights you a little but basically works. I think the AMD libraries starting to be packaged in Linux distributions is a big deal.

If you don't want to follow the happy path, on Nvidia you get to beg them to maybe support your use case in future. On amdgpu, you get the option to build it yourself, where almost all the pieces are open source and pliable. The driver ships in Linux. The userspace is on GitHub. It's only GPU firmware which is an opaque blob at present, and that's arguably equivalent to not being able to easily modify the silicon.


AMD GPUs work great, the issue is that people don't want to mess with ROCm/HIP when CUDA is kind of the documented workflow. Along with the fact that ROCm was stagnant for a long time. AMD missed the first AI wave, but are now committed to making ROCm into the best it can be.

The other problem is that there aren't any places to rent the high-end AMD AI/ML GPUs, like the MI250s and the soon-to-be-released MI300s. They are only available on things like the Frontier supercomputer, which few developers have access to. "Regular" developers are stuck without easy access to this equipment.

I'm working on the latter problem. I'd like to create more of a flywheel effect: get more developers interested in AMD by enabling them to inexpensively rent the hardware and develop on it, which will create more demand. @gmail if you'd like to be an early adopter.


At that rate I’d run my own model


As a Canadian, I can confirm 12 months is not enough, let alone 6 weeks… I am deeply saddened at the thought of mothers having to go back after so little time.


18 months doesn’t seem like much either


Government should pay me 100% of wages for 18 years!


Why do you think that was an acceptable comment to make? Read the room.


Isn’t that the ideal? Not letting work interfere with raising your kid?


I'm sure I'm not the only person thinking 24 years is way too long for such an important advancement.

Huge respect for those in this field or others that don’t give up after so many years. Thank you


24 years is nothing in the scale of the universe

If it increases entropy as much as many suspect and it only took 1/3 of a couple humans' lives to open that phase space, the Universe has done what it wanted - to hasten heat-death.


Interesting angle — but if heat death were possible it would have happened by now.


Huh? Heat death of the universe is going to take an incomprehensibly large amount of time. Like, 10^106 years.

The universe is A LOT younger than that.
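For scale, a quick back-of-the-envelope (10^106 years is just one commonly quoted order-of-magnitude estimate; the current age of the universe is roughly 1.38 × 10^10 years):

```python
from math import log10

age_of_universe = 1.38e10  # years, roughly
heat_death_log10 = 106     # log10 of a commonly quoted heat-death timescale

# Work in log10 space, since 10**106 is far outside normal float range
# for ratios. This gives how many orders of magnitude younger the
# universe is than the heat-death estimate.
orders = heat_death_log10 - log10(age_of_universe)
print(round(orders))  # -> 96
```

In other words, the universe so far has covered about one part in 10^96 of that timescale.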


As Hawking once explained, “Since events before the Big Bang have no observational consequences, one may as well cut them out of the theory, and say that time began at the Big Bang.” When cosmologists talk about the universe and its age, it seems to me, as a non-cosmologist, that they’re using terms of art related to their models.

Hawking’s explanation deduces that if the observable universe expanded from a singularity, we would be unable to meaningfully theorize what happened before then, since it would be beyond any form of observation to test the theory. Therefore, a scientific model rooted in observation can describe nothing earlier than the Big Bang.

However, not everything unseen is untrue. If a singularity were to form somewhere in Andromeda tomorrow — in all likelihood, one will — we would still have existed today.

Edit: The initial comment was meant as a lighthearted reply to the universe personification, but I ended up sensing a need to explain the reasoning.


It's not "personification", it's the universe tending toward entropy increasing overall. I don't think I've heard anyone claim that heat death "should have happened" as an argument against it, or what it's supposed to mean in reference to the original post.

There is a singularity in Andromeda, so I don't know why one forming matters.


Please read up on heat death before continuing to share baseless information.

https://en.wikipedia.org/wiki/Heat_death_of_the_universe


One thing I wish there was better support for is live debugging and stepping through code.


You know about VS Code and GoLand?


I absolutely love the integration in VS Code with the official Go extension. I can debug a running web server with delve with minimal config. Same for tests. Just experiment with the options; there are quite a lot, and unfortunately some, like the gopls ones, are not very well documented, at least last time I checked.
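For anyone curious what "minimal config" looks like: a basic launch.json for the Go extension is roughly the following (adjust `program` to your entry point; the field values shown are the common defaults):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug server",
      "type": "go",
      "request": "launch",
      "mode": "debug",
      "program": "${workspaceFolder}"
    }
  ]
}
```

With that in place, breakpoints and stepping work in the editor, with delve doing the work underneath.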


This is definitely the nicest all-in-one that I've seen yet. I guess there is a market for this somewhere, but I don't get who it's for.

At 45k though, they really should have used a mount that doesn’t suffer from field rotation on longer exposures.

Also, the tube seems long; I would have gone with a more compact OTA design and a wider aperture.

I’m happy to see the full frame sensor with 16bit ADC, seems no expense was spared there.

