WebGPU hype is pretty cringe. Even if you want to run in a browser (and most customers don’t), most game engines can still bake WebAssembly/WebGPU packages.
I think the main feature that's exciting for me is the GPGPU potential.
Even just the ability to accelerate LLMs in the browser, on any device, without an installation, is awesome.
For example, fleetwood.dev has a really cool project that does audio transcription in the browser on the GPU: https://whisper-turbo.com/#
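For anyone wondering what the GPGPU side actually looks like, here is a minimal sketch of a WebGPU compute pass in TypeScript, assuming a WebGPU-capable browser and the WebGPU type definitions; the "double every element" kernel is just a placeholder, and real projects like whisper-turbo ship far larger WGSL kernels:

```ts
// Minimal WebGPU compute sketch (standard WebGPU API; the kernel is a placeholder).
async function runComputeDemo(): Promise<void> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU not available in this browser");
  const device = await adapter.requestDevice();

  // WGSL compute shader: one storage buffer, 64 threads per workgroup.
  const shader = device.createShaderModule({
    code: `
      @group(0) @binding(0) var<storage, read_write> data: array<f32>;

      @compute @workgroup_size(64)
      fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        data[id.x] = data[id.x] * 2.0; // placeholder kernel
      }
    `,
  });

  // Upload 1024 floats into a storage buffer.
  const input = new Float32Array(1024).fill(1);
  const buffer = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(buffer, 0, input);

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module: shader, entryPoint: "main" },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });

  // Record and submit a compute pass: 1024 / 64 = 16 workgroups.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(input.length / 64);
  pass.end();
  device.queue.submit([encoder.finish()]);
  // (Reading results back would need a MAP_READ staging buffer, omitted here.)
}
```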
Hah, that was me ~12 years ago trying to get WebCL (OpenCL) through the same gatekeepers. Meanwhile, in Python, we are doing multi-node, multi-GPU. Maybe OpenAI's and soon Apple's success with LLMs will change the economics for them.
This is why I don't like Khronos APIs, even though those are the ones I know relatively well. The way they work ends up being a much worse experience than writing backend-specific plugins ourselves, with much better tooling, and the extension spaghetti ultimately doesn't save us from multiple code paths anyway, given the differences between some of those extensions.
To pick up on your example, something like PyTorch, much like a game engine, ends up being a much better developer experience than relying on Khronos APIs directly.
I just went to your website and then visited your linkedin profile.
Respectfully, you might be struggling with a case of undiagnosed schizophrenia and bipolar disorder. Please reach out to a psychiatrist - you can find one on Psychology Today; that's where I found my psych.
If you are not insured you can apply for BMR sessions.
Hey, I’m sure you mean well, but maybe contact the person directly rather than diagnosing them in public, which could be counterproductive. Again, I’m sure you mean well.
A genius tells a factual truth; a schizophrenic lies and is delusional. Usually, both types are highly intellectual. To an uninitiated external observer, there may be no difference between the two, because the observer may not yet be aware of the truth.
I think there are enough genuine use cases. People are saving time using AI tools. There are a lot of people in office jobs. It is a huge market. Not to say it won't overshoot. With high interest rates, valuations should be less frothy anyway.
Right now they’re shoveling “potential”. LLMs demonstrate capabilities we haven’t seen before, so there’s high uncertainty about the eventual impact. The pace of progress makes it _seem_ like an LLM “killer app” could appear any day, creating a sense of FOMO.
There's also the race to "AGI" -- companies spending tens of billions on training, hoping they'll hit a major intelligence breakthrough. If they don't hit anything significant, that will (mostly) have been money down the drain, but Nvidia made out like a bandit either way.
I can’t think of any software/service that’s grown more in terms of demand over a single year than ChatGPT (in all its incarnations, like the MS Azure one).
I don’t know what you’re talking about. I use ChatGPT extensively, probably more than 50 times a day. I am extremely excited for anything that can top the already amazing thing we have now. They have a massive paying customer base.
100%. ChatGPT is used heavily in my household (my wife and I both have paid subscriptions) and it’s absolutely worth it. One of the most interesting things for me has actually been watching my wife use it. She’s an academic in the field of education and I’ve seen her come up with so many creative uses of the technology to help with her work. I’m a power user too, but my usage, as a software engineer, is likely more predictable and typical.
- Writing: emails, documentation, marketing. I write a rough, unstructured skeleton of the information, add a prompt about the intended audience and purpose, and possibly ask it to add some detail.
- Coding: especially things like "Is there a method for this in this library?" - a lot quicker than browsing through documentation. Also errors - I copy-paste the error from the console, maybe with a little context, and quite often I get the solution.
And API-based:
- Support bot
- Prompt engineering replacing text models that would normally require weeks or months of labeling, training, and evaluation. A couple of use cases follow the same pattern: unstructured text plus a prompt as input, JSON as output (rough sketch below).
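To make that last pattern concrete, here's a rough sketch using the OpenAI Node SDK in TypeScript; the model name, the schema fields, and the `extractTicket` helper are placeholders for illustration, not our actual setup:

```ts
// Rough sketch of "unstructured text + prompt in, JSON out".
// Model name and output fields are placeholders.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function extractTicket(rawText: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",                      // placeholder model
    response_format: { type: "json_object" },  // ask for JSON output
    messages: [
      {
        role: "system",
        content:
          'Extract {"product": string, "severity": "low"|"medium"|"high", ' +
          '"summary": string} from the user message. Reply with JSON only.',
      },
      { role: "user", content: rawText },
    ],
  });

  // content is string | null, so fall back to an empty object
  return JSON.parse(response.choices[0].message.content ?? "{}");
}
```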
A lot of very varied things, so it’s hard to remember. Yesterday I used it extensively to determine what I need to buy for a chicken coop: calculating the volume of concrete and cinder blocks needed, the type and number of bags of concrete, how many rolls of chicken wire, how many shingles, questions on techniques and drying times, and how much mortar I would need for the cinder blocks (it took into account that I would mortar only on the edges, the thickness of mortar required for each joint, and the cores in the cinder blocks, and it correctly determined I wouldn’t need mortar on the horizontal axis of the bottom row), etc. I could have done all of this by hand, but instead I was able to sit and literally use my voice to work it out in under five minutes.
I use DALL-E 3 extensively for my woodworking hobby, where I ask it to come up with ideas for different pieces of furniture, and I have constructed several based on those suggestions.
For work I use it to write emails, to come up with skeletons for performance reviews and look-back/look-ahead documents, ideas for what questions to bring up during sprint reviews based on data points I provide it, etc.
Not OP, but I used it very successfully (not OpenAI directly, but some wrapper solution) for technical/developer support.
Turns out a lot of people prefer talking to a bot that gives a direct answer over reading the docs.
Support workload on our Slack was reduced by 50-75% and the output is steadily improving.
I haven't yet seen any competitor come close to what we've achieved at my startup, https://olympia.chat: very human-like assistants crafted specifically for solopreneurs and bootstrapped startups.