The amount of work done by these 7 words is incredible. ChatGPT is far from changing a "lot" of industries. It's still pathetic at programming (which isn't its purpose; that's AlphaCode's purpose, and AlphaCode also sucks), its use for copywriting is nullified since it seems Google will crack down on AI-generated copy, and DALL-E is a nice party trick but far from being particularly useful.
I'd certainly love to have a useful AI, but I think we're in an 80-20 rule situation right now, and it'll be a few years before we see anything that significantly improves on current solutions.
It boggles my mind that people are now dismissive towards technology that would have been literal science fiction _a year ago_ while at the same time being pessimistic about future progress.
I think you’re exaggerating quite a bit. Language models have been evolving for years.
The problem I have with GPT is that it is wonderful at confidently writing things that are completely incorrect. It excels at generating fluff.
I’m not a pessimist; I love this kind of thing. I just understand the delta between an impressive demo and a real, useful product. It’s why self-driving still isn’t pervasive in our lives after being right around the corner for the better part of a decade.
Maybe you think it's a revolution, but older folks see it as an evolution.
It reminds me of the ALICE bot hype of my youth, which in retrospect was just an evolution of the ELIZA hype of 1966.
https://en.m.wikipedia.org/wiki/ELIZA
We are actually far from sci-fi. Where is my flying DeLorean and clean fusion energy for humankind? And as far as AI is concerned, where is HAL 9000?
This is, once again, a sentence that omits a lot of important details.
This happened in an experimental setting, not a production setting. It was a net positive energy output ONLY when accounting for the energy the lasers actually delivered, not the energy consumed by the systems that fired the lasers (which had an efficiency of roughly 1%, though that could be higher with more modern laser hardware). And it was produced by a setup that in no way resembles what current attempts at a production-ready, maintainable fusion reactor look like (tokamaks); it was instead, as stated before, essentially a design meant for experiments where fusion occurs, basically by firing a bunch of powerful lasers at a pellet of fusion fuel placed at their central focus.
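For concreteness, here is a rough back-of-the-envelope sketch of that accounting, using the approximate publicly reported figures for the December 2022 NIF shot. The wall-plug number varies by source, so treat all of these values as illustrative, not exact:

```python
# Rough back-of-the-envelope on the Dec 2022 NIF shot.
# Figures are approximate, publicly reported values (illustrative only).

laser_energy_on_target_mj = 2.05   # energy the lasers delivered to the pellet
fusion_yield_mj           = 3.15   # energy released by the fusion reactions
wall_plug_energy_mj       = 300.0  # rough electricity drawn to fire the lasers

# "Scientific" gain: output vs. laser energy on target.
# This is the number behind the "net positive" headlines.
q_scientific = fusion_yield_mj / laser_energy_on_target_mj   # ~1.5

# Wall-plug gain: output vs. what the facility actually consumed.
q_wall_plug = fusion_yield_mj / wall_plug_energy_mj          # ~0.01

print(f"Q (laser energy only): {q_scientific:.2f}")
print(f"Q (wall plug):         {q_wall_plug:.3f}")
```

In other words, the shot was a net gain only against the ~2 MJ the lasers delivered; against the hundreds of MJ the facility drew to fire them, it returned on the order of 1%.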
LLNL is, and always has been, an experimental laboratory focused primarily on nuclear weapons testing and maintenance, and as such it has the ability to test nuclear fusion (via this inertial confinement setup), since fusion occurs in thermonuclear bombs, of which the US certainly has many in its stockpile.
This test, while a big "milestone", is the equivalent of building a specialized fuel-efficient vehicle that gets 500 miles to the gallon by sacrificing almost everything that makes a car a car, and then claiming that every car on the road will be getting 500 miles to the gallon any day now. In fact, the only thing achieved was the ability to say we've made a car get 500 miles to the gallon.
Just go back and look at GPT-2's output; it's amazing how much better this system is. It still doesn't "understand" anything, but the coherence of what it spits out has gone up drastically.
Buddy, it’s not a demo, it’s a warning of what’s to come.
Stay behind, it makes no difference to anyone but yourself. As for me, I have integrated ChatGPT into my daily work. I have used it to write emails that negotiated a $30k deal, write stories, prototype an app, send a legal threat, brainstorm name and branding ideas, and scope a potential market, and that's just some of the actual productive work I've used it for.
I can’t begin to tell you how much I have played with it for fun and intellectual curiosity.
No. "AI" isn't creating new information complexity. (In fact it's making the world simpler, by regurgitating smooth-sounding statistically average statements.)
Information complexity is the true test of intelligence, and the current crop of "AI" is actually making computing dumber, not smarter.
Yes, "dumber" is often more useful. But the industries "AI" will revolutionize are the kinds where "dumber" is more profitable (e.g., copywriting spam, internet pornography, casual games), so the world will be poorer for it.
The first iPhone forever changed how people use and perceive smartphones as well as how they are built. It only sold 6 million units over the course of 13 months, an average of 15k units / day.
I too can pull up completely irrelevant statistics.
I'm not sure I understand. TapWaterBandit asked for a fairly specific example of something and I gave one. Could you elaborate on what your disagreement is?
I want to see the actual use cases for these less-than-perfect AIs. Only recently have they become useful enough to actually assist with coding, which is indeed impressive, but what else can they really do?
They can answer questions, yes, but it's tough to tell whether they're telling the truth or making stuff up, which is kind of a problem. Code at least compiles or it doesn't, so it's easy to verify.