People keep saying ChatGPT isn't that impressive because it's just "regurgitating knowledge" and has no insight into it, or things along those lines. But I find it insanely impressive that you can specify something like:
"Provide your answer in JSON form. Reply with only the answer in JSON form and include no other commentary."
And it will do exactly that. Or tell it to explain something to you "in the style of Shakespeare".
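On the JSON point, here's a made-up illustration (not an actual transcript, just the shape of what you get back): ask "What is the boiling point of water at sea level, in Celsius?" with that instruction appended, and the entire reply is something like:

```json
{"answer": 100, "unit": "celsius"}
```

No preamble, no "Sure, here's your answer", just the JSON.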
I just asked it about quantum physics as Shakespeare and got this (plus a lot more):
Throughout history there are moments where humans realize they're not special in a way they previously thought they were: the universe doesn't revolve around us, other animals possess skills we thought were unique to us, etc.
I think what's interesting is that many types of creativity may really just be re-synthesizing "stuff we already know."
So a lot of the negative comments along the lines of, "it can't be creative because it never thinks of anything beyond its training data" don't click with me. I think synthesizing two existing concepts into some third thing is actually a form of creativity.
These nets may not learn the same way we do exactly, and they may not possess the same creative abilities as us — but there's definitely something interesting going on. I for one am taking a Beginner's Mind view of it all. It's pretty fascinating.
Just about human intelligence in general. I used to think replacing my software job was a long way off because it is fairly intellectually challenging, but ChatGPT has really changed my opinion on that.
It's funny how many people will immediately poke holes in it for software development, but two years ago I could not have imagined an AI writing code like ChatGPT is doing now.
Being free of defects is the least of the problems. Doing what a customer wants is the major one. One could argue that customers will be able to write their code alone, but they'll always have something else to do and will pay somebody to do it, exactly as for all the other parts of their business. On the other hand, maybe one wouldn't have to know one of the current programming languages, much like we don't have to know machine code nowadays. English and a good dose of perseverance could be enough.
Sounds similar to the arguments against self-driving: "Self-driving itself isn't the challenge. It is self-driving better than humans." AFAIK this is already possible, at least under certain conditions or on certain roads.
I didn't poke any holes in it. I just entered my code, asked it to do some work for me, and it simply failed. The input was a simple 50-line function I copy-pasted from Stack Overflow, and I asked it to do something really easy.
The AI can output a lot of text, but can you input a 100,000-line code base into it? No, you can't. You can't even input 50 lines of code that are already in the data set!
And by failure I mean something akin to a blue screen, not that the output was wrong. There was no output at all!
I wonder, though: could you have it create a list of comments about the function, then use the comments to generate smaller functions, then generate test data for those functions, then build unit tests for them, and finally have it build something that solves the original problem using the smaller functions and the test data?
You don't even have to do that last step, and you'd be doing more debugging than necessary if you did. Just do the elephant carpaccio yourself, then feed it the slices and glue the shit it excretes together, and boom, you have an MVP.
Yeah I now routinely ask it for boilerplate to get things done way faster and it's REALLY good at flawlessly translating from one programming language to another. It'll be a shame when it goes behind a paywall.
On that thought, does anyone even still use Copilot?
I wish the negative comments were right, but from these models it’s clear that the next step in evolution, AGI, is something many of us will experience at some point in the future.
I would stop that kind of evolution, as it could be catastrophic, but I know that humanity is not able to stop itself from evolving further.
I don’t think anyone has demonstrated that AGI is a foregone conclusion. I’m not sure it is possible with a Turing machine. We do not think in any manner like a Turing machine or any computer ever conceived. If we do, no one has provided any evidence of such a claim. Humans can make complex insights with hardly any training and on very few calories.
How do you figure regarding hardly any training? Humans are constantly training on a never ending stream of sensory information from the time their brains form in the womb, not to mention whatever subconscious and conscious processes are reconciling that data with memory, or whatever training has been built into our minds over eons of evolution.
An 18 year old will have been training for ~160,000 hours on a volume of raw data that is probably far beyond our ability to currently store let alone train an AI with.
As far as training for a specific task, all that training on other matters kicks in to help the human learn or accomplish a “novel” task more rapidly, for example, knowing how to read and interpret the instructions for that task, knowing how to move your appendages and the expected consequences of your physical interactions with a material object. You’re certainly not taking a fetus with a blank slate and getting it to accomplish much at all.
It absolutely is hardly any training for the types of problems humans have solved. Ramanujan did not need to consume the entire corpus of mathematics, and he had hardly any formal education at all. He did what he did, again, on very few calories compared to pre-trained AI models. I am not claiming that learning how to pick up a rock does not help you in any way to understand continued fractions, but it is pretty unreasonable to compare an AI, as demonstrated so far, to anything near the same sort of thing.
Sure, we can go with the other option that humans will stay the most intelligent species on Earth until the solar system dies, history just shows that it's quite improbable.
Where does history show that? Are you serious? Humans are an outlier on Earth: no other species has anywhere remotely the same level of intelligence. You may think that is aggrandizing humans, but it is categorically true. You would need to provide some world-up-ending evidence to prove the contrary.
If you are also saying AIs will be a more intelligent species, able to adapt better on Earth than humans, that requires extraordinary evidence. A human can solve complex problems that no other species or machine can solve, on nothing more than a handful of rice for a week. Where is the species or machine at a similar scale?
> I think what's interesting is that many types of creativity may really just be re-synthesizing "stuff we already know."
This is obviously true, and yet we've invented so many things. From the wheel, to control of fire, to farming and animal husbandry, to mathematics, to metallurgy, to physics, to semiconductors, etc.
The interesting question is, was the invention of all those things simply the re-synthesis of "stuff we already know?" If the answer is yes, doesn't that mean we're now on the cusp of something akin to a singularity? We can now synthesize nearly-unlimited streams of coherent human thought. If we had a way of differentiating the wheat from the chaff, we could analyze what would have been millennia of human output in the proverbial blink of an eye. If human knowledge is just "stuff we already know", then we better buckle up. It's about to be a wild ride.
> If we had a way of differentiating the wheat from the chaff
This is the key to AGI. We need verification systems; they can be a code execution environment, a database of facts, a symbolic math engine, a physical simulation, a game, or real-world lab experiments. These verifiers produce a signal that the language models can use to improve. The cheaper and faster verification is, the faster we can iterate. Generating ideas is cheap; proof matters.
Just remember AlphaZero: it started from scratch, playing against itself, and in a few hours it surpassed human level. Go simulation and verification is trivial; the board is just a matrix. So learning from massive search and verification is a proven path to superhuman level.
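The generate-then-verify loop can be sketched in a few lines. This is a toy illustration: the candidates are hard-coded stand-ins for model samples, and the function name `add` is made up for the example. A real system would sample candidates from a model and run them in a sandbox.

```python
# Generate-then-verify: candidate programs only survive if they pass
# a cheap executable check. Candidates here stand in for model samples.
candidates = [
    "def add(a, b): return a - b",  # plausible-looking but wrong
    "def add(a, b): return a + b",  # correct
]

def verify(src):
    """Execute a candidate and test it against known input/output cases."""
    ns = {}
    try:
        exec(src, ns)
        return ns["add"](2, 3) == 5 and ns["add"](-1, 1) == 0
    except Exception:
        return False

# Keep only candidates that pass verification.
accepted = [src for src in candidates if verify(src)]
print(accepted)
```

The point is that verification is cheap relative to generation: running the tests is fast, so you can afford to sample many candidates and filter hard.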
Proof definitely matters. But at this point, as ChatGPT, AlphaZero, and others demonstrate, NNs can solve any problem provided you can express it as a differentiable function and get enough training data. We may be very close to a breakthrough where we can train models that recognize sound, good ideas. And 100% accuracy likely isn’t necessary: even pruning the search space of ideas by a large amount would make humans way more productive.
Especially since there are very few people who can actually be 100% creative. Most human creativity is derivative. I didn't invent a new language; I learned English.
If you think about it, the most creative thing is random noise. Then you apply rules to it to constrain the noise or limit the predictions, and add more rules in layers. So getting the system to produce more creativity is just relaxing the rules at different layers.
I keep describing it as the Enterprise Ship’s Computer. It won’t answer “how do I solve this problem?” But it’ll help you workshop a solution if you do the “thinking.”
…But I’ve also had it clearly tell me in an answer that 2 is an odd number.
But if you actually read Shakespeare, this is nothing like it. In every example I have seen of someone trying to make ChatGPT sound like Shakespeare, it just spits out generic puff, nothing like Shakespeare. Whether or not you think Shakespeare is good, the output doesn’t come anywhere near the complexity, word choice, or rhythm of his writing.
It can still generate an on the fly poem about quantum physics that actually rhymes, which is impressive enough to me.
This was a lengthy poem that went into the finer details of quantum physics, and it all read as decent poetry.
It might not be "Shakespeare" exactly, but it knew in general what I was looking for.
It's also impressive to me that this isn't information that is scraped off some website, like Google. It's actually generating this stuff in real-time and it may have never been seen before, anywhere, ever. That is far beyond anything I've ever encountered.
It has seen very similar concepts, and it has seen a very large amount of information about poetry and quantum physics. The issue is that if it were thinking, it would realize it is producing generic rhyming and nothing like Shakespeare. That is my point.
I feel like we’re now arguing not over whether submarines can swim, but over whether the stroke that they’re doing would really count as good technique or not.
This analogy doesn't make any sense. Obviously submarines don't do any stroke at all. In this case, ChatGPT is being told to deliberately replicate a particular style of poetry. For what it's worth, its poetry is bad, and nothing like Shakespeare. If you don't have an ear for poetry and wouldn't recognize the work of major poets, you aren't in a good position to assess whether ChatGPT replicated a particular style well.
Well, I mean, a human English professor could mimic Shakespeare to a greater or lesser degree, and this doesn't resemble him in any sense other than that it rhymes and sometimes deals with the same or similar subject matter. A lot of ink has been spilled about this signaling the coming of AGI or the end of knowledge work, but there has been precious little critical discussion of the horizon of this type of technology. The technology is clearly very powerful and advanced, but I don't think there is a problem with asking for breathless claims to be substantiated.
Agreed, both ChatGPT and DALL-E feel significantly different in their ability to at least simulate “understanding.” They aren’t perfect by any means, but they’re a big step up from anything I’ve seen before.
The X in Y format is really one of its strengths. I asked for “A truth table for three valued logic in Markdown” and got something totally usable which I could then tweak.
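For reference, here's roughly what that table looks like, reconstructed by hand rather than quoted from the model, and assuming strong Kleene logic (T, U, F, where U is "unknown") and the AND connective:

```python
# Strong Kleene three-valued AND, printed as a Markdown truth table
# like the one described above. "U" is the unknown/indeterminate value.
VALUES = ["T", "U", "F"]
RANK = {"T": 0, "U": 1, "F": 2}  # AND returns the "falser" operand

def kleene_and(a, b):
    """Strong Kleene conjunction: the result is the less-true operand."""
    return a if RANK[a] >= RANK[b] else b

rows = ["| a | b | a AND b |", "|---|---|---------|"]
for a in VALUES:
    for b in VALUES:
        rows.append(f"| {a} | {b} | {kleene_and(a, b)} |")

print("\n".join(rows))
```

Generating it yourself is a handy way to check whether the model's table was actually correct and not just plausible-looking.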
> People keep saying ChatGPT isn't that impressive because it's just "regurgitating knowledge" and has no insight into it, or things along those lines.
Really? This seems like a straw man - I've only seen gobs and gobs of examples showing all the amazing things ChatGPT can do. I have seen some measured comments from real experts helping to explain how ChatGPT works behind the scenes, and this is usually to temper sentiments when folks start going down the "It's sentient!!" route.
I asked it if it wished it were alive and it assured me it is incapable of wishing for anything because it's just an AI, so I think we're safe for now. :)
> If it's a lesser form of intelligence, as you imply
Different != lesser
> Probably, but
Then why even continue speaking if you agree?
> our concepts seem to be tied pretty closely to language.
Me: “ChatGPT, explain how an electron orbiting a hydrogen nucleus would behave if classical mechanics held true.”
ChatGPT: “In classical electromagnetism, an electron orbiting a hydrogen nucleus would behave in a similar way to the way a planet orbits the sun, as described in my previous response. The electron would follow an elliptical path around the nucleus, and the force of attraction between the electron and the nucleus would be given by Coulomb's Law.
According to Coulomb's Law, the force of attraction between two charged particles is given by:
F = k * (q1 * q2) / r^2
Where F is the force of attraction, k is the Coulomb constant, q1 and q2 are the charges of the two particles, and r is the distance between them.
In the case of an electron orbiting a hydrogen nucleus, the force of attraction would be the force that keeps the electron in orbit. The electron would be attracted towards the nucleus by this force, but at the same time, it would be moving with a certain velocity, which would cause it to follow a curved path. This path would be an elliptical orbit, with the nucleus at one of the foci of the ellipse.
The electron's orbit would be stable as long as the force of attraction between the electron and the nucleus was strong enough to keep the electron in orbit, but not so strong that it pulled the electron into the nucleus. This balance would be determined by the size of the ellipse and the velocity of the electron. If the ellipse were too small, the electron would be pulled into the nucleus. If the ellipse were too large, the electron would escape from the orbit.
However, we know that this is not actually how an electron behaves in an atom. In reality, electrons do not follow classical orbits around the nucleus. Instead, they follow a more complex pattern described by quantum mechanics. This is because the principles of quantum mechanics are needed to accurately describe the behavior of very small particles like electrons.”
On the surface this looks impressive. Except that the answer about stable orbits is flat out wrong. The electron would spiral into the nucleus, quite quickly, because of Maxwell’s equations: an accelerating electron radiates electromagnetic energy. This is part of the foundational argument for quantum mechanics. If, as you claim, language and concept are so closely tied, surely a language model with billions (trillions?) of parameters is capable of encoding a relational understanding of this magnitude.
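For the curious, the textbook back-of-the-envelope for that in-spiral goes via the Larmor formula. This is a sketch from memory, so check the constants yourself:

```latex
% Power radiated by an accelerating electron (Larmor formula, SI units):
P = \frac{e^{2} a^{2}}{6\pi \varepsilon_{0} c^{3}}

% Treating the orbit as quasi-circular and equating the radiated power
% to the rate of orbital-energy loss gives the classical collapse time
% from the Bohr radius a_0, with r_0 the classical electron radius:
t \approx \frac{a_{0}^{3}}{4\, r_{0}^{2}\, c} \approx 1.6 \times 10^{-11}\ \mathrm{s}
```

Tens of picoseconds. Classical atoms simply don't survive, which is exactly the point the model's answer missed.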
> If it’s a lesser form of intelligence I almost find the results more impressive.
You get that you’re saying the least common denominator in conversational intelligence makes you go bzzzzz? That the one thing ChatGPT lacks, critical thinking, does not impress you?
You know, in the 90s, they had these things called tamagotchis… you might be interested.
There will always be naysayers stuck in the old way of doing things. Don’t let em get in your head and keep your eyes full of wonder. Incredible things are still ahead.
Here is an excerpt of the Shakespeare-style quantum physics answer I mentioned at the top:
---
Oh sweet youth, listen closely as I impart
The secrets of the quantum realm, a place of art
Where particles and waves, both small and large
Exist in states both definite and in charge
---
That is really fascinating stuff.