
> But now you have an AI that solves it for you.

and then the AI explains how it was solved, step by step. You can repeatedly ask it to clarify, and it has endless patience and won't get bored or sick of you.

Eventually, you understand how the equation is solved. No human teacher would be able to achieve this.




> No human teacher would be able to achieve this.

This immediately reminds me of the scene in Terminator 2 where Sarah Connor is reflecting on how the terminator, reprogrammed to aid John Connor, is the best father figure he’s ever had.

https://www.youtube.com/watch?v=tksN5Jaan9E

It would never leave him, and it would never hurt him. Never shout at him or get drunk and hit him or say he was too busy to spend time with him. It would always be there, and it would die to protect him.


Wow this video is making me want to go and re-watch Terminator 2.

Amazing. What a beautiful scene, words, everything. And it's objectively true.


In some areas, superhuman performance is sadly not that high of a bar...


Plus it has detailed files.


That requires a young student who is really self-motivated to keep asking it to clarify things. Plenty of teachers, parents, and tutors are willing to keep trying new ways of explaining things to kids until they understand it. Without someone pushing him to keep trying or to come back to it regularly, the kid giving up is way more common than the adult giving up on him. Obviously, not every child has someone in his life who cares enough to do that, but I can't see AI as anything other than a subpar substitute.


I guess the difference is that there is a social dynamic at play between a student and an instructor. The student may tire of some aspect of the relationship before they tire of learning the material.

For example, unless the instructor has some kind of Jedi master level self control they may begin to show frustration at being asked the same question repeatedly or with minor variations.


For many, the alternative would be no substitute at all.


If the student is not motivated at all, they will be in the same situation whether or not there's an AI around.

If they choose to use an AI to "cheat" and not learn, then it is no different than them using any other methods to not learn (such as truancy).


Exactly. In school I couldn't understand the Heine-Borel theorem and a professor actually got annoyed at me asking about it even though it was just one time. When ChatGPT was first released one of the first things I did was ask it to explain it to me. Even the 3.0 model was pretty good at it!
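
For reference, the statement being asked about is short to write down (the standard form for subsets of Euclidean space), in LaTeX:

    \[
      S \subseteq \mathbb{R}^n \ \text{is compact} \iff S \ \text{is closed and bounded.}
    \]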


I must admit, I am quite envious of students with these tools. I wish I had this during my studies.

Certainly as envious as my parents' generation must have been of the internet and computers I had.

I'm not sure what to do with this envy. Though, it has made me want to have a child even more, just to see first hand how they could develop so much faster and smarter than me.


What I also love is that you can ask it to explain with varying degrees of simplicity.

You can say ELI5 (like the "Explain Like I'm 5 Years Old" subreddit), ask it to explain as if to a child, or ask it to explain in very simple English (like Simple English Wikipedia).

It is actually going to be the greatest learning tool, but only for people that genuinely want to learn.
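
In practice that looks something like the sketch below. It assumes the OpenAI Python SDK, and the model name, topic, and prompt styles are just placeholders, not recommendations:

    # Sketch: asking for the same explanation at several levels of simplicity.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
    # the model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()
    topic = "why dividing both sides of an equation keeps it balanced"

    for style in ("Explain like I'm 5", "Explain to a high-school student", "Explain rigorously"):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": f"{style}: {topic}"}],
        )
        print(f"--- {style} ---")
        print(response.choices[0].message.content)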


>Eventually, you understand how the equation is solved. No human teacher would be able to achieve this.

One of the things I have hugely enjoyed about LLMs is asking the same question again and again until I finally get why it is the way it is.

No human could deal with "I don't get it, try explaining it another way..." after the 10th time.


A human teacher can see the problem from the student's perspective and understand the error they make and why they make it. No current AI would be able to achieve this. But alas, few human teachers can or will take the time to do this per student. Infinite time and patience really are the AI's superpower here, IMO.


> A human teacher can see the problem from the student's perspective ... No current AI would be able to achieve this

Not all teachers are able to see the POV of the student. In fact, I would argue that only exceptionally good teachers are able to.

Whether AI can achieve this level of understanding remains to be seen, but I would hope it is possible.


But sometimes it makes up answers that are wrong but sound plausible to you. You have no way to tell when it does this, and neither does it.


Neither can you easily notice that when a human teacher/tutor does that. At least with math, you're able to try it yourself and see if it works. I've had many cases where following through on a misleading explanation by a teacher/book actually ended up leading to me better retaining the topic.


Or when you tell it you think the answer is something else, it agrees with you and apologizes that yes, now it can see, 2+2 does equal 5.


That's absolutely a bug that needs to be fixed, but I think it's possible. Maybe have the network which generates the answer be moderated by another network that assesses the truthiness of it.

It's just a matter of priorities for the company designing the models.
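
A rough sketch of that idea is below. It just uses a second prompt as the "verifier" rather than a separately trained critic network, and assumes the OpenAI Python SDK with a placeholder model name:

    # Sketch of generate-then-verify: one call drafts an answer, a second
    # call judges it. This is prompt-level checking only, not a trained
    # "truthiness" network.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    question = "What is 2 + 2?"
    answer = ask(question)
    verdict = ask(
        f"Question: {question}\nProposed answer: {answer}\n"
        "Reply with exactly OK if the answer is correct, otherwise WRONG."
    )
    if verdict.strip().upper().startswith("OK"):
        print(answer)
    else:
        print("Verifier flagged the answer; retry or escalate to a human.")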


It might be possible, but nobody knows for sure, because these models are rather more mysterious than their architecture suggests.

> Maybe have the network which generates the answer be moderated by another network that assesses the truthiness of it.

Like a GAN? Sometimes you can do that, but it seems not always.

If this was simple and obvious, they'd have done it as soon as the first one was interesting-but-wrong.


Especially in a limited domain like grade-school math, it seems entirely plausible that we can have models in very short order that ~never hallucinate. There are no external dependencies, and the problem space is extremely well defined and constrained. Much, much, much easier than making something like ChatGPT never hallucinate.


That is the entire purpose of LLMOps: providing guardrails to prevent hallucination and ensuring precise control of GenAI output.


How can you tell what's true or not?


You have to develop your own QA methods to ensure output is exactly what you want.
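
For the grade-school-math case discussed above, one such QA method is to re-check the model's final number deterministically instead of trusting the prose. A minimal sketch, where the model output is a hard-coded stand-in for what an LLM call would return:

    # Sketch of a guardrail: extract the model's final number and re-verify
    # it with a deterministic computation (sympy here). The model output is
    # a hard-coded stand-in; in practice it would come from an LLM call.
    import re
    from sympy import sympify

    problem = "12 * (3 + 4)"
    model_output = "Step 1: 3 + 4 = 7. Step 2: 12 * 7 = 84. The answer is 84."

    claimed = float(re.findall(r"-?\d+(?:\.\d+)?", model_output)[-1])
    expected = float(sympify(problem))

    if claimed == expected:
        print("Answer passes the deterministic check:", claimed)
    else:
        print(f"Hallucination caught: model said {claimed}, expected {expected}")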


> no human teacher would be able to achieve this.

They would, but only if you were rich enough to afford a personal tutor. There's plenty of scientific evidence that students who have one learn a lot faster than those stuck in a classroom. With AI, you can probably get most of the benefits of a personal tutor for a fraction of the price.


There are already multiple websites that will solve math problems that the user submits and then walk through the individual steps one by one. I'm pretty sure the solutions are computed symbolically rather than by a trained AI.
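
A minimal sketch of that symbolic approach using sympy; this is a stand-in for the computer algebra systems such sites are presumably built on, and the step-by-step walkthrough is the part they layer on top:

    # Sketch: solving an equation symbolically with a computer algebra
    # system (sympy), i.e. the non-AI approach described above.
    from sympy import symbols, Eq, solve

    x = symbols("x")
    equation = Eq(2 * x + 3, 11)  # 2x + 3 = 11

    print(solve(equation, x))  # [4]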


> Eventually, you understand how the equation is solved. No human teacher would be able to achieve this.

Not true; a human tutor could, but only at great expense.


A technicality that is irrelevant to most people.



