
I got to see that yesterday with my daughter: her algebra book used PV=nRT as an example of joint proportion and asked some questions about it. She, having never seen the ideal gas law, was quite thrown by it.

That said, after we went through it and had a brief physics lesson, it worked quite well and I'm glad they used the example instead of just making something up -- but it required having a tutor (me) on hand to help make the context make sense.
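In case it's useful to anyone else, here's roughly how the example reads as a joint proportion (my paraphrase, not the book's exact wording):

    PV = nRT    (P: pressure, V: volume, n: amount of gas, T: temperature, R: a constant)

    Solving for P:  P = nRT / V

So P varies jointly with n and T and inversely with V: double the temperature at fixed n and V and the pressure doubles; double the volume at fixed n and T and the pressure halves.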




Now imagine you're reading a mathematics textbook and it uses an example from something you really have no interest in (which for me would be finance). It includes terminology you have to learn - terminology that has no bearing on the math.

As discouraging as it is to learn math without context, it's even more discouraging to learn it in a context you hate.


I have no interest in finance but I liked this book:

https://nononsense.gumroad.com/l/physicsfromfinance


> required having a tutor (me) on hand to help make the context make sense.

I wonder how good ChatGPT would be as that tutor. You can ask it to “explain like I’m 12”


It's almost unbelievably good as a tutor. Still, you need to check everything it tells you. Treat verification as part of the lesson.


ChatGPT is an unbelievably bad tutor if what you want a tutorial about is even a little bit obscure (e.g. the answer you want isn't already included in Wikipedia). It just confidently states vaguely plausible-sounding made-up nonsense; when you ask it if it was mistaken, it shamelessly makes up different total nonsense; if you ask it for sources, it makes up non-existent sources; and when you finally look the thing up for yourself, you spend twice as long as you originally would have chasing down wrong paths and trying to figure out exactly which parts ChatGPT was wrong about (most of them).

And that's assuming you are a very savvy and media literate inquirer with plenty of domain expertise.

In cases where the answer you want was already easily findable, ChatGPT is still wrong about a lot of it, and you could have more easily gotten a (mostly) correct answer by looking at standard sources, or, if you want to be more careful, by tracking down their actual cited sources or doing a skim search through the academic literature.

If you ask it about a topic you are not already an expert in, or if you are e.g. an ordinary high school or college student, you are almost certainly coming away from the conversation with serious misconceptions.


> ChatGPT is an unbelievably bad tutor if what you want a tutorial about is even a little bit obscure

That has absolutely not been my experience at all. It's brought me up to speed in areas from ML to advanced DSP that I'd been struggling with for a long time.

How long has it been since you used it, and what did you ask it?


> It's brought me up to speed in areas from ML to advanced DSP that I'd been struggling with for a long time.

Are you sure it did? Or did it just convince you that you understood it?


If the code I wrote based on my newly-acquired insight works, which it does, that's good enough for me.

Beyond that, there seems to be some kind of religious war in play on this topic, about which I have no opinion... at least, none that would be welcomed here.


ML and DSP are both areas where buggy code seems to work, but actually gives suboptimal performance / wrong results. See: https://karpathy.github.io/2019/04/25/recipe/#2-neural-net-t...

> The “possible error surface” is large, logical (as opposed to syntactic), and very tricky to unit test. For example, perhaps you forgot to flip your labels when you left-right flipped the image during data augmentation. Your net can still (shockingly) work pretty well because your network can internally learn to detect flipped images and then it left-right flips its predictions. Or maybe your autoregressive model accidentally takes the thing it’s trying to predict as an input due to an off-by-one bug. Or you tried to clip your gradients but instead […]

> Therefore, your misconfigured neural net will throw exceptions only if you’re lucky; Most of the time it will train but silently work a bit worse.
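To make that kind of silent failure concrete, here's a minimal sketch of the flipped-label bug Karpathy describes (my own toy code, not from the linked post):

    import numpy as np

    def augment(image, label_is_left):
        # Randomly mirror the image left-right.
        # BUG: the left/right label should be mirrored too, but isn't.
        if np.random.rand() < 0.5:
            image = image[:, ::-1]
            # label_is_left = not label_is_left   # <-- the forgotten line
        return image, label_is_left

Nothing crashes; the net just trains against half-wrong labels and quietly performs worse than it should.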


Actually Karpathy is a good example to cite. I took a few months off last year and went through his "Zero to hero" videos among other things, following along to reimplement his examples in C++ as an introductory learning exercise. I spent a lot of time going back and forth with ChatGPT to understand various aspects of backpropagation through operations including matmuls and softmax. I ended up well ahead of where I would otherwise have been, starting out as a rank noob.
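For what it's worth, the softmax case is a nice example of why that back-and-forth helps: combined with cross-entropy, the gradient collapses to probs - one_hot(target), which you can sanity-check numerically. A minimal sketch of the check (in Python here, just for illustration):

    import numpy as np

    def softmax_xent(logits, target):
        z = logits - logits.max()                # subtract max for numerical stability
        probs = np.exp(z) / np.exp(z).sum()
        loss = -np.log(probs[target])
        grad = probs.copy()
        grad[target] -= 1.0                      # d(loss)/d(logits) = probs - one_hot(target)
        return loss, grad

    logits, target = np.array([1.0, 2.0, -0.5]), 1
    loss, grad = softmax_xent(logits, target)
    eps = 1e-6
    for i in range(len(logits)):
        bumped = logits.copy(); bumped[i] += eps
        numeric = (softmax_xent(bumped, target)[0] - loss) / eps
        assert abs(numeric - grad[i]) < 1e-4     # analytic gradient matches finite differences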

Look: again, this is some kind of religious thing where a lot of people with vested interests (e.g., professors) are trying to plug the proverbial dike. Just how much water there is on the other side remains to be seen. But finding ways to trip up a language model by challenging its math skills isn't the flex a lot of you folks think it is... and when you discourage students from taking advantage of every tool available to them, you aren't doing them the favor you think you are. AI got a hell of a lot smarter over the past few years, along with many people who have found ways to use it effectively. Did you?

With regard to being fooled by buggy code or being satisfied with mistaken understanding, you don't know me from Adam, but if you did you'd give me a little more credit than that.


I'm not a professor and I don't have any vested interest in ChatGPT being good or bad. It just isn't currently useful for me, so I don't use it. In my experience so far it's basically always a waste of my time, but I haven't really put in that much work to find places where it isn't.

It's not a religious thing. If it suddenly becomes significantly better at answering nontrivial questions and stops confidently making up nonsense, I might use it more.


> following along to reimplement his examples in C++ as an introductory learning exercise.

Okay, that? That's not what people are usually doing when they say they used ChatGPT as a tutor. It sounds more like you used it as a rubber duck.


When the duck talks back, you sit up and listen. Or at least, I do.


You are obviously experienced and have knowledge of advanced abstract topics.

For you, using ChatGPT as an NLP-driven, flawed search mechanism is fine, and even more efficient than some alternatives.

Advocating that it would be just as useful and manageable by inexperienced young students with far less context in their minds is disingenuous at best.


I have tried asking it all sorts of questions about specific obscure word etymologies and translations, obscure people's biographies (ancient and modern), historical events, organizations, academic citations, mathematical definitions and theorems, physical experiments, old machines, native plants, chemical reactions, diseases, engineering methods, ..., and it almost invariably flubs every question I throw at it, sometimes subtly and sometimes quite dramatically, often making up abject nonsense out of whole cloth. As a result I don't bother too much; I've found it to waste more time than it saves. To be fair, the kinds of questions I would want a tool like this to answer are usually ones I would have to spend some time and effort hunting to answer properly, and I'm pretty fast and effective at finding information.

I haven't tried asking too much about questions that I could trivially answer some other way. If what you want to know can be found in any intro undergrad textbook or standard dictionary (or Wikipedia), it's plausible that it would be better able to parrot back more or less the correct thing. But again, I haven't done much of this, preferring to just get hold of the relevant dictionary or textbook and read it directly.

I'll give you an example. I just now asked chatgpt.com what Lexell's theorem is and it says this:

> Lexell's theorem is a result in geometry related to spherical triangles. Named after the mathematician Michel Léonard Jean Leclerc, known as Lexell, it states: ¶ In a spherical triangle, if the sum of the angles is greater than π radians (or 180 degrees), then the spherical excess (the amount by which the sum of the angles exceeds π) is equal to the area of the spherical triangle on a unit sphere. ¶ In simpler terms, for a spherical triangle, the difference between the sum of its angles and π radians (180 degrees) gives the area of the triangle when the sphere is of unit radius. This theorem is fundamental in spherical geometry and helps relate angular measurements directly to areas on a sphere.

This gets the basic topic right ("is a result in geometry related to spherical triangles", involves area or spherical excess) but everything else about the answer, starting with the mathematician's identity, is completely wrong.

If I tell it that this is incorrect, it repeats a random assortment of other statements, none of which is actually the theorem I am asking about. E.g.

> [...] In a spherical triangle, if you have a spherical triangle with vertices A, B, and C, and the sides of the triangle are a, b, and c (measured in radians), then: ¶ cos⁡(a)cos⁡(b) + sin⁡(a)sin⁡(b)cos⁡(C) = cos⁡(c). [...]

or

> [...] In a spherical polyhedron, the sum of the angles at each vertex is equal to 2π radians minus the sum of the interior angles of the faces meeting at that vertex. [...]

If you want to know what Lexell's theorem actually is, you can read the Wikipedia article I wrote last year: https://en.wikipedia.org/wiki/Lexell%27s_theorem

> every spherical triangle with the same surface area on a fixed base has its apex on a small circle, called Lexell's circle or Lexell's locus, passing through each of the two points antipodal to the two base vertices.
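For comparison, the statement ChatGPT kept circling around above is essentially Girard's theorem (spherical excess equals area), not Lexell's. Roughly:

    Girard:  area = A + B + C - pi    (angles of a spherical triangle on the unit sphere)

    Lexell:  fix the base AB and the area; the locus of the apex C is a small circle
             passing through A* and B*, the points antipodal to A and B.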

The problem ChatGPT has is that it's not able to just say something true but incomplete such as "I'm not sure what Lexell's theorem is or who Lexell was, but I know the theorem has something to do with spherical trigonometry; maybe it could be found in the more comprehensive books about the subject such as Todhunter & Leathem 1901 or Casey 1889".

Instead it just authoritatively spouts one bit of nonsense after another. (Every topic I have ever tried asking it about in detail is more or less the same.) The incorrect statements range from subtly wrong (e.g. two different things with similar names got conflated and some of the properties of the more common one were incorrectly applied to the other) to complete nonsense (jumbles of technical jargon strung together that are more or less gibberish). It's clear if you read carefully about any technical topic that it doesn't actually understand what it is saying, and is just combining bits of vaguely related material. Answers to technical questions are almost never entirely technically accurate unless you ask a very standard question about a very basic topic.

Anyone using it for any purpose should (a) be already pretty media literate with some domain expertise, and (b) be willing to carefully verify every part of every statement.


Can't argue with that. Your earlier point is the key: "e.g. the answer you want isn't already included in Wikipedia." Anything specialized enough not to be covered by Wikipedia or similar resources -- or where, in your specific example, the topic was only recently added -- is not a good subject for ChatGPT. Not yet, anyway.

Now, pretend you're taking your first linear algebra course, and you don't quite understand the whole determinant thing. Go ask it for help with that, and you will have a very different experience.

In my own case, what opened my eyes was asking it for some insights into computing the Cramér-Rao bound in communications theory. I needed to come up to speed in that area a while back, but I'm missing some prereqs, so textbook chapters on the topic aren't as helpful as an interactive conversation with an in-person tutor would be. I was blown away at how effective GPT4o was at answering follow-up questions and imparting actionable insights.
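(For anyone unfamiliar, the bound itself is compact - it's the follow-up questions about applying it that need the back-and-forth. Roughly: for any unbiased estimator θ̂ of a parameter θ,

    var(θ̂) >= 1 / I(θ)

where I(θ) is the Fisher information of the observations.)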


A problem, though, is that it is not binary. There is a whole spectrum of nonsense, and if you are not a specialist it is not easy to judge the accuracy of a reply. Sometimes by chance you end up asking for something the model knows about for some reason, but very often not. That is the worst part of it. Students might rely on it in their first year because it worked a couple of times, and then learn a lot of nonsense among the truthy facts LLMs tend to produce.

The main problem is not that they are wrong; it would be simpler if they were wrong all the time. Given that, recommending that students use them as tutors is really not a good idea, unless what you want is overconfidently wrong students (I mean, more than some of them already are). It's not random doomsayers saying this; it's university professors and researchers with advanced knowledge - exactly the people who should be trusted on this kind of thing, more than AI techbros.


We could probably find a middle ground for agreement if we said, "Don't use current-gen LLMs as a tutor in fields where the answer can't be checked easily."

So... advanced math? Maybe not such a good idea, at least for independent study where you don't have access to TAs or profs.

I do think there's a lot of value in the ELI5 sense, though. Someone who spends time asking ChatGPT4 about Galois theory may not come away with the skills to actually pass a math test. But if they pursue the conversation, they will absolutely come away with a good understanding of the fundamentals, even with minimal prior knowledge.

Programming? Absolutely. You were going to test that code anyway, weren't you?

Planning and specification stages for a complex, expensive, or long-term project? Not without extreme care.

Generating articles on quantum gravity for Social Text? Hell yeah.


No, I don't support this.

A statement I would support is: "Don't use LLMs for anything where correctness or accuracy matters, period, and make sure you carefully check every statement they make against some more reliable source before relying on it. If you use LLMs for any purpose, make sure you have a good understanding of their limitations, some relevant domain experience, and are willing to accept that the output may be wrong in a wide variety of ways, from subtle to total."

There are many uses where accuracy may not matter: loose machine translation to get a basic sense of what topic some text is about; good-enough OCR or text to speech to make a keyword index for searching; generation of acceptably buggy code to do some basic data formatting for a non-essential purpose; low-fidelity summarization of long texts you don't have time to read; ... (or more ethically questionably, machine generating mediocre advertising copy / routine newspaper stories / professional correspondence / school essays / astroturf propaganda on social media / ...)

But "tutoring naïve students" seems currently like a poor use case. It would be better to spend some time teaching those students to better find and critically examine other information sources, so they can effectively solve their own problems.

Again, it's not only old theorems where LLMs make up nonsense, but also (examples I personally tried) etymologies, native plants, diseases, translations, biographies of moderately well known people, historical events, machines, engineering methods, chemical reactions, software APIs, ...

Other people have complained about LLMs making stuff up about pop culture topics like songs, movies, and sports.

> good understanding of the fundamentals

This does not seem likely in general. But it would be worth doing some formal study.


> Anything specialized enough not to be covered by Wikipedia or similar resources [...] is not a good subject for ChatGPT.

Things don't have to be incredibly obscure to make ChatGPT completely flub them (while authoritatively pretending it knows all the answers), they just have to be slightly beyond the most basic details of a common subject discussed at about the undergraduate level. Lexell's theorem, to take my previous example, is discussed in a wide variety of sources over the past 2.5 centuries, including books and papers by several of the most famous mathematicians in history, canonical undergraduate-level spherical trigonometry textbooks from the mid 20th century, and several easy-to-find papers from the past couple decades, including historical and mathematical surveys of the topic. It just doesn't happen to be included in the training data of reddit comments and github commit messages or whatever, because it doesn't get included in intro college courses so nobody is asking for homework help about it.

If you stick to asking single questions like "what is Pythagoras's theorem" or "what is the most common element in the Earth's atmosphere" or "who was the 4th president of the USA" or "what is the word for 'dog' in French", you are fine. But as soon as you start asking questions that require knowledge beyond copy/pasting sections of introductory textbooks, ChatGPT starts making (often significant) errors.

As a different kind of example, I have asked ChatGPT to translate straightforward sentences and gotten back a translation with exactly the opposite meaning intended by the original (as verified by asking a native speaker).

The limits of its knowledge and response style make ChatGPT mostly worthless to me. If something I want to know can be copy/pasted from obvious introductory sources, I can already find it trivially and quickly. And I can't really trust it even for basic routine stuff, because it doesn't link to reliable sources which makes its claims unnecessarily difficult to verify. Even published work by professionals often contains factual errors, but when you read them you can judge their name/reputation, look at any cited sources, compare claims from one source to another, and so on. But if ChatGPT tells you something, you have no idea if it read it on a conspiracist blog, found it in the canonical survey paper about the topic, or just made it up.

> Go ask it for help [understanding determinants], and you will have a very different experience.

It's going to give you the right basic explanation (more or less copy/pasted from some well written textbook or website), but if you start asking follow-up questions that get more technically involved, you are likely to hit serious errors within not too many hops - errors which reveal that it doesn't actually understand what a determinant is, but only knows how to selectively regurgitate/paraphrase from its training corpus (and routinely picks the wrong source to paraphrase or mashes up two unrelated topics).

You can get the same accurate basic explanation by doing a quick search for "determinant" in a few introductory linear algebra textbooks, without really that much more trouble; the overhead of finding sources is small compared to the effort required to read and think about them.
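To be concrete about what "actually understanding what a determinant is" has to include, here's a minimal sketch (mine, purely illustrative) of the cofactor definition, checked against the multiplicative property det(AB) = det(A)det(B) - exactly the kind of follow-up where I'd expect the paraphrasing to break down:

    import numpy as np

    def det(M):
        # Determinant by cofactor (Laplace) expansion along the first row.
        n = len(M)
        if n == 1:
            return M[0][0]
        total = 0.0
        for j in range(n):
            minor = [row[:j] + row[j+1:] for row in M[1:]]   # drop row 0 and column j
            total += (-1) ** j * M[0][j] * det(minor)
        return total

    A = [[2.0, 1.0], [5.0, 3.0]]
    B = [[1.0, 4.0], [0.0, 2.0]]
    AB = (np.array(A) @ np.array(B)).tolist()
    assert abs(det(AB) - det(A) * det(B)) < 1e-9   # det(AB) = det(A) * det(B)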


Are you using the free version? GPT 4 Turbo (which is paid) gives this:

> Lexell's theorem is a result in geometry related to triangles and circles. Named after the mathematician Anders Johan Lexell, the theorem describes a special relationship between a triangle and a circle inscribed in one of its angles. Here's the theorem:

> Given a triangle \(ABC\) and a circle that passes through \(B\) and \(C\) and is tangent to one of the sides of the angle at \(A\) (say \(AB\)), the theorem states that the circle's other tangent point with \(AB\) will lie on the circumcircle of triangle \(ABC\).

> In other words, if you have a circle that touches two sides of a triangle and passes through the other two vertices, the point where the circle touches the third side externally will always lie on the triangle's circumcircle. This theorem is useful in solving various geometric problems involving circles and triangles.


This is still completely incorrect. (Though now it has the mathematician's name right.)


So, I'm a professor, and I have, um, really strong opinions about this. :-) Perhaps too strong and long for the current forum. But I'll see if I can be brief.

ChatGPT is really, really good at providing solid answers of varying levels of detail and complexity to hyper-common questions such as those used in problem sets. This is one part of the skill set of a tutor, and it's a valuable one.

When I interview TAs for my classes, however, I actually put a lot more emphasis on a different skill: The ability to get into a student's head and understand where their conceptual difficulty or misunderstanding is. This is a very different skill, and it's one that ChatGPT isn't as good at, because we've gone from "maximum likelihood answers from questions that are in the middle of the distribution" into a wide range of possible sources of confusion, which the student may lack the words to explain in a precise way.

In the case of my kid, the PV=nRT question manifested as "I don't get it!" (with more exclamation points).

Asking ChatGPT (well, Copilot, since I have institutional access to that, but it uses ChatGPT) to help understand the problem: it digressed and introduced Boyle's Law, threw in a new symbol I (ok, the 12 year old) had never seen for "proportional to", and ... in some sense just added to the cognitive overload.

The human approach was to ask a question: Have you ever seen this equation before? (No) Oh! Well, let's talk a little about gases..

Now, responding to ChatGPT and asking "No, that didn't help. Please ELI5 instead?" actually produced a much better answer: An analogy using a balloon. Which, amusingly, is exactly how I explained the behavior of gases to her.

But even here, there's a bit of a difference: in explaining it to her, I did so Socratically:

"Ok, so imagine a balloon. If you heat the air inside the balloon, what happens?"

"Um, it gets bigger, right?"

"Yup, ..." (and now, knowing that she got that part, we could go on...)

That's something you can absolutely imagine trying to program around an LLM, but it's not a native way of interacting with it.

So ... I'd instead be a little more cautious here and say that ChatGPT potentially provides a really useful piece of what a human tutor offers, but it loses on the interactive exchange that helps much more rapidly zoom in on the source of confusion and correct it. Assuming that it's right.

I think that for a particularly sophisticated consumer, it can be more valuable, but it requires knowing what you don't know, in some sense: The ability to already isolate what you're confused by. Once you know the question to ask, ChatGPT can often provide it -- again, assuming that some quirk of its training or your phrasing doesn't cause it to generate an answer that's wrong in some way.


> The ability to get into a student's head and understand where their conceptual difficulty or misunderstanding is.

I experienced that first hand as someone who just enjoyed math and had taken several courses at uni, and who tried to help my SO and a few friends who struggled hard with different pre- or entry-level college math courses. They all needed quite different approaches to be able to understand the material.

For one of them, I had to go all the way back and redo basic algebra, as they'd had a poor teacher who hadn't properly taught it. This manifested in not understanding steps, not being able to solve equations properly, and so on.

Another really didn't get the visual, graph-based explanation of the derivative of composite functions, and instead got it by deriving the formula and using it in several examples - an approach which didn't work with the others, as they needed the graph as a reference or motivation.

It was a very interesting experience, and, as you say, a very different challenge from just knowing the source material well.


I wonder if you are getting very different results with Copilot relative to what I've seen lately from ChatGPT4o. It did a pretty good job with the successive "Explain Galois theory like I am 5/16/133" prompts that I tried earlier. Seems likely that the chat system prompt is different from Copilot's, but who knows how.

I will say that I have asked it to solve gas-law problems before, back when the base ChatGPT4 model was in use, and it didn't do so well there. It came up with the correct logic but I found it hard to get it to use consistent units, so the numbers kept coming out wrong.
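The units thing is easy to see by hand; a minimal sketch of doing the bookkeeping consistently in SI (my own toy numbers, not the problem I gave it):

    # How many moles of gas in a 10 L container at 2 atm and 25 °C?
    R = 8.314          # J/(mol*K)
    P = 2 * 101325.0   # atm -> Pa
    V = 10 / 1000.0    # L -> m^3
    T = 25 + 273.15    # °C -> K
    n = P * V / (R * T)
    print(round(n, 3)) # ~0.817 mol; plug in atm, L and °C directly and you get nonsense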

In general I find it super useful at improving my conceptual understanding of things -- it's at least as good as a good human tutor, at levels up to and including fairly-advanced signal processing topics -- but the details are less reliable. Feeding the same question to Gemini is a good way to double-check the numbers in my experience. If they agree, the numbers are probably right, while if they don't, then it's likely they're both wrong.


Can you ask it to explain Galois theory like he's 21?


----------------

"Let’s imagine Évariste Galois survived that fateful duel and continued his work. What would he have done next? If Galois had lived past 21, he would have likely expanded and deepened his revolutionary ideas. By that time, he had already laid down the foundation of what we now call Galois theory, but there were still many directions he could have explored further.

Generalization of Galois Theory: Galois had just scratched the surface of the powerful connection between field theory and group theory. His next logical step might have been to generalize his ideas to more complex situations, such as understanding the symmetries of solutions to equations in multiple variables or extending his work to non-polynomial equations.

Exploration of Solvable Groups: Galois introduced the concept of solvable groups to determine whether a polynomial equation could be solved by radicals (using addition, multiplication, and roots). If he had lived longer, he would likely have delved deeper into classifying different types of groups and exploring their properties. This could have led to a more detailed theory of group structures, potentially anticipating parts of what we now know as modern group theory.

Linking Galois Theory with Other Areas: Galois might have explored connections between his theory and other areas of mathematics, such as geometry, number theory, or even early versions of what we now call topology. For instance, he could have investigated how the symmetries of geometric shapes relate to the symmetries of algebraic equations.

Developing Further Applications: Beyond theory, Galois could have sought applications of his work in more practical areas. Perhaps he would have applied his ideas to solving problems in physics, mechanics, or other emerging fields, where symmetry plays a crucial role.

Inspiring the Next Generation: If Galois had lived, he might have become a prominent teacher and mentor, influencing a new generation of mathematicians. His radical ideas would have challenged and inspired others to push the boundaries of mathematics even further. Galois could have spearheaded a new movement in algebra, similar to how Einstein revolutionized physics.

Refining His Work: Galois' original manuscripts were hastily written under dire circumstances. Given more time, he would likely have polished and clarified his ideas, making them more accessible to other mathematicians of the time. This might have sped up the acceptance and expansion of his ideas in the mathematical community."

----------------

I'd give it a C for effort, I guess. I don't know enough about the topic to ask it for more specific predictions. At the end of the day it can only work with what it was trained with, so any genuine insights it comes up with will be due more to accident than inspiration.


I usually just downvote and move on, but this one's actually interestingly wrong.

> solutions to equations in multiple variables

Multivariate Galois theory is a thing. See e.g. https://icerm.brown.edu/materials/Slides/htw-20-mgge/Galois%...

> extending his work to non-polynomial equations

This is like a forester extending their work to non-forests. The person can learn to do other things, but those things aren't in any way an extension of forestry.

> Exploration of Solvable Groups […] Linking Galois Theory with Other Areas

This doesn't say anything.

> Perhaps he would have applied his ideas to solving problems in physics, mechanics, or other emerging fields, where symmetry plays a crucial role.

Still isn't saying anything, but if I pretend this has meaning: he was born about a century early for that.

> he might have become a prominent teacher and mentor, influencing a new generation of mathematicians.

He's far more likely to have been a political revolutionary. By the time of his death, academia had excluded him about as much as was possible.

> Given more time, he would likely have polished and clarified his ideas, making them more accessible to other mathematicians of the time.

Probably!


Ah yes, I know that exact problem. AoPS Intro to Algebra. That question was great for getting an intuitive sense of proportion, though I do remember one of the sub-problems gave me some trouble.


That's the one! It was a very nice example. I suspect some students could ignore the physics, but my daughter needed to walk through the physical interpretation of the components before getting into the math.

From my perspective as a tutor, it was a good use of time (gotta learn it some day anyway, and it provides useful physical intuition throughout life), but I could see it causing frustration if someone just wanted to learn algebra or didn't have a resource to turn to.

(Love those books. I went and asked all of my colleagues who had won teaching awards what books they recommended, and all of them said AoPS.)


I owe a great deal of my mathematical maturity to going through nearly every AoPS book published in middle and high school :)



