Hacker News

Is this a reasonable comparison? I'm unfamiliar with these works, but would someone be able to work through them without the basic math knowledge they acquired in school? If not, then they are not an alternative to math in school.



I can say I've worked through four of his books from cover to cover. You're right. The learning sits on top of (some) math. In my case, actually quite a lot of math post-high school. But I think a 14 year old could finish the books as well, or at least 90%.

My eldest is 10. He is good at math because we do math games all the time. I can’t get algebra to sink in properly yet. Doesn’t really click in the time we have and with the teacher he has (me). But in Python he is giving variables a name all the time. Perhaps there is something about programmatic math that helps, not hinders education.


> can’t get algebra to sink in properly yet. Doesn’t really click in the time we have and with the teacher he has (me). But in Python he is giving variables a name all the time. Perhaps there is something about programmatic math that helps, not hinders education.

On the hinders side though, I can understand it being confusing to try to grasp:

    y = x + 2
As being sort of 'constantly updating', referring to x in the abstract, when you've just learnt that Python will assign (once) the result of computing x (at that time) + 2 to y.
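For instance, a plain Python session makes the one-shot behaviour concrete (the names here are arbitrary):

```python
x = 3
y = x + 2    # y is computed right now, from the current value of x
print(y)     # 5

x = 10       # rebinding x later does not touch y
print(y)     # still 5
```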

You could def y as being x() + 2, but you might just be delaying the confusion until you try to introduce functions in math and find yourself saying 'no no, that was just a hack to get Python to behave more like algebra'.
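The function version of that hack, sketched in Python (here x is just a module-level name the lambda closes over):

```python
x = 3
y = lambda: x + 2   # defer the computation; y refers to x in the abstract
print(y())          # 5

x = 10
print(y())          # 12, tracks the current value of x
```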

Is there any language that allows this I wonder? Where you can do something like:

    y = &(*x + 2)
That is, y is a pointer to the result of adding 2 to the value pointed to by x, and this is somehow kept up to date: by tracking references like garbage collection, perhaps, or at compile time by inserting the code to update y whenever x is updated. (Names would need to be globally unique, I suppose, or else x transformed into some kind of struct that could contain y and other dependents, and then also a link from y back to x so you can remove it if y's definition changes... but that's a lot of load for the humble assignment operator!)
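That "struct with dependents" idea can be roughed out in user-space Python rather than language-level pointer magic; all names below (`Cell`, `define`, `set`) are made up for this sketch:

```python
class Cell:
    """A value plus a list of dependent cells to recompute when it changes."""
    def __init__(self, value=None):
        self.value = value
        self._dependents = []   # (target cell, recompute function) pairs

    def define(self, target, compute):
        """Keep `target` equal to compute() whenever this cell changes."""
        self._dependents.append((target, compute))
        target.value = compute()

    def set(self, value):
        self.value = value
        for target, compute in self._dependents:
            target.set(compute())   # propagate through chains of dependents

x = Cell(3)
y = Cell()
x.define(y, lambda: x.value + 2)   # "y = x + 2", kept up to date
print(y.value)  # 5
x.set(10)
print(y.value)  # 12
```

This is essentially what reactive/dataflow libraries do; the cost the comment worries about shows up here as the explicit bookkeeping in `define` and `set`.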


The opposite confusion is actually something that really surprised me when working with beginner programmers for a while. The expectation that y = 2 + x sets up some constant relationship, where y updates along with x, is to be expected from people who've been doing math their whole lives and not programming, but it is pretty confusing to figure out their confusion the first time.


It had never really occurred to me before writing that comment. Now I think it's really curious not only that there is that stark difference, but also that it's common across all programming languages I'm aware of (I'm not counting Excel, sorry).

It might be tempting to think it's procedural vs. declarative, but it's not really: you could procedurally evaluate that y should from now on be maintained at such a value. Admittedly, you can't really do it any other way in declarative terms. But what allows you to do computation like that? Datalog, maybe?

I'm ignoring Matlab/Octave/etc. because I'm not that familiar with them, and of course (I assume) they work more like math. What's interesting is that everything else seems to do it differently; I'm not sure why or how that happened. Is it as simple as an early limitation (in terms of early computation and compilers) that stuck?


In Matlab, the = sign is an assignment operator too. I think assignment is so common because load/compute/store is what the hardware does, and so languages tend to be built around computing a value and then assigning it.

You could, I guess, in any language that allows operator overloading (Python would be a good one) define operators that actually return a function for the computation. This just seems like a really complicated way of defining a function, though.
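A minimal sketch of that overloading idea in Python; the `Expr` class and the `env` dict standing in for a mutable variable are both inventions for this example:

```python
class Expr:
    """Overload + so combining Exprs builds a deferred computation, not a value."""
    def __init__(self, fn):
        self.fn = fn

    def __add__(self, other):
        if isinstance(other, Expr):
            return Expr(lambda: self.fn() + other.fn())
        return Expr(lambda: self.fn() + other)   # Expr + plain number

    def __call__(self):
        return self.fn()

env = {"x": 3}                 # a mutable slot to stand in for the variable x
x = Expr(lambda: env["x"])
y = x + 2                      # no arithmetic happens here; y is a new Expr
print(y())                     # 5
env["x"] = 10
print(y())                     # 12
```

As the comment says, this is really just a roundabout way of defining a function, which is also how expression-template libraries like Eigen work under the hood.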

Or possibly a template library like Eigen could be thought of in terms of "building a computation." Maybe CUDA has something like this? I'm not sure at all.

An aspect of my case specifically, which is probably not a coincidence, is that I usually work with Electrical Engineering students. They'll typically have seen some very light Verilog in their intro classes. I'm definitely not a Verilog expert, but you can use it to define combinational circuits; in that case you are literally describing the wires and gates that the computation flows through, so the left hand side does change based on the right hand side, continuously. At least until you write to a clocked register. And of course in a real circuit, there'd be some signal propagation time...


As with a lot of things, the answer is Excel.



