
Honestly, a lot of the confusion I've seen in novice programmers over the years comes from the fact that nobody explained to them (1) that a program is a sequence of instructions that the computer executes one after the other, and/or (2) that memory is like a long line of boxes, each of which can hold a byte.

Once someone understands (2), pointers are incredibly simple and obvious.
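
To make (2) concrete, here's a toy sketch (in Python, which is obviously not how a CPU exposes memory, but the mental model carries over): memory is a row of byte-sized boxes, and a pointer is nothing more than a position in that row.

  memory = bytearray(16)   # 16 boxes, each holding one byte (0..255)
  ptr = 4                  # a "pointer" is just an index into the row
  memory[ptr] = 42         # write through the pointer
  print(memory[ptr])       # read it back: 42
  print(memory[ptr + 1])   # "pointer arithmetic": the next box, still 0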




I have been surprised several times, when helping people write "their first program" (which, in science, isn't that rare), that it can take days to internalize why the ordering of statements in a program matters. It's deep in every coder's subconscious that a later line depends on the earlier ones and not the other way around, and I have had to consciously restrain myself from saying "whaaaaat?!" when I saw how my colleague tried to do something.


I'm glad you've learned to restrain yourself because that mistake makes sense if you're not used to imperative thinking. It doesn't benefit students to have instructors who are shocked by common/natural errors.

Math, as commonly encountered, is a collection of facts and relations. Its notion of a variable is very different from the one in CS and programming.

To a mather (one who uses/does math), this makes sense:

  x = 7 * y
  y = 6
  therefore x = 42
To a coder (in most conventional programming languages) that's an error: y is undefined at the point where x is assigned a value. And even if y did have a value, changing it later wouldn't change x, so who knows what x would end up being.

It makes sense to the mather because all the statements exist at the same time. To the coder, each statement is realized over a period of time: "y = 6" hasn't happened yet when "x = 7 * y" runs. This is the part that has to be communicated to the novice with a mathematical background.
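
For instance, a minimal sketch of those same three lines in Python (assuming a fresh interpreter, so neither name exists yet):

  # The mather's ordering fails: y doesn't exist when x is assigned.
  #   x = 7 * y   # NameError: name 'y' is not defined
  #   y = 6

  # The coder's ordering works, because y is bound before it is read:
  y = 6
  x = 7 * y
  print(x)  # 42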


There are non-imperative logic programming languages (Prolog) where something like that would work. And those of us who started with imperative coding are probably just as confused by them in the other direction.


That's why I qualified the statement with "conventional". Prolog is not a conventional language, in the sense that most programmers don't know it (and I'd wager close to half of programmers have never even heard of it, if not more). C, C++, Java, JavaScript, PHP, Python, Perl, Ruby, even Rust are all imperative languages at their core and represent the dominant group of languages in conventional use.

If the OP had been teaching people a language like Prolog, there wouldn't have been the same confusion.


Verilog and other HDLs work this way; it's not an accident that humans would start out this way too, since the natural world works in parallel.


I actually started writing a comment about that. Teaching VHDL (my experience) to most programmers is entertaining because they expect effects to be immediate, rather than applied collectively and simultaneously, mediated by a clock or other mechanism. So in an HDL:

  x <= y;
  y <= y * 20;
These two effects actually occur simultaneously, which is different again from what mathematically inclined learners might expect since there is still mutation. It is more akin to the common notation:

  x' = y
  y' = y * 20
Where the ' indicates the next value of x or y (as opposed to the derivative). Or perhaps:

  x_n = y_{n-1}
  y_n = y_{n-1} * 20
And many programming languages don't even have a good notation for this kind of parallel evaluation and assignment. Python does, with:

  x, y = y, y * 20
Common Lisp has psetf which permits this:

  (psetf x y
         y (* y 20))
Languages with pattern matching/destructuring bind are your best bet for writing this directly without needing temporary variables.
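
For comparison, here's a rough Python sketch of what you'd have to write if tuple assignment didn't exist (y = 3 is just an assumed starting value): a temporary has to keep the old y alive long enough for both right-hand sides to read it.

  y = 3
  old_y = y        # temporary holding the "previous" value of y
  y = old_y * 20   # y_n = y_{n-1} * 20  -> 60
  x = old_y        # x_n = y_{n-1}       -> 3
  print(x, y)      # 3 60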


What the typical programmer doesn't even realize is that this is an artifact of von Neumann architectures (and their assembly languages) and of the optimizations of early compilers, not an essential property of computation.

Spreadsheets, expression-rewriting languages, and other functional programming languages don't necessarily have this limitation, and declarations can be made in any order if the compiler supports it.
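
You can even see the idea from inside Python with a toy sketch (cells as functions - not a real spreadsheet engine, just the dependency-graph view): the textual order of the definitions stops mattering, because names are looked up at call time.

  def x():
      return 7 * y()

  def y():
      return 6

  print(x())  # 42, even though x was written before y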


I've noticed this as well, and I've concluded that the part that actually surprises new coders is mutability. In basically every domain where non-developers invent algorithms, everything is immutable - spreadsheets, math! Heck, even reading an article - the words you've already read don't change their meaning just because you've kept reading.

Mutability is really a result of the fact that computers are physical machines. So I think that if you're going to teach software development, you should either start at the machine architecture and build up (a la NAND to Tetris) or start with an immutable-by-default language and only dip into mutation as an "advanced" concept, to be used by experts and definitely not by beginners.

I endorse the second way, as I've seen it work very well, whereas I've seen more than one ship crash on the shore of understanding pointers...

Of course, my N is small.


I once had the surprising experience of having to help a postdoc with a PhD in mechanical engineering wrap his head around a block of Python code he was reading that looked like:

    x = f0()
    x = f1(x)
He was just mentally blocked on how all of these things could be true at the same time, and/or on how a variable could be used in setting its own value - in his mental model, assigning a new value to x would change the value that had already been passed to f1! His intuition was that this meant some kind of recursion-to-stability was at play.

All he needed was to be informed that this could be equivalently represented as

    x = f0()
    y = f1(x)
or

    x = f1(f0())


It is like a recipe. And in general it is the way we read books.



