
I remember first encountering classes. I simply could not understand what they were from reading the documentation. Later, I picked up Bjarne's C++ book, read it, and could not figure out what classes were, either.

Finally, I obtained a copy of cfront, which translated C++ to C. I typed in some class code, compiled it, and looked at the emitted C code. There was the extra double-secret hidden 'this' parameter. Ding! The light finally went on.
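
Roughly, it turned the member function into an ordinary C function that takes the object explicitly. A hand-written sketch of the idea (illustrative only, not cfront's actual output; the names are made up):

    struct Counter { int n; };

    /* C++: void Counter::bump(int k) { n += k; }
       becomes, in spirit, a plain C function with the hidden
       parameter made visible: */
    void Counter_bump(struct Counter *this, int k)
    {
        this->n += k;   /* "n" was really "this->n" all along */
    }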

Years later, Java appears. I couldn't figure that out, either. I thought those variables were value types. Finally, I realized that they were reference types, as they didn't need a *. I sure felt stupid.

Anyhow, I essentially have a hard time learning how languages work without looking at the assembler coming out of the compiler. I learn languages from the bottom up, but I've never seen tutorials written that way.




I understand this thought process, but in my opinion it's the wrong way to think about software concepts. Understanding what a bridge is doesn't mean knowing how to build one, and in fact tying your understanding to a certain implementation of a bridge just limits your ideas about what is, in fact, an abstract concept.

We understood functions as "mappings" between objects for hundreds of years, and when programming came along it gave us new ways to think about functions, but being able to "make" a function in hardware/software doesn't actually change what a function is at its core.

There's a reason why computer science professors explain concepts at a high or abstract level and don't jump into implementation to help students understand them. It's because the concepts ARE the high-level meanings, and if you need to see an implementation then you're not really understanding the idea for what it is.

If an idea stays abstract in your mind, it gives you more flexibility with how you apply it and more ways to find use out of it. But it does take a mindset shift, and is an actual learned skill, to be able to see something as purely abstract and accept that how it's made doesn't matter.

-- Edit - just realized who I was replying to. So take this comment as not meant to be lecturing, but just a 2c to offer :).


> if you need to see an implementation then you're not really understanding the idea for what it is.

I 100% disagree. At the very least, I think you're wrong if your assumption is that such a statement applies in general. That statement certainly doesn't fit me, nor many people I've taught in the past. I got my PhD in (pure) mathematics and I could only understand high level abstractions _after_ I worked through many concrete examples. I know the same applies for many successful mathematical researchers because we discussed the subject at length. Now such a statement certainly does apply for _some_ people (I've taught them as well), but certainly not all.

If you're someone that likes this sort of abstract thinking, that's great. If you're someone that needs concrete examples to understand, that's great too. The real lesson is that everyone learns differently.


But there's a difference between trying to understand, say, a theorem by applying it in concrete situations and by studying its proof.


> But there's a difference between trying to understand, say, a theorem by applying it in concrete situations and by studying its proof.

There may be a difference or there may not. You could study a proof within the context of a specific example. That is how I usually would do it. But yes of course it's possible not to do like that (many people don't study proofs that way). In any case, I don't really understand your point.


I was drawing an analogy, in this particular context, between studying the proof and taking something apart.


I didn't understand how engines worked until I took them apart, either. I was taking things apart to understand them long before computers :-)

But the notions of sending a "message" to a "method" just was way way too handwavy for me. I like examining the theory after I learn the nuts and bolts.

> Understanding what a bridge is doesn't mean knowing how to build one

If you don't know how to build one, you don't understand them. Architects who get commissions to design innovative skyscrapers definitely know how to build them.


Some months ago I had to learn React for a project. I was struggling a lot. The documentation was so poor. And everything looked so inconsistent (it still does...) and the "solutions" on stackoverflow looked so arbitrary. Until I remembered the lessons I had learned in the past and sat down and studied how it worked internally. Then things started to make sense.

I think it's an advantage to be able to mentally map high-level constructs to low-level operations. (Edit) Learning the low-level stuff first can help to understand what problems high-level languages are trying to solve. For example, the first languages that I learned were assembly and BASIC. Many people said that learning such low-level languages would make it harder for you to learn abstract thinking and structured programming with high-level languages. For me, it was quite the opposite. Writing complex programs in BASIC was so cumbersome, it made me appreciate programming in C with local variables and data structures. After mastering function pointers in C and discovering that you can elegantly express operations on "living" data structures with them (I wrote simple game engines and world simulations for fun), the concept of messages and methods in OO languages looked so natural when I first learned about them. And once you witnessed the mess of complex interdependencies in large programs (with or without multithreading), immutability and functional programming looked like the right way.
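
For instance, something in the spirit of what I mean (a made-up sketch, not code from those old engines): a struct carrying a function pointer already reads a lot like an object with a method.

    #include <stdio.h>

    /* an "entity" that carries its own update behavior as data */
    struct Entity {
        int hp;
        void (*update)(struct Entity *self);   /* the "method" */
    };

    static void decay(struct Entity *self) { self->hp -= 1; }

    int main(void)
    {
        struct Entity e = { 10, decay };
        e.update(&e);              /* reads almost like sending a message */
        printf("%d\n", e.hp);      /* prints 9 */
        return 0;
    }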


> Writing complex programs in BASIC was so cumbersome, it made me appreciate programming in C with local variables and data structures

My language trajectory was approximately from BASIC to Pascal to C to Common Lisp, but I had a very similar reaction. My move from C to Common Lisp probably had the greatest increase in appreciation because a task I semi-failed to complete in three years of C programming took me six months in CL, which was perfectly suited to the task in hand.

(the C task was postgraduate work in AI, in the late 1980s. As well as the importance of choosing the right language for the task, I also learned a lot about the importance of clearly agreed requirements, integration tests and software project management, none of which really existed on the four-person project I was working on).


On the other hand, you can walk over a bridge any number of times without understanding the nuts and bolts of it. There are plenty of ways to build a bridge: there are static concrete bridges, there are bridges that can open, there are hanging bridges and a lot more variants. But for the people making use of them, the implementation matters a lot less than the purpose - connecting two places.

But yes, you are of course right in that if you build bridges, you need to understand the mechanics, and someone that builds a language of course needs to have a deeper understanding of how the underlying abstractions work together than most of the users will have.


Thank you, what you wrote is exactly what I meant.


Is it possible to really learn how programming concepts work, though? Modern optimizing compilers are pretty amazing, and just seeing the assembly output may make a concept harder to grasp.

To your bridge analogy: at this point what we are saying is "give me a method to traverse this river", and compilers are either building bridges, shooting out maps to fallen trees, or draining the river altogether. If you looked at that output you might consider "traversing rivers" to only be walking over natural bridges.

This gets even more sticky when talking about types. Computers don't care about types, they are strictly a concept to help developers.

Or to your class point, would you know better what a class does if you pulled up godbolt and found that the this pointer is completely removed? You might come to the mistaken conclusion that classes are simply a name-spacing technique.


> Is it possible to really learn how programming concepts work, though? Modern optimizing compilers are pretty amazing, and just seeing the assembly output may make a concept harder to grasp.

They usually have debugging options that let you read the internal steps (SIL, LLVM, GIMPLE, etc). That can be easier to understand than full asm, but also, the asm can't hide anything from you unless it's obfuscated.


> the asm can't hide anything from you unless it's obfuscated.

I think that misses the point. It's not the case of the ASM hiding things, it's a case of the optimizing compiler erasing things.

Here's a simple example: You say "Ok, the time I need to wait is 30 minutes. This method takes seconds, so I'll send in 60 (seconds in a minute) * 30 (minutes)." The compiler is free to take that and say "Oh hey, 60 * 30? I know that's simply 1800, so I'll put that there instead of adding the instructions for multiplication".
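
For instance (an illustrative sketch; the exact output depends on the compiler and flags):

    /* source: the intent is spelled out... */
    int wait_seconds(void)
    {
        return 60 * 30;   /* 30 minutes, in seconds */
    }

    /* ...but with optimization on, a typical compiler emits the
       equivalent of "return 1800;" - the multiplication never
       shows up in the assembly at all. */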

Trying to learn how something like that works from the ASM output would leave you confused trying to reason where that "1800" came from. It's not hidden, it's not obfuscated. It's simply optimized.

That's a simple example, but certainly not the end of the optimizations a compiler can apply. The ASM isn't lying, but it also isn't telling the whole story. The very nature of optimizing compilers is to erase parts of the picture that are ultimately only there for the programmer's benefit.


That's true, but often the things it erases are what's confusing you. For instance, it can remove highly abstracted C++ templates and dead code. Or if you don't know what a construct does, you can compile code with and without it and see exactly what it does.

Often new programmers think asm/low level programming is scary, but because it's so predictable it's actually quite easy to work with… in small doses.


> But the notions of sending a "message" to a "method" just was way way too handwavy for me. I like examining the theory after I learn the nuts and bolts.

Ooh-- are we talking about Smalltalk here? Java?

C'mon, Walter. You can't tell half a ghost story and then say goodnight. What did you find staring back at you deep in the void of all that object-orientedness?

Or, if this is a famous ghost story you've told before at least give us a link. :)


> sending a "message" to a "method"

I think you mean “to an object.”

The problem with an approach like yours is that implementations of abstract concepts often vary, and learning the “nuts and bolts” of one does not necessarily give you the true understanding of the concept itself.


Yeah, sending a message to an object via a method.


It’s because the “sending message” explanation is BS unless you’re talking about languages like Erlang. The OOP explanations don’t make sense because they are BS. In the real world you don’t ask your shoe to tie itself, you don’t ask a tree to chop itself etc. And in OOP languages you normally don’t ask a string to send itself over a socket or draw itself in 3D using OpenGL. The reality is that you have code operating on data. The same way you have an axe operating on a tree, or your hands operating your shoe laces. That’s it. Everything else is BS.


>>"But the notions of sending a "message" to a "method" just was way way too handwavy for me."

This. I've felt like I haven't been able to keep up with commodity programming because I can't stand the way today's drivers (MSFT) control mindshare by emphasizing frameworks over mechanics. I feel like it's a people aggregation move instead of facilitating more creative solutions. The most enjoyable classes I had in uni were 370 assembler and all the Wirth language classes taught by the guy who ran the uni patent office.


> But the notions of sending a "message" to a "method"

You mean to an “object”?

Also, this looks like a perfect example of theory vs practice, having different implementations of object communication as in classic OOP vs actors etc?


Have you taken Rust apart yet?


It would have helped me a lot. From the outside it looks very complicated and capricious. I really do get the sense that the little axiomatic ur-language inside is a lot more tractable - would love to have learned that first.


same, i'd like to see such an approach


Chris, a friend of mine in college (who is unbelievably smart) decided one day to learn how to program. He read the FORTRAN-10 reference manual front to back, then wrote his first FORTRAN program. It ran correctly (as I said, the man is very smart) but ran unbelievably slowly.

Mystified, he asked another friend (Shal) to examine his code and tell him what he did wrong. Shal was amazed that his program worked the first time. But he instantly knew what was wrong with it - it wrote a file, character by character, by:

1. opening the file

2. appending a character

3. closing the file

Chris defended himself by saying he had no idea how I/O and disk systems worked, and so how could he know that the right way was:

1. open the file

2. write all the characters

3. close the file

and he was perfectly correct. This is why understanding only the abstractions does not work.


> This is why understanding only the abstractions does not work.

I don't think your example shows that at all: If it didn't actually explicitly say in his Fortran reference manual that "The 'thing' you write between a file-open and a file-close can only be a single character", then... Sorry, but then AFAICS your example only shows that he didn't understand the abstraction that "file-open" just opens a file for writing, without specifying what to write. (Maybe he slavishly followed some example in the manual that only wrote one character?)

This needless sprinkling of file-open / file-close looks a bit like he did the work of a current optimising compiler (only here it was pessimising), "unrolled a loop"... So AIUI it shows the opposite of what you set out to show: Too concrete without higher-level understanding was what did him in.


> I understand this thought process, but in my opinion it's the wrong way to think about software concepts. ... just limits your ideas about what is, in fact, an abstract concept.

There's nothing abstract about language constructs. Learning about a language construct via translation is a perfectly fine way of clarifying its semantics, whereas an "abstract" description can easily be too general and fail to pin down the concept; it can also rely on unstated "rules of the game" for how the abstract construct will operate within the program, that not everyone will understand the same way.


I tend to agree with you in principle, but for me too, a lot of high-level features are better understood in terms of translations: objects, coroutines, closures...

Even in formal CS, it's common to define the semantics of a language by translation, which can give more insight than operational semantics.

Now that I think of it, I think the problem is that most languages are defined informally which can be imprecise and inadequate.

The translation provided by the compiler is the closest thing we have to a formal semantics, it's natural to rely on it.


I've found that C compiler documentation on how their extensions work to be woefully inadequate. The only way to figure it out is to compile and examine the output. Most of the time, the implementors can't even be bothered to provide a grammar for their extensions.


> The translation provided by the compiler is the closest thing we have to a formal semantics, it's natural to rely on it.

Which translation, though? Depending on your compiler flags, you may get very different translations, sometimes (if your program contains undefined behavior) even with dramatically different runtime effects.


Yes, you are correct. But you are wrong too.

People need complete understanding of their tools. And complete understanding includes both how to use the concepts they represent and how those concepts map into real world objects. If you don't know both of those, you will be caught by surprise in a situation where you can't understand what is happening.

That focus on the high level only is the reason we had a generation of C++ developers that didn't understand the v-table while being perfectly capable of creating one by hand. It's also why we have framework-only developers that can't write the exact same code outside of the magical framework, even when it's adding no value at all.


IMO this is a very elitist view of the software developer's job.

The analogy from the tangible world would be all the bridge engineers using "proven" / "boring" / "regulator endorsed" practices and techniques to build a "standard" bridge versus those constantly pushing the limits of materials and construction machines to build another World-Wonder-Bridge. There is nothing wrong with having both types of engineers.


> There is nothing wrong with having both types of engineers.

Acksherly, yes there is. In this context, there is: The world doesn't need engineers "constantly pushing the limits of materials" when building bridges; let's stick with proven, boring, regulator endorsed practices and techniques for that.


>There's a reason why computer science professors explain concepts at a high or abstract level and don't jump into implementation to help students understand them.

It's because they're trying to teach something to people who don't have anything to build on. Later in their education that'll be different. At the school I attended, the Object Oriented Programming class had Computer Organization as a pre-req, and the teacher would often tangent into generated assembly to help students understand what was happening.

Regardless of whether I agree with your thoughts about the ideal approach to understanding the concepts vs implementation, I live in the status quo where - sooner or later - a C++ programmer is going to encounter a situation where they need to know what a vtable is.


I did not understand what virtual functions were at all until I examined the output of cfront.

Oh, it's just a table of function pointers.
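
Something along these lines - a hand-rolled C sketch of the idea, not what cfront literally emits:

    struct Shape;

    /* the "vtable": just a table of function pointers, one slot per virtual function */
    struct ShapeVtbl {
        double (*area)(const struct Shape *self);
    };

    /* every object carries a hidden pointer to its class's table */
    struct Shape {
        const struct ShapeVtbl *vptr;
        double w, h;
    };

    static double rect_area(const struct Shape *s) { return s->w * s->h; }
    static const struct ShapeVtbl rect_vtbl = { rect_area };

    /* a "virtual call" is just an indirect call through the table */
    static double area(const struct Shape *s) { return s->vptr->area(s); }

    /* usage: struct Shape r = { &rect_vtbl, 3, 4 };  area(&r) == 12.0 */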


You must have just had the bad luck to have read the most appallingly stupid books. I think you might be even a little older than I, and I got into the game late-ish; those C++ manuals or specs you read were probably from the 1970s or 80s? By the time I learned (imperative- and inheritance-based[1]) OOP from the Delphi manuals in the late nineties the virtual method table was explicitly mentioned and explained, along with stuff like "since all objects are allocated on the heap, all object references are implicitly pointers; therefore, Borland Object Pascal syntax omits the pointer dereferencing markers on object variable names" (which you also mention about C above). I'm fairly certain this gave the reader a pretty good grasp of how objects work in Delphi, but I still know nothing about what machine language the compiler generates for them.

It's not the idea of top-down learning from abstractions that's wrong, it's being given a shitty presentation -- more of an obfuscation, it seems -- to learn about the abstractions from that is the problem.

___

[1]: So yeah, that whole Smalltalk-ish "messaging paradigm" still feels like mumbo-jumbo to me... Perhaps because it's even older than C++, so there never were any sensible Borland manuals for it.


Sometimes textbook descriptions are just needlessly obtuse. I have noticed that there are some concepts which I already understood but if I were first introduced to them via the textbook, I would have been hopelessly confused. I wasn't confused only because I recognized this as something I already knew.


I agree that everyone shouldn't need to know every implementation detail, but I'd argue there should be more emphasis on the low-level details in CS education.

Programming is often approached from a purely abstract point of view, almost a branch of theoretical mathematics, but imho in 99% of cases it's better understood as the concrete task of programming a CPU to manipulate memory and hardware. That framing forces you to consider tradeoffs more carefully in terms of things like time and performance.

You don't need to be able to hand-translate every line of code you write into assembly, but in my experience you write much better code if you at least have an intuition about how something would compile.


I think that's the difference between computer science and programming. Yes, you'll be a better programmer if you focus on the low-level details, but you'll be a worse computer scientist.

I guess, if your goal is to write optimizations, focus on details. If your goal is to find solutions or think creatively, focus on abstractions. Obviously, there’s a lot of overlap, but I’m not sure how else to describe it.


I disagree that you'll be a worse computer scientist. Computer science doesn't happen in Plato's heaven, it happens in real processors and memory.

I tend to think focusing on abstractions is almost always the wrong approach. Abstractions should arise naturally as a solution to certain problems, like excess repetition, but you should almost always start as concrete as possible.


I would disagree with where you are drawing that line. I would say that computer science does happen in Plato's heaven; software engineering happens in real processors and memory. But most of us are actually software engineers (writing programs to do something) rather than computer scientists (writing programs to learn or teach something).


I agree that there is some value to purely theoretical work, but I think this is over-valued in CS. For instance, in the first year of physics instruction at university, problems are often stated in the form of: "In a frictionless environment..."

I think a lot of problems are created in the application of computer science because we treat reality as if there are no physical constraints - because often it is the case that our computers are powerful enough that we can safely ignore constraints - but in aggregate this approach leads to a lot of waste that we feel in every day life.

I think incremental cost should play a larger role in CS education, and if every practitioner thought about it more we would live in a better world.


> I think a lot of problems are created in the application of computer science because we treat reality as if there are no physical constraints - because often it is the case that our computers are powerful enough that we can safely ignore constraints - but in aggregate this approach leads to a lot of waste that we feel in every day life.

ObTangent: Bitcoin.


Undergrad "CS" education, probably for better (considering the career paths and demand), is more about teaching what you call software engineering than what you call computer science.


My company fired two Computer Science PhDs because they knew the theory brilliantly but couldn’t turn it into code. That’s the problem with only learning the theory.


This depends heavily on the context. In The Art of Computer Programming, the analysis of algorithms is done in terms of machine code. On the other hand, the proverbial centipede lost its ability to move as soon as it started wondering about how it moves.


I tend to think you should be able to go back and forth between mental models. Like obviously when you're thinking through how to set up business logic, you should not be thinking in terms of registers and memory layouts.

But when you're considering two different architectural approaches, you should, at least in broad terms, be able to think about how the machine is going to execute that code and how the data is roughly going to be laid out in memory for instance.


The issue with viewing programming concepts as purely abstract is that the abstractions have varying degrees of leakiness. Even with the most mathematically pure languages like Haskell, you run into implementation details like space leaks, which you have to understand to build reliable systems.

There’s certainly something to be said for abstract understanding, but one thing I’ve learned in software and in business is that details matter, often in surprising ways.


> I understand this thought process, but in my opinion it's the wrong way to think about software concepts.

You cannot really tell someone that the way their brain learns is the "wrong way". Different people's brains are wired differently and thus need to learn in different ways.

> Understanding what a bridge is doesn't mean knowing how to build one, and in fact tying your understanding to a certain implementation of a bridge just limits your ideas about what is, in fact, an abstract concept.

Software development is about more than just understanding what something is. It's about understanding how to use it appropriately. For some people, simply being told "you use it this way" is enough. Other people like to understand the mechanics of the concept and can then deduce for themselves the correct way to use it.

I also fall into that category. While I'm not comparing my capabilities to Bright's I do very much need to understand how something is built to have confidence I understand how to use it correctly.

Neither approach is right nor wrong though, it's just differences in the way our brains are wired.

> There's a reason why computer science professors explain concepts at a high or abstract level and don't jump into implementation to help students understand them.

Professors need to appeal to the lowest common denominator so their approach naturally wouldn't be ideally suited for every student. It would be impossible to tailor classes to suit everyone's needs perfectly without having highly individualised lessons, and our current educational system isn't designed to function that way.

> If an idea stays abstract in your mind, it gives you more flexibility with how you apply it and more ways to find use out of it. But it does take a mindset shift, and is an actual learned skill, to be able to see something as purely abstract and accept that how it's made doesn't matter.

The problem here is that abstract concepts might behave subtly differently in different implementations. So you still need to learn platform specifics when writing code on specific platforms, even if everyone took the high level abstract approach. Thus you're not actually gaining any more flexibility using either approach.

Also worth noting that your comment suggests that those of us who like to understand the construction cannot then derive the abstraction afterwards. Clearly that's not going to be true. The only difference between your approach and Bright's is the journey you take to understand that abstraction.


I like to understand the construction too, it's why I'm in engineering. I'm just saying it shouldn't be necessary in order to understand an idea. For me, it was a crutch I used for years before I grokked how to decouple interfaces from implementations because I naturally understand things better after I build them. But if you use an implementation to understand an idea, you couple implementation to interface in your mind, and so it changes your understanding.


> I'm just saying it shouldn't be necessary in order to understand an idea

Nobody said it is necessary. Some folk just said they find it easier learning this way.

> But if you use an implementation to understand an idea, you couple implementation to interface in your mind, and so it changes your understanding.

That's a risk either way. It's a risk that if you only learn the high level concept you miss the detail needed to use it correctly in a specific language too (OOP can differ quite significantly from one language to another). And it's also a risk that then when you learn that detail you might forget the high level abstract and still end up assuming the detail is universal. If we're getting worried about variables we cannot control then there's also a risk that you might just mishear the lecturer or read the source material incorrectly too. Heck, some days I'm just tired and nothing mentally sinks in regardless of how well it is explained.

There are infinitely many ways you could teach something correctly and the student could still misunderstand the topic. That's why practical exercises exist. Why course work exists. Why peer reviews exist. etc.

And just because you struggled to grasp abstract concepts one specific way it doesn't mean everyone else will struggle in that same way.


I am not a child psychologist, so take all of this with a grain of salt. I believe children first learn concepts by looking at and playing with concrete things first. "Oh look at this fun thing... Oh whoops I moved it, it looks slightly different, but if I rotate it, it looks like it used to... It doesn't really taste like anything, but it feels hard... Whoa, what's this new thing over here? Oh wait, this is the same size and shape as the thing I played with previously... In fact it behaves just like the first thing did. Oh cool, there's a whole stack of them over here, I bet they work just like the first things did!" This is how one might interpret a baby's first interactions with blocks. Later in life, they might find out about dice and understand some similarities. Later, still in school, the kid learns about cubes in geometry class, and can think back to all the concrete hands on experience he had and see how the various principles of cubes apply in real life.

So, people learn by experiencing concrete things first, and then grouping all those experiences into abstract concepts. Sometimes (ok, often) they'll group them incorrectly: Kid: "This thing doesn't have fur and moves without appendages. It's a snake. Whoa, look at this thing in the water, it moves without appendages either! It must also be a type of snake." Teacher: "That's an eel, not a snake." Kid: "oh. I guess snakes are for land and eels are for water" Teacher: "Water Moccasin is a type of snake that is quite adept in the water." Kid: "oh. They look kinda the same, what's the difference?" Teacher: [performs instruction]

This form of learning by compiling all sorts of concrete things down into a few abstract concepts is so powerful and automatic that we do it ALL THE TIME. It can even work against us, "overtraining" to use an ML term, like with our various biases, stereotypes, typecasting of actors ("this guy can only ever do comedies"). Sometimes folks need a little help in defining/refining abstract concepts, and that's the point that teachers will be most helpful.

So, for me anyway, and I suspect many others, the best way to learn a concept is to get many (as different as possible) concrete examples, maybe a few concrete "looks like the same thing but isn't", and THEN explain the abstract concept and its principles.

Or, to explain the process without words, look at Picasso's first drawing of a dog, and the progressively shinier simpler drawings until he gets to a dog drawn with a single curvy line.


I don't really buy this. It's like saying we should teach about fields before addition of real numbers, or about measure spaces before simply C^n. The most abstract version of a concept is usually much more difficult to grok.


You've never seen tutorials written that way because roughly nobody but you learns programming languages from the bottom up. There is just no demand.

By the way, where can I read a D tutorial from the bottom up?


I just added a -vasm switch to the dmd D compiler so you can learn it from the bottom up!

https://news.ycombinator.com/item?id=30058418

You're welcome!


Walter, watch out. You want to talk about dumbing down? California is considering a new law in which every computer language must have a keyword 'quine' which prints 'quine' . And none of this looking under the hood stuff. That's doing your own research. Trust the computer science! :)


Is there really no demand? Or do those of us who like to learn that way just get used to researching these things ourselves and quietly get on with it? Many of the existing tutorials are at least a good starting point to teach engineers what topics they need to examine in more detail.

Anecdotally, when I've mentored junior engineers I've had no shortage of people ask me "why" when I've explained concepts at a high level; them preferring I start at the bottom and work my way up. So I quite believe there could be an untapped demand out there.


There’s a difference between understanding something and learning how and why it works the way it does. You can understand how a compilation pipeline works without ever working with low-level code and without writing a compiler yourself. You can walk across a bridge and understand that it connects point A with point B without understanding how a specific bridge has to be constructed. A concrete implementation is just an implementation detail, and if you focus too much on it you’ll get tunnel-visioned instead of understanding the concept behind it.

EDIT: And I say that as someone who likes both learning and teaching from the ground-up. But there’s no demand for it because that’s not how you efficiently learn the concepts and understand the basics so you can take a deeper dive yourself


> you’ll get tunnel-visioned instead of understanding the concept behind it

You might have gotten tunnel-visioned but it's not a problem I've suffered from when learning this way. And why do you think I cannot understand the concept behind the code after reading the code? If anything, I take the understanding I've grokked from that reference implementation and then compare it against other implementations. Compare usages. And then compare that back to the original docs. But usually I require reading the code before I understand what the docs are describing (maybe this is due to my dyslexia?)

Remember when I said everyone's brain is wired differently? Well there's a lot of people today trying to tell me they understand how my brain works better than I understand it. Which is a little patronising tbh.


> You've never seen tutorials written that way because roughly nobody but you learns programming languages from the bottom up.

I am indeed a unique snowflake.


I also have a hard time learning concepts if there are handwavey parts. I remember them by recreating the higher-level concepts from lower-level ones at times.


To me, the abstraction is an oversimplification of actual, physical, systemic processes. Show me the processes, and it's obvious what problem the abstraction solves. Show me only the abstraction, and you might as well have taught me a secret language you yourself invented to talk to an imaginary friend.


> abstraction is an oversimplification of actual

I think it’s an oversimplification of what abstraction is.


I don't believe most productive programmers learned the quantum physics required for representing and manipulating 1s and 0s before they learned how to program. Abstractions are useful and efficient.

You're more comfortable with a certain level of abstraction that's different from others. I can't endorse others that try to criticize your way of understanding the world, but I'd also prefer if some people who in this thread subscribe to this "bottom up" approach had a bit more humility.


I think part of it comes from believability, or the inability to make a mental model of what is going on under the hood. If something seems magical and you don't really understand what is going on, it can be hard to work with because you can't predict its behavior in a bunch of key scenarios. It basically comes down to what people are comfortable with as their axiom set. It gets really bad when the axiom set is uneven when you're teaching it, and some higher abstractions are treated as axiomatic / hand waved, while other higher abstractions are filled in. This is also probably an issue for the experienced, because they have some filled-in abstractions that they bring from experience, so their understanding is uneven and the unevenness of their abstraction understanding bugs them.

Like limits in calculus involving infinity, or dividing by an unspecified number: it seems non-functional or handwavy in itself. Like how the hell does that actually function in a finite world then? Why can't you actually specify the epsilon to be a concrete number, etc? If you hand wave over it, then using calculus just feels like magic spells and ritual, vs. actual understanding. The more that 'ritual' bugs you, the less you're able to accept it and the more of a blocker it becomes. This can be an issue if you learned math as a finite thing that matches to reality for the most part.

For me to solve the calculus issue, I had to realize that math is basically an RPG game, and doesn't actually need to match reality with its finite limits or deal with edge cases like phase changes that might pop up once you reach certain large number thresholds. It's a game and it totally, completely does not have to match actual reality. When I dug into this with my math professors, they told me real continuous math starts in a 3rd year analysis class, and sorry about the current handwaving, and no, we won't make an alternative math degree path that starts with zero handwaving and builds it up from the bottom.


The last time I learned a new programming language (Squirrel), I did so by reading the VM and compiler source code in detail rather than writing code. You get a far more complete picture of the semantics that way! I didn't even read much of the documentation first; it answered far too few of my questions. (Edit:) I want to know things such as: how much overhead do function calls have, what's the in-memory size of various data types, which strings are interned, can I switch coroutines from inside a C function called from Squirrel...


> I want to know things such as: how much overhead do function calls have, what's the in-memory size of various data types, which strings are interned, can I switch coroutines from inside a C function called from Squirrel...

So is that a problem with learning from abstractions, or just simply a problem that this stuff isn't mentioned in the manual?


I do. I recommend it as a way to avoid thinking Haskell is magic, which a lot of people seem to be convinced of. GHC has pretty good desugared printing options.

I'm not sure how to view asm for HotSpot or a JavaScript engine though.


I like my abstractions to be hidden, but I also like to be able to peek under the hood. That's one of the problems of C++ templates, sometimes I want to look at the expanded code.

The GNAT Ada compiler has an option to output a much-simplified desugared code. Not compilable Ada, but very inspectable unrolled, expanded code. Makes for a great teaching tool. 'Aaaaaaah this generic mechanism does that!'

Link https://docs.adacore.com/gnat_ugn-docs/html/gnat_ugn/gnat_ug... look up -gnatG[=nn]... Good stuff.


That's how I learnt C too. Couldn't grok how pointers worked. Took a few months to work with assembly. Returned. Didn't have to read any C tutorial. Everything came naturally


Fortunately, I picked up the K+R book after I was an experienced PDP-11 assembler programmer. I had never heard of C before, and basically just flipped through the pages and instantly got it. I quit all the other languages then, too, and switched to C.


To be fair though that's just an extremely well put together book. It's exceptional.

I happen to have the Second Editions of both Kernighan & Ritchie and of Stroustrup's much more long-winded book about his language sitting near this PC.

Even without looking at the actual material the indices give the game away. If I am wondering about a concept in K&R and can't instantly find it from memory, the index will take me straight there. Contrast Stroustrup where the index may lack an entry for an important topic (presumably it was in great part or entirely machine generated and the machine has no idea what a "topic" is, it's just matching words) or there may be a reference but it's for the wrong page (the perils of not-so-bright automatic index generation) and so the reader must laboriously do the index's job for it.

Now, today that's not such a big deal, I have electronic copies of reference works and so I have search but these aren't books from 2022, Stroustrup wrote his book in 1991 and the 2nd edition of K&R is a little older. This mattered when they were written, and when I first owned them. K&R is a much better book than almost any other on the topic.

The book, I would argue, actually holds up much better in 2022 than the language.


> the perils of not-so-bright

Fortunately, my parents defined me out of that category.


There are apparently whole books written about C pointers. It's definitely a topic where the best (?) way to teach it, in my view, is to sit there and just force a student to watch everything I do while I answer questions since you need a push over the activation energy to be able to work things out yourself.


> There are apparently whole books written about C pointers.

Spend 30 minutes teaching him assembly, and the pointer problem will vanish.
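
E.g. the whole concept in a few lines (illustrative; assumes the usual flat-memory picture):

    #include <stdio.h>

    int main(void)
    {
        int x = 7;
        int *p = &x;    /* p holds the address of x - just a number naming a memory cell */
        *p = 8;         /* store 8 at that address; x is now 8 */
        printf("%d\n", x);   /* prints 8 */
        return 0;
    }

    /* in assembly terms: load the address of x into a register, then
       store 8 through that register - nothing more mysterious than that */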


Unfortunately that's not quite how C pointers work - there are also things called pointer provenance, type aliasing, and out of bounds UB.


You’re very old school… I love it.

Edit: I just realized from the sister comment who I was replying to. Old school charm for sure. Now I love it even more.


FWIW, thanks a whole lot!

As an engineer in my forties it is somewhat encouraging to read that you felt the same way even if it was about different topics.

For me, these days it is about frontend generally, Gradle on backend and devops. So much to learn, so little documentation that makes sense for me. (I'm considered unusually useful in all projects I touch it seems but for me it is an uphill struggle each week.)

I always win in the end even if it means picking apart files, debugging huge stacks and a whole lot of reading docs and searching for docs, but why oh why can't even amazingly well funded projects make good documentation...?


Just try, my pretties, just try to understand how exception handling actually works without staring at a lot of assembly.


Ah, doesn’t it just fly over to the nearest “catch”?

Btw, the worst misunderstandings I’ve seen were not lacking knowledge, they actively believed in some magic that isn’t there if you dig deeper. That’s why I still think that teaching at least basic assembler is necessary for professional programming. It can’t make you a low-level bare metal genius, but it clears many implicit misconceptions about how computers really work.


I recently picked up C after years of Python, devops and JavaScript. I realized it's simply impossible for me to understand the tradeoffs made when other languages are designed, or to understand my Unix-like operating system and other parts of it, without knowing enough C. My next target is of course assembly and the compiler. And if anything, I know I want to stay away from any kind of sugar-syntax and unnecessary abstractions on top of basic computer and programming concepts.


Stack unwinding is a complicated process.

It's tempting to think of them as a kind of return value, but most languages do not represent them this way. (I believe it's a performance optimization.)

Flying to the nearest catch can also be complicated, as it's a block that involves variable creation, and thus possible stack changes. Again it's easier to model as a normal block break and then a jump, but that's not the usual implementation.


I always expect them to use (thread-) global state, not unlike good old errno just more structured. There's always at most one bubbling up so that's how I would do it.


But then you'll have a branch on every failable operation and slow down the happy path. This is not too different from passing the error as a value.

Instead, compilers use long jumps and manually edit the stack. I'm not sure it makes a lot of difference today, but branches were really problematic around the time OOP languages were becoming popular.
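
A crude sketch of the "jump to the handler" idea in C, using setjmp/longjmp (illustrative only; compiler-generated unwinding is considerably more involved than this):

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf handler;              /* where the nearest "catch" lives */

    static void might_fail(int bad)
    {
        if (bad)
            longjmp(handler, 1);         /* "throw": jump straight back to the handler */
        puts("happy path continues");
    }

    int main(void)
    {
        if (setjmp(handler) == 0) {      /* "try" */
            might_fail(1);
            puts("never reached");
        } else {                         /* "catch" */
            puts("caught");
        }
        return 0;
    }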


They're not bad these days if they're predictable. There's a code size cost, but there's also a cost to emitting all the cleanups and exception safety too.

For instance Swift doesn't have exceptions - it has try/catch but they simply return, though with a special ABI for error passing.


I believe that today a cold branch just gets predicted as not taken and stays as such because it never jumps (in terms of a predictor’s statistics, not literally).


From the assembly one can learn what compilers do. But it cannot teach how modern CPUs actually work. I.e. even with assembly, reordering, branch prediction, register renames, cache interaction etc. are either hidden from the code or exposed in a rather minimal way.


Right, in particular this is vital for Concurrency. In the 1990s my class about multi-tasking began by explaining that the computer can't really do more than one thing at a time. But in 2022 your computer almost certainly can do lots of things at the same time. And your puny human mind is likely not very well suited to properly understanding the consequences of that.

What's really going on is too much to incorporate into your day-to-day programming, and you'll want to live in the convenient fiction of Sequential Consistency almost all the time. But having some idea what's really going on behind that façade seems to me to be absolutely necessary if you care about performance characteristics.


Then there is not much difference between learning C and assembly. Both are languages for an abstract machine that has less and less relation to how things are done for real.


That is true.

But before I learned programming beyond BASIC, I took a course in solid state physics which went from semiconductors to transistors to nand gates to flip flops to adders.

Which made me very comfortable in understanding how CPUs worked. Not that I could design something with a billion transistors in it, but at the bottom it's still flip flops and adders.
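
For instance, a 1-bit full adder built from nothing but NAND - illustrative C on 0/1 values, not a hardware description; the names are mine:

    /* inputs and outputs are 0 or 1 */
    static unsigned nand(unsigned a, unsigned b) { return !(a && b); }

    static void full_adder(unsigned a, unsigned b, unsigned cin,
                           unsigned *sum, unsigned *cout)
    {
        unsigned t1  = nand(a, b);
        unsigned axb = nand(nand(a, t1), nand(b, t1));   /* a XOR b */
        unsigned t2  = nand(axb, cin);
        *sum  = nand(nand(axb, t2), nand(cin, t2));      /* (a XOR b) XOR cin */
        *cout = nand(t1, t2);                            /* carry out */
    }

    /* e.g. full_adder(1, 1, 0, &s, &c) gives s == 0, c == 1 */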


> just fly over to the nearest “catch”?

No? (The stack unwinding is a whole process in and of itself.)


Well, I imagine that it would be possible by expressing its semantics using continuations. Implementing exception handling using call/cc seems like one of them favorite Scheme homeworks. And if you implement it that way, you should then know exactly what it does.


Although it's said that call/cc is a poor abstraction that's too powerful, and it'd be better to have its components instead.

https://okmij.org/ftp/continuations/against-callcc.html


Groovy? Gradle is a build tool for JVM.


Gradle.

Thankfully Groovy is not in the equation except for old Gradle files from before the Kotlin Gradle syntax existed.


A thousand times this! At the very least, I always want to have good mental model of how something probably works or mostly works even if I couldn't reproduce the implementation line-for-line. To me, it can almost be dangerous to have the power to use something without any idea of what's under the hood. You don't know the cost of using it, you don't have a good basis for knowing what the tradeoffs and reasons are for using it over something else, and the concept isn't portable if you need to work in a language or environment that doesn't have it.

If I come across a feature I like in a programming language, I usually find myself trying to figure out how to implement or emulate it in a language I already know (ideally one that won't hide too much from me). Implementing coroutines in C using switch statements, macros, and a little bit of state for example.
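
A minimal sketch of that switch trick (protothreads-style; the names are made up):

    #include <stdio.h>

    /* the coroutine's "program counter" and locals live in a state struct */
    struct coro { int line; int i; };

    /* each call resumes where the previous one left off, via switch-on-line */
    static int counter_next(struct coro *c)
    {
        switch (c->line) {
        case 0:
            for (c->i = 0; c->i < 3; c->i++) {
                c->line = 1;
                return c->i;          /* "yield" */
        case 1:;                      /* resume point, inside the loop */
            }
        }
        return -1;                    /* finished */
    }

    int main(void)
    {
        struct coro c = { 0, 0 };
        for (int v = counter_next(&c); v >= 0; v = counter_next(&c))
            printf("%d\n", v);        /* prints 0, 1, 2 */
        return 0;
    }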


Since I've taken cars apart and put them back together, that has been very helpful to me in improving my driving skills. I also know when a problem with the car is just an annoyance and when I have to get it fixed. I can often know what to do to get the beast home again, without having to call a tow truck.


Absolutely! I have similar stories with other abstractions too (virtual methods, protocols, exceptions etc)

I just never quite understood where this "don't worry about the implementation" is coming from, as well as the tendency to explain abstractions in general terms, with analogies that make little sense, etc. The "don't worry about the implementation" did so much harm to humanity by producing bloated, wasteful software.

In fact I think a good abstraction is one (1) whose implementation can be explained in clear terms, and (2) whose benefits become clear once you know how it's implemented.


I'm similar. I've narrowed down my learning to two things I need:

- Principles for how things fit together. This is similar to your comment about digging into the assembly. Understanding how something is built is one way of determining the principles.

- Understanding of why something is needed. I still remember back to first learning to program and not understanding pointers. I pushed through in reading the book I was using and eventually they talked about how to use it and it finally clicked.


My dad has had trouble understanding classes for decades, but he had mostly stopped programming during that timeframe as well so it wasn't something he was going to put much time into learning. Now, he's returned to programming more regularly but still is having trouble with classes. I figured the big problem for him is exactly what the problem for you was, the hidden this pointer.

I'd started working on manually writing the same code in both C++ and C, but your approach of using something automated is an even better idea. Showing the implicit this pointer isn't hard to do manually, but polymorphism is a bit more of a pain. But I think the best part about using a tool is that he can change the C++ code and see how it affects the emitted C. Being able to tinker with inputs and see how they affect the output is huge when it comes to learning how something works.


Well, there is a difference between understanding "what it does" and "how it does what it does," and conflating the two is often a mistake. I have seen people take complex code apart (e.g. by doing manual macro-expansion), and it was not just a waste of time, it, in fact, hindered their understanding of the framework as a whole.

When learning Git, I enjoyed reading a tutorial that explained it from the bottom up, but, in the end, having been shown, early on, what is merely an implementation detail created cognitive noise that is now hard to get rid of.


I had a similar experience in law school in one of my tax classes. I hit a couple things where I could just not get what the tax code, the IRS regulations, or my textbook were trying to tell me.

I went to the university bookstore and found the textbook section for the university's undergraduate business degree programs, and bought the textbook for an accounting class.

Seeing the coverage of those tax areas from the accounting point of view cleared up what was going on, and then I understood what was going on in the code and regulations.


Learning how double-entry accounting works is both very simple and extremely useful for understanding any finance related topics.


I think many programmers are "bottom up" (including me). I had a hard time understanding virtual methods until I read an example of how they were implemented; then I was able to understand what they were, and then the explanation of why they were useful.

I remember two lessons about networks at school. The first one was "top to bottom" (layer 7 to layer 1), and I understood nothing. Then there was another one, bottom up, and I finally understood networks.


>Anyhow, I essentially have a hard time learning how languages work without looking at the assembler coming out of the compiler. I learn languages from the bottom up, but I've never seen tutorials written that way.

Funny, that's similar to how I learned assembly! I wrote some small program, and then used lab equipment to monitor the bits flipping in the PC circuits...


> Years later, Java appears. I couldn't figure that out, either. I thought those variables were value types. Finally, I realized that they were reference types

I will resist replying to this.


Don't resist too hard, I don't want you to have an aneurysm!


this has been my approach to study also. some people are fine with what some might call "magic" and they never worry about lower level details. anyway if you want an approach from bottom up you should look at learning a lisp, especially common lisp

http://clhs.lisp.se/Body/f_disass.htm


When learning ReasonML it really helped to understand what Variants were by seeing the outputted JS code (being familiar with JS).


Yeah, I also always need to look at what the compiler does with the code you throw at it.


Sounds like you have a certain mental model of computation, and you can't understand other types of semantics. I suggest playing with a term rewriting language. If you can grok that without mapping it to your existing mental model, then other languages can be viewed through that lens much more easily.



