I appreciate the succinct detail they go into regarding how the concept of “learning styles” has been debunked. I’ve always considered myself more of a visual learner, and seeing posts here and elsewhere about this supposedly not being a thing has never sat well with me (though I never delved into the actual research).
Turns out I do in fact agree with this explanation; that it’s more about what’s being taught that should dictate the how:
> While learners have preferred styles, effective instruction matches the content, not learning styles. A science class should use graphs to present data rather than verbal descriptions, regardless of visual or auditory learning styles, just like cooking classes should use hands-on practices rather than reading, whether learners prefer a kinesthetic style or not.
> I’ve always considered myself more of a visual learner, and seeing posts here and elsewhere about this supposedly not being a thing has never sat well with me (though I never delved into the actual research).
My understanding of the research is that your preference for visual learning is real. But your preference doesn't actually translate into better learning outcomes. I.e., you might prefer visual stimuli, but the research suggests you'll still learn the content just as fast if it's presented in other media.
I doubt the research accounts for neurodivergent folks, assuming it even correctly represents neurotypical ones.
As with many of these studies, they're treated as fact despite clearly not addressing the real-world problems.
Do I actually learn better visually? No idea, but it's also clear that I don't learn in the same way as the majority of people in the classes I attended, and I failed many exams despite being able to achieve the same outcomes.
I think the definition of 'learning' needs to be considered. Is it measured by an exam, or is it a true understanding of the subject taught? These are often two very different outcomes.
> I doubt the research accounts for neurodivergent folks, assuming it even correctly represents neurotypical ones.
I'm curious about this, do we have more recent studies about neurodivergence/neurotypical? I'd be curious how many people are neurodivergent (to significant degrees).
It’s trivial. Define it operationally as a collection of conditions (autism + ADHD + depression) and plot the trend. Ask an LLM if you can’t do it yourself. You’ll find the definition you choose matters a lot, but diagnosed autism has increased, and it’s worth researching that if you never have before!
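If you want to try it yourself, a minimal sketch of that plot might look like the following (the CSV file and its column names are hypothetical placeholders; you would feed it real prevalence figures, e.g. from published surveillance reports):

    # Minimal sketch: plot the trend of an operational "neurodivergence" definition.
    # Assumes a hypothetical prevalence.csv with columns: year, autism, adhd, depression
    # (diagnosed rates per 1,000 people). Substitute real data before drawing conclusions.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("prevalence.csv")
    df["combined"] = df[["autism", "adhd", "depression"]].sum(axis=1)

    df.plot(x="year", y=["autism", "adhd", "depression", "combined"])
    plt.ylabel("diagnosed rate per 1,000")
    plt.title("Trend of an operational 'neurodivergence' definition")
    plt.show()

Whichever columns you include, the definition you choose dominates the shape of the combined curve, which is exactly the point.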
I do not see how it would not. It appears to me that responding to exams is at once an expression and a practice of the learning. It's not as if a test situation is isolated from the overall learning process.
If a learner is not oriented towards written or read communication, a written exam may not reflect what they have actually learned. If a learner has problems with their practical skills being observed in action, a practical test may not reflect their actual skills or understanding.
> If a learner is not oriented towards written or read communication, a written exam may not reflect what they have actually learned
Being unable to communicate what you have learned is a serious disorder and not a learning style issue. Learning styles are about how you learn, not how you demonstrate what you learned.
But yes, people who are unable to write or read or very bad at writing have problems and tend to get extra support on written exams to make it fair for them, as the exam should test their knowledge and not their ability to write. For example they don't give blind people a zero just because they failed to read the exam.
Please see social constructivist or cognitivist learning theory.
Learning and expression of what is learned are part of the same learning process and it is entirely artificial to separate one of them to be part of learning style, and the other as not being part of it.
Virtually all learning is also learning how to express the subject being learnt.
> Being unable to communicate what you have learned is a serious disorder and not a learning style issue.
It's absolutely a learning style issue when those teaching are unable to pick up on problems with learning because the person being taught cannot effectively communicate how they are struggling.
I spent years trying to explain to teachers why I did not understand some of the concepts they were telling me, and they would often tell me to just 'accept it' and move on. Bad teachers perhaps, but that's all part of the learning and others seemed to be fine... so that is a learning 'style'.
Your suggestion of 'serious disorder' is also misplaced.
"Neurotypical" means "not neurodivergent". In the colloquial usage, a "neurodivergent" person is someone who is autistic or has ADD/ADHD. More generally, it means someone's cognition and behavior differ from the average to a clinically or socially significant degree, which could include e.g. schizophrenia, bipolar, etc. as well as autism or ADD/ADHD.
I've heard the term before but not researched the thinking behind it. From Google/Wikipedia I'm getting a confused picture and one I'm not sure I like. According to wikipedia the word has its origins in the idea of "neurodiversity", which it describes in these terms:
> Neurodiversity is a framework for understanding human brain function and mental illness. It argues that diversity in human cognition is normal and that some conditions classified as mental disorders are differences and disabilities that are not necessarily pathological.
I absolutely agree with all of that, but the word "neurotypical" seems to suggest that there is a large, probably majority group that can all be lumped together and tagged with a single label, separate from those who are "neurodivergent". I don't think I'm comfortable with that at all.
Given I've been diagnosed with Tourette's syndrome, I am presumably considered "neurodivergent" in this world view. However I don't think there is some "normal" that I deviate from. Everyone I know has an entirely unique mind, and despite the diagnosis I don't see mine as deviating in a way that puts me in a group distinct from the mainstream. I know people who think in ways that are arguably more idiosyncratic than mine, but who haven't been diagnosed with anything and would presumably be judged "neurotypical".
Neurodiversity sounds like a great idea from what little I've just read: stop stigmatising particular "syndromes" and acknowledge the uniqueness and variation present in all human minds. But dividing the world into "neurotypical" and "neurodivergent" people seems like the exact opposite of that. Am I missing something?
It's absolutely a spectrum, just like the individual conditions that make it up are spectra. (Everything is spectra. Is my friend who has arthritis in his otherwise working legs "physically disabled"? It depends on whether he's walking around a store or trying to run a marathon.)
"Neurotypical" here, when used as a binary adjective, means something like "close enough to the average on all the relevant spectra to not particularly benefit from special consideration". The exact line for where that is is going to be blurry and situationally dependent, because it's a shorthand for an approximation.
My usage is generally aimed at those who refuse to accept that the spectrum exists at all, and insist you are either 'normal' or 'disabled'.
You'd think there were not that many such folk, but unfortunately there are, especially within the older generation and among driven people who believe you're just not 'trying hard enough' or are 'making excuses'.
The irony being that many of them are also likely further on the spectrum than average.
It's more about the attitude and approach to life than the individuals. Many expectations are geared towards being 'Neurotypical' and do not cater for, or even acknowledge, the challenges of Neurodiversity.
> but the word "neurotypical" seems to suggest that there is a large, probably majority group that can all be lumped together and tagged with a single label, separate from those who are "neurodivergent". I don't think I'm comfortable with that at all.
I personally speculate that the vast majority of the population does have some form of cognitive diversity, it's just that most of it is undiagnosed (with little opportunity or no incentive or too low-intensity to get a formal diagnosis), not formally known to science for diagnosis, or not currently considered an illness/disability.
> But your preference doesn't actually translate into better learning outcomes. I.e., you might prefer visual stimuli, but the research suggests you'll still learn the content just as fast if it's presented in other media.
These two are not actually the same, despite the "i.e." connecting them. I'm sure you know that -- just pointing it out because it's a common sleight of hand when referencing science.
"Failed to show an improvement in learning speed" does not mean "successfully showed an equality in learning speed". The latter is very hard to prove, and is probably the null hypothesis -- we just assume it to be true without evidence to the contrary.
I don't see how you can divorce this from motivation. If I much prefer listening to content instead of reading about it, I'm going to be more motivated to learn the subject matter, which results in me learning more overall.
You will never learn how to cook by listening to it. You will never learn to ski by listening to instructions about it. You will never learn 3d graphics programming by listening to podcast courses.
You will learn a lot about Roman history by listening to it. You will learn a lot about the Spanish language by listening to it.
If you avoid learning things that you can't learn by listening, then you will only ever learn things that can be learned by listening and that will bias your perspective about learning anything.
That's not the point I'm making. The point is that people do have "learning preferences" and these have an ultimate effect on learning.
To use your example: some people prefer listening to Roman history podcasts. Others would rather watch a film. Still others prefer a book. If you want to maximize learning, it seems best to pick the style/format most suited to your preferences.
This is why trying to rank/judge/grade everyone by a uniform standard is almost universally terrible.
Students should be encouraged to try their best in every subject, allowed to make the mistakes they are naturally going to make at whatever level they are currently, and helped to improve over time. Punishing people for being less prepared than peers who did more practice or for making ordinary and expected mistakes actively gets in the way of their learning, as well as making them feel terrible. It's pretty bad for the students who are more prepared as well, as many of them internalize the idea that they are inherently good at some things and inherently bad at others, which is sometimes temporarily gratifying but often stops them from pushing themselves to try anything new or hard.
The problem here was too little testing, not too much. If they tested the students on music before they started they would have been put in classes where they belong and they could have gotten the teaching they needed.
I agree and disagree. I agree in that pressure is good; I disagree that audits are bad because of the lack of pressure. Auditing means the course is low priority, which means it will fall off when your busy schedule of "actual" courses has deadlines. That is not a reflection of your ability or interest in learning, but of your workload.
This is the main reason I hold resentment towards GEs. Not because I don't want to be a well-rounded person, but because when 3-4 other major classes are already crushing you, the last bit of "pressure" you need is some random music theory or history course quizzing you. I never really got the time to breathe in college, and taking my time would have been a $20k+ decision on top of the $80k I already had in debt. I literally could not afford to learn properly.
Moreover, the hope is that GEs would mean more well rounded students, liberal thinkers. There is little proof of that outcome being achieved. I wouldn’t want to scrap them though, out of fear of what less well rounded graduates might act like.
> I wouldn’t want to scrap them though, out of fear of what less well rounded graduates might act like.
They would act like Europeans and the rest of the world, where GEs aren't a thing. What that means depends on your biases, but it doesn't seem too bad to me. And since Silicon Valley is mostly foreign software engineers today, it doesn't seem like a bad thing for their performance either.
You're right about that, I took an economics class which turned out to be a Marxist indoctrination course. I wasn't interested in wasting time on fairy tales. I then satisfied the GE requirement by taking classes like financial accounting, which I expected to be useful in my career.
External pressure can work in the short term, but it disincentivises taking risk (which is where life-long learning happens), it steps on internal motivation (which is where life-long learning happens), and once the external pressure is removed, the interest in learning drops to further below where it was before the pressure was introduced.
So yes, it works in the short term, but I believe it's a net negative overall.
Grades are great, but not for the pressure they apply on students -- they are a measure of how successful the teacher has been in reaching their students!
> Grades are great, but not for the pressure they apply on students -- they are a measure of how successful the teacher has been in reaching their students!
The students do have some responsibility to learn the material.
> This is why trying to rank/judge/grade everyone by a uniform standard is almost universally terrible.
The point of grades on a universal standard ought not to have anything to do with the students; it should serve as a diagnostic metric for the teacher.
And it can be far more coarsely grained than it often is today. In fact, most teachers don't need a finer signal than the number of passing and failing students to figure out how well they are doing.
Much like I use the uptime percentage as a signal for which code needs bugfixes and how bold I can be when introducing new features, the teacher can use the fraction passing to determine what needs to be taught differently and how quickly to introduce new material. Of course, schools don't work in a way that makes teacher-led learning possible...
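To make that analogy concrete, here is a minimal sketch of the kind of coarse signal I mean (the thresholds are arbitrary assumptions for illustration, not pedagogy):

    # Minimal sketch: use the pass fraction as a coarse pacing signal, much like
    # using an uptime percentage to decide how bold to be with new changes.
    # The threshold values below are arbitrary assumptions, not evidence-based.
    def pacing_signal(passed: int, total: int) -> str:
        if total == 0:
            return "no data yet"
        pass_rate = passed / total
        if pass_rate < 0.6:
            return "reteach: revisit the current material differently"
        if pass_rate < 0.85:
            return "steady: introduce new material slowly"
        return "accelerate: the class is ready for harder material"

    print(pacing_signal(18, 30))  # 0.60 pass rate -> "steady: introduce new material slowly"

The exact cutoffs matter far less than having the signal at all and reacting to it.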
I often end up at this concept of the "real world" when talking about kids and their upbringing. I've always felt school needs to be much more individually tailored, and this connects to my real-world thinking about kids: I envision how the world "outside" of the family will treat my kids, and what they need to do well out there. The same thing applies to school, really. And in the real world we do not expect everyone to know the same things, do the same job, or have the same personality or talent as everyone else. So why should schools be like this? It's just stupid.
It's usually the teacher who creates the exam. I've seen plenty of bad teachers give extremely easy exams where everyone gets awesome grades even though no one understood anything. Standardized exams are few and far between.
I do agree that there is plenty of destructive stigma associated with repeating a class or grade, but there is no alternative, assuming your grades reflect your actual learning level (which tends to be the case for the low grades, in my experience, even if it doesn't for higher grades). You can't move on to a higher grade if you just didn't learn the basics, you'll be even worse off. What possible point would there be in trying to teach comparative literature to someone who didn't learn to read or write properly?
So, we need to get rid of the stigma, not the practice.
I agree with the sentiment. But I feel the sentiment is comparable to saying we need to rid society of violence. Maybe at a university level we can apply this, but we 1000% need to bring down the cost of tuition before considering spending more money at college.
For grade school, I'm at an impasse. I don't like complaining without offering something actionable, but the scale and existing inequalities of schools are so dire that I don't know where to start. The realistic answer over the decades has been to simply lower the bar over time, but that doesn't exactly help either.
You are telling me that giving them more education is punishment? No matter how you do it they get left behind; if they continue to get put in classes they aren't ready for, that is bad as well.
There are many reasons why students don't learn everything they can from a class, and inability to grasp the material is only one of them, yet it is the only reason that ensures they won't learn more the second time around.
It is a massive punishment due to the social consequences with peers.
Get better peers? What you're saying is that it's better for students to keep failing so as not to upset their milieu. Oftentimes, the best thing you can do for a struggling child is to take them out of the environment that's holding them down.
No, that one is actually a systematic result. That was the original reason why they stopped doing it: it did not help anything. Special help, additional tutors who are actually trained in behavioral and learning issues, those sometimes help. Keeping them back a grade, not so much.
> Get better peers?
The real world we live in does not provide better peers. Peers are other kids; they are how they are.
> What you're saying is that it's better for students to keep failing so as not to upset their milieu.
What I am saying is that when they are held back a grade, the system is keeping them failing. They don't get better. It does not magically turn them into better performing students. They will just suck in a way similar to the original suck, except that it also leads to them trying even less than before.
More time absolutely helps; people do much better the second time they take a class. Teachers are a good example: they typically didn't get good grades, but after having seen the class over and over so many times they get good enough to teach it. It works, repetition leads to mastery.
It is just that in practice it did not happen with elementary school students. Special help classes with more repetition and teachers trained in learning issues do help.
Also, the issues of failing students are not easily fixable with repetition. They often have attention issues, learning disabilities, behavioral issues, mental health issues, etc. that are not helped by repetition at all.
Where I am, those kids with deeper issues preventing them from learning as quickly as others often have to repeat a class, fail again, and then get sent to special schools.
In theory they should get more appropriate help there, but normally there is not more funding in these special schools and it's even harder because all of the pupils have bigger learning deficiencies. So often these kids are left behind.
The only solution that gives every kid an equal opportunity would mean a massive increase of funding of schools and teachers.
But even if that were to happen, we currently do not have enough well-educated teachers; the job is also not very popular due to high stress and low pay, and new teachers take many years to become available...
With the way our education structure is set up, yes. You fall behind your peers, and you are assumed to be dumb or even unteachable. If you're a rising senior, a bad grade (be it due to external or internal factors) can rescind future prospects. And after a certain point you have to be kicked out of a school for legal reasons, so now you need to find some other way to earn a GED.
If we're talking college, you now need to either spend more time and especially money to repeat a course, or drop out and give up entirely. it can also disqualify you from scholarships and grants, so it is a direct financial consequence in two ways. Unlike the workforce, you are not given adequate opportunity in academia to fall behind, let alone fail.
-----
I agree in theory that there should be no shame in needing to redo classes and reinforce your learnings, but current societal expectations in traditional education do not support such a mindset. Another reason I wish there was more awareness of, and accessibility to, paths outside of grade school -> university to figure out what you enjoy and how to learn it.
Same way we do now: based on whose parents can donate a new wing to the campus.
But sure, it's the same problem with any other prestigious venue. Demand far far far outstrips supply. So they don't really need to pick "the best" students. Merely students "over the bar of quality". There's no problem in the eyes of the venue, so there's nothing to change.
I think the implied assumption in this question is flawed to begin with, in that not everyone needs to be at a famous institution to succeed. But if you want my likely bad take: sports coaches actually have a pretty decent method of scouting by... well, scouting. Seek out local/state/national talent and nurture them years before an application goes in. If they can build a relationship, that's a personal referral that goes farther than any essay prompt.
It's the most flexible method because scouts can tailor from culture to culture, based on qualities that traditional education metrics wouldn't take into account.
It's a good idea! The Math Olympiad was a thing when I was in school. Just need to turn that into a more mainstream competition and build up a much larger culture and business around succeeding in it.
Of course, you need auditory input to study music (although interestingly, medieval education would have disagreed with this). But visual depictions also help a lot with music theory. Seeing the structure of, say, a sonata's first movement visually makes it easier to understand certain aspects of it. The same is true of understanding relationships between different keys (e.g. the circle of fifths).
> While learners have preferred styles, effective instruction matches the content, not learning styles. A science class should use graphs to present data rather than verbal descriptions, regardless of visual or auditory learning styles, just like cooking classes should use hands-on practices rather than reading, whether learners prefer a kinesthetic style or not.
I hear ya, but I still don't buy it. Teaching / learning is a fork of the communications heuristic:
"It's not what you say, it's what they hear."
Receivers (i.e., students) have a spectrum of receiving abilities, skills and expectations. Regardless of subject matter, to assume one size fits all (students) is (to put it bluntly) wrong.
Put another way: yes, the topic factors in, but ultimately it's about the *individuals* receiving that information.
One of my relatives was a learning styles proponent back in the 1980s and talked about it quite a bit. In my mind it feels like the conversation has moved from "here is a tool for you to consider" to "we have scientifically proven that you don't need to create lesson plans for each of The Five Formal Styles".
It feels like we might be missing the thread if we're able to talk about "debunking" a point of view.
Warning from Daniel Kahneman on "System 1" and "System 2". [1]
> I'm going to use "System 1" and "System 2", absolutely as homunculi. [...] They don't exist. [...] Don't look for them in the brain, because they are not two systems in the brain, of which one does one, and the other does the other. So why am I using this terrible language? I'm using it because I think it's helpful. It fits the way our minds work, and to explain the background of that decision--of why I use "System 1" and "System 2"--I refer you to a very good book. [...] It's by Joshua Foer and it's called "Moonwalking with Einstein". [2]
There is going to be some difference between solving problems in a specific domain and solving problems generally (which is what TFA argues for). And since we really care about the specific domain of software engineering, it makes sense to pry open that difference when possible.
However problem solving in the general case is very close to fluid intelligence and IQ. Some interpretations claim that intelligence in humans is just problem solving, and that problem solving is most of what is captured by g [0]. All problem solving will be positively correlated with all other problem solving, and you would never expect to see someone good at one, but not the other.
In section 9 they cite the research on programming ability and its (expected) relation to general intelligence.
I'm not sure how much of a distinction there is to draw here. Psychometrics has historically been filled with attempts to factor out additional clusters from things like g, e.g. multiple intelligences. Those findings often fail to replicate. Section 7 seems more like an attempt to draw a distinction without a difference, while section 9 seems like a standard summary of the research (like most things, a mixture of innate intelligence and cumulative experience).
> Their point is that you can't learn general problem solving.
they're wrong.
The same skills I use to track down a bug can be used to track down a parasitic draw in a vehicle.
If you take me, with my problem solving experience in software, and set me next to someone who is completely new to working on vehicles (IOW, we have the same experience working on vehicles), I'm going to pick it up faster and be better at it.
General problem solving is not just finding what is wrong with a machine, which is the domain you are referring to.
Here are some examples where that 'algorithm' might not give you a headstart:
How should I cook this meal?
How should I direct this presentation to maximize the chance that they buy the product/service?
What should I do to not starve if lost in the woods?
How can I detect friendships that are not beneficial for me in the long run?
How can I tell if this media content is true or false?
We all have some sort of familiarity with most types of domains, so you might be inclined to think it is due to your 'problem solving' skill.
However, the point of the article is that you can't transfer your skill level in one domain to another. It's easy to see: if it weren't that way, recruiting software engineers would be trivial; apply a test on a proxy and that would be your predictor of performance. This is not the case, as (afaik) not even big tech has hit the nail on the head with their recruiting processes.
> However, the point of the article is that you can't transfer your skill level in one domain to another.
This is like the study I saw in the 90's where they concluded heterosexual couples are more likely to have children than homosexual couples.
yeah, no shit.
The point _I_ was referring to is the idea that general problem solving isn't a skill.
The absolute best you can come up with is "well, it's not perfectly transferable!".
You know what else isn't perfectly transferable? The ability for a man to pleasure a woman; it turns out experience with a particular woman increases that skill.
And yet, no one in their right mind would ever argue that you can't generally be more, or less, skilled in bed.
I would say you can learn general problem solving. There are strategies that help with that, and IQ helps with finding patterns.
That said, it still would be junior-level problem solving in an unknown domain. An expert will always run circles around newcomers.
I most likely could switch fields, but I don't want to spend 5 years getting experience. Even if I have decent generic problem-solving skills it might take me less time, but still, I would have to be really interested in the topic.
> An expert will always run circles around newcomers.
That isn't true; there are many problems that less-than-mediocre experts fail to solve which a smart junior can manage to solve. Experts are great at common problems but not great at rarer ones; non-standard problems depend more on natural ability than experience.
Of course smart experts can also solve those problems, but you didn't say a smart expert, you just said experts in general.
The ability to generalize based on experience requires intelligence. That is why you have so many expert programmers who can't solve problems well, because they lack the ability to generalize their experience well enough to apply it to new problems.
> Experts are great at common problems but not great at rarer ones, non-standard problems depend more on natural ability than experience.
This is quite contrary to the very definition of "expert" in the study of expertise. Experts are recognised, among other things, because they are the people others come to when they are facing unusual and tough problems.
You may well have a point, but "expert" is not an appropriate word for what you are talking about.
> Experts are recognised, among other things, because they are the people others come to when they are facing unusual and tough problems.
When a person wants to make a website, they go to an expert on making websites. That expert doesn't have to be a genius, they just need to be good at making standard websites.
Or if that is hard for you to understand, consider what you see as an expert in other fields. When people say "go see an expert", they mean go see a doctor or a psychologist or similar, they don't mean go see the best doctor or the best psychologist, just someone who is trained and experienced in the field.
Or if they say "hire an expert" they mean hire someone who has worked on this kind of problem before, not someone who is particularly smart.
In a discussion specifically about learning -- which this is -- it would help if people are more precise with their words. Hence my suggestion to rephrase. What you have in mind are journeymen, not experts.
Most doctors, most practitioners in any field, never get higher on their skill tree than journeyman, that is correct. See e.g. the Oxford Handbook of Expertise for more on this.
Some doctors are experts. Those are the doctors other doctors come to for advice in tough cases, or the ones that are consistently helpful in grand rounds.
>Experts are great at common problems but not great at rarer ones
Really depends on the field. Tech is such a granular field that you can be an expert in one sub-sub-domain, but not another in the same sub-domain. It really comes down to how specialized you need to be for your work.
>non-standard problems depend more on natural ability than experience.
Non-standard problems tend to build off of some base of standard problems. Computers and computer science as we know them today were built off a domain of math and electrical engineering. Non-standard problems rely on having enough POVs of the problem space (which can be in the same mind or shared among multiple) to produce a new, novel problem space, and that group is usually comprised of experts in at least one of the base problems.
>The ability to generalize based on experience requires intelligence.
I say it merely requires foundational knowledge, which can indeed be taught and studied. The ability to understand how things are put together is the basis of learning, and better foundations make for faster learners.
Intelligence in this concept is simply a measure of experience.
> I say it merely requires foundational knowledge, which can indeed be taught and studied.
That is what you learn during your education; you can be a fresh junior with much better foundational knowledge than an average expert. Experts have experience in the field, which is extremely important, but it solves different kinds of problems; they aren't better at everything than a person with good fundamentals and a good mind.
> Intelligence in this concept is simply a measure of experience.
The ability to apply knowledge is intelligence. Smart people can do more with the same knowledge than dumb people. That is why you see so many say their education was pointless, they never figured out how to apply all that knowledge. Such people are still experts, but there are many kinds of problems that they aren't good at solving.
>you can be a fresh junior with much better foundational knowledge than an average expert.
You can. I wouldn't bet on it unless you had an exceptional education from other experts AND were extremely self-motivated to keep pushing yourself. Companies spend millions trying to find such students, to mixed results, after all. It's not an easy source to find, and may not even be worth finding unless you already have billions in capital.
>The ability to apply knowledge is intelligence.
If so, it goes against the sentiment that intelligence is innate. Your ability to apply knowledge is a product of your experiences and how/if you can connect them to new concepts. All of that is a product of learning and time.
That's what makes teachers such an important aspect of this "intelligence". A good teacher helps to connect these pathways so knowledge is stored. But traditional education does not allow teachers to tailor to everyone's own mental map or experiences. Traditionally "intelligent" people in this case just happen to be people compatible with traditional teaching.
(There is also internal self-motivation to learn and practice. But I don't think we disagree that persistence is mechanical practice, orthogonal to "intelligence".)
> there are many problems that less-than-mediocre experts fail to solve which a smart junior can manage to solve
That just means the experts aren't quite as expert as you think they are. If someone's 'expertise' is actively preventing them from solving a problem, they're applying things they believe or assume, not what they know.
You are putting too much into the word "expert" here. Experts are just people who can reliably solve standard problems in a field, that is the bar. You pay them to be reliable, not to solve novel problems.
Someone who reliably solves standard problems is a journeyman, not an expert. These terms are fairly well established in the study of skill and expertise, and it helps to use them appropriately.
The corollary to learning is teaching, and this isn't addressed substantially in the article. There is a whole industry associated with teaching / training that goes substantially beyond consideration of individual learning styles. E.g. how to structure a course, training objectives and key learning points, how to do assessment of students (during the training itself, or as summary exams / tests), how to give feedback, and many other points. There is also a load of theory around competence retention (how different types of knowledge and skill fall off over time) and the limits of learnability (e.g. the point at which you should stop trying to cram in something that is very forgettable and supplement / replace training with job aides such as checklists).
Awareness of these things can make a big difference between well structured training material and a stream-of-consciousness YouTube 'tutorial'.
For an example of a training system design approach that has been taken to an insane level of detail and organisational complexity, take a look at the UK MOD's version, the Defence Systems Approach to Training.
All three of these authors are amazing Computer Science Education researchers. Obviously, it's a well-cited and well-argued article as well, but I feel like it's worth pointing out that I expected nothing less from them. I actually learned a few things here - I had never heard of the Semantic Wave before. What a great share, thank you very much!
"Experts are not always the best at training beginners."
"To emphasize a specific point: Do not test candidates with brain-teaser puzzles."
"...to get candidates to solve interview problems in a room on their own before presenting the solution, as the added pressure from an interviewer observing or requiring talking while solving it adds to cognitive load and stress in a way that impairs performance..."
> System 1 is fast and driven by recognition, relying upon pattern recognition in long-term memory, while system 2 is slower and focused on reasoning, requiring more processing in working memory.
Interestingly, today, LLMs are an augmentation for someone's weak system 1, allowing them to focus solely on strengthening their system 2. LLMs and popular/cheap/generalizable AI today suck at system 2. So, if you are really good at system 2 and suck at system 1, the next decade is going to be amazing for you.
You need good system 1 to recognize when the LLM is wrong.
But your explanation makes sense; it also helps explain why you see so many people post LLM responses they say are correct, as proof the LLM can solve the problem, when the thing they posted is bonkers and wrong. If those people lack a good system 1, it explains all of that, and it also helps explain which kind of person likes to work with LLMs.
You seem to be evaluating the LLM based on a single response rather than the whole "conversation." The user usually interacts with the LLM through 3-4 different responses to reach the right answer, which is valuable in itself. They're using both systems just as anyone would in a conversation.
I find LLMs useful for:
- Building bridges from familiar concepts to new ones.
- Checking my analysis and implementation for mistakes and gaps. This includes detecting subtle logic errors with static analysis.
- Condensing lengthy descriptions and complex conversations.
- Creating diagrams from verbal descriptions of flows.
- Finding design patterns to support my design, along with the basic structure that fits the chosen pattern.
- Writing unit tests and improving code coverage.
- Analyzing the credibility of information sources such as news stories and scientific studies.
- Generating original ideas and solutions to problems I may not have encountered before.
- Many more edge cases that help me turn an idea into a concrete concept in rapid time.
I have also used LLMs to entirely generate new tools and workflows, using languages I had barely touched before. This improved my knowledge of those languages and sped up my learning through practical examples.
Just as the printing press made calligraphy obsolete, LLMs will eventually make coding obsolete. Coding will be replaced by pseudo code and narrative that is independent of any framework or platform.
This does not mean that design and development will become obsolete, it will just become faster, without being hindered by the unnecessary barrier of coding.
Don't dismiss the value of this tool just because some marketers and regulators are using hype and fear to make money. LLMs can enhance your existing skill and make you more productive. They are not a crutch, they are a third leg.
I don't think a third leg would make it easier to walk if you already have two legs. But it is a good way to see it, some would love a third leg, but I think until it gets better balanced most people will avoid it.
For audio generation I recommend Bark. I am getting 14 seconds of audio that is about a third of ElevenLabs quality in about 2 minutes.
This is happening on a Windows 10 Dell with 32 GB of RAM, an i5, and an Nvidia GeForce 1050 with 4 GB of VRAM.
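In case it helps anyone reproduce this, generating a clip with Bark looks roughly like the following (a minimal sketch using the bark package's Python API; the prompt text is just an example):

    # Rough sketch of text-to-speech with the Bark Python API (suno-ai/bark).
    from bark import SAMPLE_RATE, generate_audio, preload_models
    from scipy.io.wavfile import write as write_wav

    preload_models()  # downloads and loads the models on first run
    audio = generate_audio("Hello, this is a short test of Bark text to speech.")
    write_wav("bark_out.wav", SAMPLE_RATE, audio)  # write the generated clip to disk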
I'm also able to decently run local LLMs because of llama.cpp and other libraries that can share models between RAM and VRAM. There are other tools that can help with this as well, including Ollama.
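And for the RAM/VRAM split, this is roughly how it looks with the llama-cpp-python bindings (the model path and layer count are placeholders you tune to your hardware):

    # Rough sketch with llama-cpp-python (pip install llama-cpp-python).
    # n_gpu_layers offloads that many layers to VRAM; the rest stay in system RAM,
    # which is how a quantized 7B model can run on a 4 GB card. Values are placeholders.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-7b.Q4_K_M.gguf",  # any quantized GGUF model
        n_gpu_layers=15,   # tune to what fits in your VRAM
        n_ctx=2048,        # context window
    )

    out = llm("Q: What is spreading activation in memory?\nA:", max_tokens=128, stop=["Q:"])
    print(out["choices"][0]["text"])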
I suggest subscribing to r/localLLAMA. I also suggest using Bing Copilot in Edge with allowed access to the page you're viewing. I often use it to find new GitHub libraries and to give me first steps to be able to start using a new framework.
> “Higher capacity enables faster learning, but our unlimited long-term memory removes limitations on how much we could ultimately learn in total.1 Expert programmers may have low or high working memory capacity but it is the contents of their long-term memory that make them experts.”
I’ve always told “kids” that you can learn a lot about systems but with programming and IT systems in general, there is just no substitute for getting the raw mileage of having seen many permutations, iterations, and manifestations. It’s not a dig, but a statement made in the context of encouraging new people to stick with it and not beat themselves up too much when they inevitably get overwhelmed by the scope of their unknowns or roll a critical miss. It’s all about learning, all the time.
The article looks very accurate to me. Having read all of that, I find myself agreeing with most of it, which I think usually would not be the case for me with an article like this.
I was also pleasantly surprised by the quality of this article, especially the nuance around the value of a growth mindset. It's consistent with my experience of having had to work pretty hard to learn programming.
My wife and I often discuss how we should help our children learn, especially when we see them making errors. Should we tell them that they made an error and show them the correct solution, or should we wait and let them notice their error on their own?
The last part about the mindset of the learner gave me an interesting perspective.
The article explains the growth mindset and the fixed mindset. It suggests nurturing a growth mindset by rewarding successes and tolerating failures. Pointing out failures too often might make the learner switch to a fixed mindset.
> Pointing out failures too often might make the learner switch to a fixed mindset.
Why? If they see they can learn to fix those errors doesn't it lead to growth mindset? Growth comes from learning that you can improve, and improvement comes from understanding that you aren't perfect and make errors and then learning to avoid those errors.
People who got coddled and think they don't make errors don't have a growth mindset, they just think they are perfect as they are and there is nothing to improve. That is as close to fixed mindset you can get.
You probably mean that pointing out errors is negative feedback and might make the kid feel bad. But it has little to do with growth or fixed mindset.
You are right. The key point is to only point out the error and let the learner come up with the correct solution by himself. This way they have positive feedback.
The article mentions that learners can switch between growth and fixed mindset if they get frustrated while learning.
>In reality, as we face setbacks and experience failure, people skew toward a fixed mindset because we are not sure where the boundaries of our abilities lie.
"b. Parts of Kahneman's book were undermined by psychology's "replication crisis," which affected some of its findings, but not the idea of system 1 and 2."
Would have been a better footnote if additional references were provided for the latest in that area of discourse.
It seems they have Cloudflare support, but it must be misconfigured somehow? Also Rails? 1.4 secs seems like a long response time, even for a Rails app. I guess even the pros get websites wrong sometimes :-p
> Research into chess found little or no effect of learning it on other academic and cognitive skills, and the same is true for music instruction and cognitive training. This inability to transfer problem-solving skills is why "brain training" is ineffective for developing general intelligence.
I would disagree with this premise. Deep work and the forbidden word "discipline" are problem-solving skills that are learned and need constant training. They are just as important as any other specific skill needed for the subject. Thus some problem-solving skills do indeed flow freely from subject to subject.
Learning any subject in any domain deeply has value, and that learning is transferable. I think when the analysis is done, saying that chess doesn't make one smarter per se, the researchers are starting from a baseline where student A and student B are equal in all regards, except that student B has additionally learned chess. While student A might not know chess, they could have spent an equal amount of time learning something else, and it's that time spent learning things that's important, not necessarily the subject matter.
> Another curious feature of human memory is "spreading activation."1 Our memories are stored in interconnected neural pathways. When we try to remember something, we activate a pathway of neurons to access the targeted information. However, activation is not contained within one pathway. Some of the activation energy spreads to other connected pathways, like heat radiating from a hot water pipe. This spreading activation leaves related pathways primed for activation for hours.1
He frames this as negative in the next paragraph, but this sounds like the mechanism by which memory palaces work.
“If you want to judge programming ability, assess programming ability.”
Well, there is a problem with that, but I will leave it to others to work out what the problem is. (Hint: how do you measure programming ability without, you know, some sort of measurement? :)
You test them on programming problems, not "how many piano tuners are there in New York" style questions. Implementing algorithms is still programming, so it's better than the brain teasers they used before.
By programming problems do you mean leetcode? Algorithm questions? Should they know SQL, or assembly, or maybe JavaScript? Saying they should "test programming" is not saying anything at all. The people who wanted to ask brain teasers wanted them exactly because they thought they were a good measure of programming ability. That someone may think leetcode is what counts as programming questions is just a statement of their opinion about what programming is.
On the topic of how our "Activation" pathways (used when we recall information) stay primed for hours after learning.
This had me wondering about the power of building a "warm up" exercise when we're trying to solve a problem: can we brute-force optimal activation hot paths for better problem solving? (Obviously it would be highly individual, but presumably such a thing exists given this fact.)
It seems you would need a routine per category of problem, but nonetheless there may be more value than we think in spending 5 minutes just asking/answering some probing questions around the domain in question before trying to solve the problem.
This is a fantastic piece. If you find yourself thinking “okay, so the brain is different — now what? What should I actually do to learn better?” I wrote about this last year. Pardon the clickbaity title, HackerNoon changed it up on me. https://hackernoon.com/the-four-rs-how-to-become-a-good-prog...
I'd be interested in discussing this with you - we think alike, although, having background in psychology too, I was at first sceptical of your use of terms like "Rearranging". But I like your summary and how you orient it to the programming domain but also reference examples of good learning app techniques.
I am working on an app that makes many decisions in this area, and that is truly trying to apply the psychology of learning ethically to university studying (including parts of computer science you can learn without actually "doing" the coding, etc.). I have a few ideas and new learning user flows I would love to get your feedback on, as well as have a wider discussion and nerd out a bit on psychology studies I think you'll find revealing. Would you be up to talk?
As a quick peek: my own thesis on student learning broadly begins with three E's, "Effectiveness, Enjoyment (or motivation), and Environment", as I believe the first two are necessary, and the third an augmentation, as properties of effective learning systems for people. This is based on combining my psychology knowledge (mostly the first and last two E's) with my experience volunteering and being a TA, and hearing from students who, through 90% of the semester, struggle with the middle E, Enjoyment, more than the others, or to the point where it prevents applying the others correctly (not motivated to use effective techniques, not motivated to go to the library when procrastinating, etc.). I am super interested in combining this with the teacher's view and how they work too (e.g. curriculum design, personalised tutoring).
I like the paper, and I think it makes many valuable points. I especially like the semantic wave model, because that's exactly how you learn mathematics. And it also explains why (high-quality) abstractions are so important.
But of course everything has to be taken with a grain of salt. For example, their recommendations at the end on how to access papers are not very good. Ever heard of Sci-Hub and VPNs? It is obvious why they cannot mention this in their paper, but it is also equally obvious then that if there was evidence linking race or gender with programming ability, they would not mention it, for pretty much the same reasons.
I also don't like their example of achieving a Nobel Prize as something that practically no one can attain. Yes, that's true, but that is because Nobel Prizes are artificially limited to a few people a year. I think many, many more people can achieve that level of expertise than just a few per year.
It should be noted that the very idea of studying links between "race" (skin color) and intelligence is deeply suspect. No one is studying things like the correlation between hair waviness and intelligence, or penis length and intelligence, etc. And yet skin color is exactly the same kind of trait: something you were born with and that is tied to the genetic baggage of your parents, with no remotely plausible direct link with intelligence (unlike, say, skull size, which at least had some plausible priors). You can even (somewhat) change the color of your skin if you really want.
Now, there may be, at least in principle, families, and by extension populations, who are more or less intelligent, on average, than others. But "race" has little to do with that: these types of studies only look at skin color as a proxy for population, and that is obviously silly on genetic grounds. Dark skin is a dominant trait; the children of a lighter skinned parent and a darker skinned parent will usually have darker skin and be assigned the racial category "black" in such research, even though genetically they are just as much a member of the lighter skinned population as the darker skinned one. Even worse, this often persists across a few generations, so a child with one dark skinned grandparent and three light skinned ones will often be dark skinned themselves, and thus be called "black" in many such studies.
So, the reason you should be very very much concerned with citing studies that find links between "race" and intelligence is that the very premise is wrong in the vast majority of the literature.
They didn't say that there is no correlation between penis length and programming ability, so maybe that is something worth looking into.
I would have found the idea that next token prediction leads to the results it led to deeply suspect and silly as well. In fact, I did, until I tried out ChatGPT. A posteriori, that next token prediction works as well as it does suddenly makes sense.
Race is obviously a difficult term, as about any other term that classifies a human and tries to derive socially and economically important properties from it. Basing it on "black" or "white" is indeed silly, as you rightly point out. I really hate forms at the GP where you have to enter things as Caucasian etc as well, because these groups don't really make much sense biologically either.
> but it is also equally obvious then that if there was evidence linking race or gender
I dunno about "equally" obvious, but what people colloquially refer to as race is an asthetic parent category for a 1000 ethnic population groups, and what such experiments are actually measuring may be a lazy proxy for poverty.
Generally, these are things that teachers learn during their training. Everyone has an education and has an opinion of education yet it doesn’t mean understanding how learning happens. In my opinion, good teachers teach the how of learning too.
Why would it? It just learns by making this kind of text more likely to appear. If you want it to be smart it needs to see problems and strategies to solve them, not see explanations.
Regurgitating explanations isn't useful, following problem solving patterns is. So to learn from this article it would need to see the kind of thinking required to write the article.
Thoughts are generated by the brain. You don't control how the neurons are fired in the brain. As far as you are concerned, you experience the thoughts, the sounds, the images, etc.
When you read something you typically imagine a lot of examples that the thing you read could apply to. The way we train LLMs today doesn't do this, it just reads the text without thinking further about it. That means to make an LLM learn you have to feed it examples instead of descriptions, like examples of problems being solved etc.
Maybe we could make LLM training do such things in the future, but it doesn't do it today and it is hard to do that in practice since generating examples on the fly for descriptions isn't easy to do in an intelligent way. I think that is a core part of generalizing knowledge so probably one of the keys we need to get to AGI.
I really like what you've written. It jibes with my own experience and opinions. If I were to expand it a bit further, I'd point out that there are teaching methodologies that seem to be derived from these paradigms. In one, an apprentice copies the master as well as they are able, sometimes with no explicit instruction at all, until they've filled in the details for themselves; in the other the student memorizes facts and methods, often by rote, until they build up the big picture for themselves.
I've learned within (and from) both approaches, and find each - when rigidly followed - to be highly frustrating! In my own pedagogy I try to hold both in mind, and calibrate students' learning paths accordingly. When they're mired in detail, I re-orient them towards the end goal; when they're not sure what to do, or how to do it, I guide them through the next step. It's a lot more effort, because you have to pay attention to them, and not only the subject, which most teachers would prefer not (or don't know how) to do.
(On a side note, I'll say that - in the field(s?) where I'm an expert - nearly all of the pleasure comes from refining the last 2% of the details. It's never going to be perfect, but it can always be incrementally better. It's not "productive", certainly in a commercial sense, but it's immensely satisfying.)
> Long-term memory is where information is permanently stored and is functionally limitless; in that sense, it functions somewhat like a computer's disk storage.
> a fact does not exist in a binary state of either definitively known or unknown; it can exist in intermediate states. We can forget things we previously knew, and knowledge can be unreliable, especially when recently learned.
I’m sorry, what? If I only vaguely understood physics and now believed that the earth is flat, that would not count as knowledge in some intermediate state either. Knowing is binary: you either know or you don’t, no matter how strongly you believe what you think.
Let's take something like Gödel's First Incompleteness Theorem.
I may not know anything about it. I might have heard something vague about it being about "the limits of mathematics". I might know somewhat more specifically that it has to do with statements that are unprovable, but not much more beyond that. I may know that it's about arithmetic. I may know that it's related to the Halting Problem. All of that is possible without remembering the exact wording of the theorem with all its conditions. It would be easy to forget, for example, that Presburger arithmetic is complete and that you need addition and multiplication for incompleteness. Or to remember that Gödel's original theorem requires omega-consistency[0] and that only Rosser's modification makes it work for general consistent theories. And even if you do remember the exact statement of the theorem - would you be able to reconstruct the exact proof (or at least one proof)? If so, in how much detail? Would you remember the exact trick involved in showing that Robinson Arithmetic can describe all mu-recursive functions? And in all of this, are you going to be sure you have no gaps or slight misunderstandings? At which point would you say you "know" Gödel's First Incompleteness Theorem?
[0] Case in point, I actually had to use google to verify that the condition really was called omega-consistency. This was the name I remembered, but I was only about 80% sure of it. And if there's any mistake in my explanations above, then that also wouldn't massively surprise me.
You haven't answered my question: at which point do you say you "know" Gödel's first theorem? Or for an easier example: at which point do you "know" Java? Do you need to understand all the intricacies of the class loader? Or at which point do you "know" English? Only when you know every word in the OED?
I think your definition of knowledge is entirely unworkable. It requires you to split every fact into a million little subfacts, it doesn't account for vague recollection, for uncertainty, etc., and it makes it so that basically almost nobody ever knows anything of value.
I think this totally flies in the face of how people actually know things and how people apply their knowledge. The vague and fuzzy understanding that comes with being an expert in a field - IOW having absorbed the principles without remembering the minutiae - is exactly what constitutes valuable knowledge.
This just isn't how the verb "to know" and the noun "knowledge" are commonly used and understood. By your understanding, it is impossible to know anything about the future. Yet, I know my wife will come home in a few hours. Very few people would object to such usage and people would stand by me and agree how unexpected it was if she failed to return. That our knowledge is imperfect surprises few people in everyday usage.
There's much that can be said about how so many words turn out, in fact, to be very poorly understood, especially when using common sense as the basis for definition; but the point is, the article is talking about learning from a scientific standpoint, so the colloquial sense of "knowledge" is irrelevant.
An example: What did you eat for dinner? If you know the answer for sure, go back a day, or another day, etc. At some number of days you will have an idea of what you ate but not be entirely sure.
You certainly knew what you ate when you were eating it, but now?
I think we're being overly nitpicky about "knowledge" and "memory". Memory is not binary (with current knowledge on the topic). I can recall exactly what I had for dinner tonight. I can't precisely recall what I had last week, but I have blurry ideas of it, which can be kickstarted by other sources (even if the actual source is wrong, saying "I had seafood last week" can prompt me to realize I went to a seafood restaurant and ordered a burger), or recalled if it was logged somewhere.
However, I can read a log from over a year ago of some dinner and it can still feel unfamiliar. Are these two states really the same level of "memory" (or lack thereof)?
>if you cannot fully remember something, then you do not know it.
I guess most people don't "know" how to program then, with them constantly relying on pesky documentation every time they type a function in an IDE.
People can judge how they want. I'm glad my job doesn't care if I know off the cuff whether it's "X.length" or "X.size()" or "len(X)" in Language Y, as long as I take a second to google whichever language and library I'm using.
You don't have perfect recollection one day and then no recollection whatsoever the next day. Memories become vague as they deteriorate.
For example, you may remember something you learned in 4th grade, but you probably don't remember exactly how the teacher explained it or what questions about it were on the test.
You don't have knowledge if you don't have perfect recollection. Put in other words, if you cannot fully remember something, then you do not know it. We need to come to terms with that, and honestly I might be striking a nerve here because I'm triggering people's Dunning-Kruger.
The state of knowing at any point in time is binary, which means that it is possible to un-know something that you used to know.