Pretty much every senior-level/grad algorithms class is like this. They introduce a framework for solutions and then walk you through about 2-3 example problems in lecture. 2-3 weeks after being introduced to 5 or 6 frameworks, it's exam day. You are given unique word problems that, in the eyes of someone experienced like a prof or PhD student, are trivial. But you are struggling, with one week of homework experience, to fit a word problem into one of the frameworks you studied. There just was not enough time dedicated to dissecting how to parse a word problem, look for patterns, and map it to a framework. It was basically left to the student to figure out in one week based on 3-4 homework problems. On exam day you are just emotionally devastated looking at the problem, not knowing where to start - you keep throwing various framework structures at the problem hoping some hints of a solution bounce back at you.
But there are smart people who get all this stuff "easily" (previous practice, exposure). The rest of us walk out of the exam wondering how many trucks hit us.
This is a problem all across engineering. Professors and textbook authors are so concerned about reserving problems to grade you on that they forget to teach the material in the first place.
Imagine trying to learn a spoken language with very little input or speaking practice.
Many professors don't care about teaching. I've had conversations with physics/math/engineering academics which go "Those who are smart and motivated will get it, and I don't care about the rest."
I believe that a lot of this cynicism comes after years of teaching and seeing that most students do not really care about the topic and do not want to put in more than the minimum effort required to pass. You don't realize how easy it is to tell apart those who are interested from those who are not until you are on the other side.
Agreed, but part of the teachers' job is to impart that curiosity and interest onto their students. That is hard to do, and only the very best teachers actually manage to do this.
If so, why have teachers at all? Give those freshmen adults a syllabus in the beginning of the semester and conduct exams at the end.
”Those who are smart and motivated will get it, and heck with the rest.”
The system is unfortunately unfair to everyone, in the sense that it's an unfair expectation for people to simultaneously love and be good at both teaching and research. I've met many people who want to do research (because of passion or because that's what gets you a career) and are bothered by teaching; I've also met people, including myself, who prefer teaching to research, and it's unfair that neither of these groups can express their skill properly.
I predict that in the future there will be more of a separation of careers, especially at the bachelor level, where having an instructor who knows the state of the art of research is not so necessary (although I'd argue that they should still have a PhD). This is already happening a bit in, say, the UK, especially in fields like CS, where courses are also required in other degrees' curricula.
Talking with my partner just now, it seems like some universities in the USA practice this now. Some colleges at Iowa State University seem to have professors that have very low research expectations, and they typically teach the 100 and 200-level classes. Those teachers seemed to have packed lecture halls, since those profs truly care about teaching and don’t have the stress of constantly producing papers from a large lab.
I agree. I was envisioning that after we shipped the people who don't like teaching off to national labs that the universities would just focus on teaching and could pay those people better.
Something like this happened to me this past Fall. I took an undergrad fundamentals of networking course that was honestly fairly surface-level and simple, but the professor put a significant focus on developing the proofs for the course concepts through homework rather than in-class lectures. In theory that's fine, but the issue was that in a time-sensitive setting (e.g. assignments or exams) it became extremely stressful to just not understand where to begin with certain problems. Combine that with the fact that, in my experience, proofs often lead to understanding of a concept rather than the other way around, and you had a lot of people going into office hours basically asking for a solution.
I would be very interested to see a study on the relationship between time costs sunk into learning a concept through proofs and students' understanding of the concept as compared to when a proof is simply provided. I'd hypothesize that letting students discover the reasoning behind things on their own would be more likely to develop a deeper understanding, but I question whether there's enough time in a single semester to drive a curriculum that way.
> But there are smart people who get all this stuff "easily" (previous practice, exposure).
Or, simply by virtue of being more intelligent. Life is not fair, and some people do have it easier than others, without having to pay for it with extra study time.
If you are pursuing a research career, and you observe other students putting similar amounts of effort, but obtaining much better results, understanding things more quickly and getting better ideas, you should probably just drop out. Not everyone can be a world class athlete, and not everyone can be a world class researcher, and refusing to accept it will often lead to wasted years and wasted potential: many people who are not smart enough to do top research can achieve great success in e.g. the corporate world or startups, or achieve bigger family goals and do it earlier (most people will be made happier by their children than by publishing a mediocre paper that nobody will read). Being hell-bent on following a particular dream can bring a lot of misery.
> and you observe other students putting similar amounts of effort, but obtaining much better results, understanding things more quickly and getting better ideas, you should probably just drop out
I'm sorry, but even if this is true under some circumstances, this seems like terrible general advice, and only matters if your only goal is to reach the top echelons of your field. That doesn't mean there is no room for "average" researchers. This is a binary mindset that ignores the value of the discipline and seems like gatekeeping with no apparent benefit.
And struggling relative to your peers can happen for many reasons unrelated to core intelligence.
1. draw a circle around the things you enjoy doing
2. draw a circle around the things that pay well
3. draw a circle around the things you're good at
The intersection is your best career path.
I tried to be a musician. Epic failure. I tried sports. Catastrophic failure. I tried sales. Monumental failure. Persisting in those endeavors as a career is akin to beating my head on a rock.
Hah! When I took the first steps on my career path I wasn’t good at anything, had no idea what I really liked because I hadn’t had the opportunity to try most stuff and the top paying jobs of today didn’t even exist.
I know you are a smart person, but this advice will hold young people back.
Try stuff. Don’t sweat it. You’ll fail and you’ll succeed and you won’t know which is which until years later when you tell a story about your life and force it into a narrative.
That information is a simple Google search away though.
There are also lots of opportunities to pivot besides the default job in your field. Eg, gardening doesn’t sound promising, but call it “landscaping”, run it as a business, and you can earn more than a doctor.
Every child knows what things they like and are good at. It might not map exactly to a career path but it's not like career skills live in some sort of magical vacuum that you can never see until you try them.
So if they enjoy math, they are as likely to enjoy actuarial finance as they are to like research into category theory or machine learning? If they are good at persuading others to side with them in arguments, they are as likely to enjoy patent law, criminal law, marketing, or politics?
There are some common skills, but all of these professions are extraordinarily different and require completely different aspects of those skills, and entirely different additional skills.
I grew up with extremely religious and controlling parents. I spent most of my childhood believing I’d go to hell if I didn’t dedicate my life to the god the church worshipped.
This had a huge impact on what I believed about myself at that age, and I’m still actively working against the indoctrination in my 30s through therapy.
I still wonder how things might have gone if I pursued some form of research. I have evidence now to believe I might have been good at it, but haven’t convinced myself the cost to pivot makes sense at this stage of my life.
My story is not unique, and church/religion are just one implementation of a form of parenting that works hard against your claim.
Liking dinosaurs and Jurassic Park is not the same as liking paleontology, especially as a job. Most children have no idea what they'd be getting into, and if they like something, it probably has little to nothing to do with the actual work/job/goals.
This is why the 'bring your child to work' days can be so great, they can get a sense of what a real workplace is like, other than some abstract place you go to when you finish education.
I’d argue that this sort of career choice intersection is actually not helpful for most.
Here’s why. For most people, there are plenty of things in this world that they’ve never tried that they’d enjoy the hell out of — especially if they get past an initial learning curve. There are plenty of things that they’ve never committed to long enough to know they have a serious aptitude for it (for example, it took me 5+ years of learning music to realise I’m a brilliant improviser, just because most early music training doesn’t even touch on it).
In my opinion, if someone isn’t sure what career to choose, the solution actually is: go out and try more stuff. A lot more stuff. Do something long enough that you get past the initial learning curve.
Once you’ve collected enough real world experience and data, then you can make an intersection of “what you’re good at”, “what you enjoy” and “what pays” — but you probably won’t even need to, since this is a trivial exercise compared to actually gaining that data in the first place.
No one has time to try all the things. A broad education - which would give everyone experiences and not just academic theory - would offer that, but we don't give people broad educations.
The only people who can afford to explore and dabble are the very rich. Everyone else is on a treadmill from middle school onwards.
This is a good starting point. I suspect some of the disagreements in this comment section stem from the difference in mindset between people for whom classwork either directly applies to or closely resembles their intended career path, and others for whom a lot of classwork (like my statistics courses) appear simply as a rigid gate to gaining access to their best career.
Perhaps a solution to the problem is better societal acceptance of trade schools, but at least in my area a university degree is a surer shot. Even if the degree is just a tool to then go do something completely different.
I also reject the idea that everyone has the same ability. They don't. Everyone is different.
I find when doing a collaborative project, partnering with someone just like me won't work. I need to find someone with complementary skills. Someone strong in the areas I'm weak in, and vice versa.
Agree! In fact I quit a previous job because I felt I was part of a clone army. I'm just not convinced that academic performance is as strong a predictor of ability as our reliance on this indicator suggests.
Sounds good in theory. For many, though, there won't be any intersection (which has any representation in job offerings).
Only persistence and perseverance through failures builds the necessary skills and the right attitude for the thing you need to do. "Career" can be a secondary thing.
Problem is these circles are dynamic and often unknown.
What you're good at depends on how much you know, how much you practice, and inherent talent. The first two change over time; the third is an unknown variable. You also don't know when you should give up. Should Einstein have given up science after he couldn't get a faculty job and had to work in the patent office?
Things that pay well depends on economic situations that can change dramatically. See the cryptocurrency world.
Things you enjoy doing also change. Many stories here show that making a career doing things you enjoy can suck the enjoyment out of it.
same - also good to keep in mind that you don't have much control over category 2, but you can make decisions that'll improve your options on both 1 and 3.
I feel arguing on the basis of the scenario provided is participation in a rigged fight.
Intelligence isn't unimodal, the smartest people aren't necessarily the most productive, persistence is its own quality, and in a small percent of scenarios, insight would be the opposite of current practice.
> Jazz musician? Everyone who does it for a living ranges from amazing to genius and being just very good gets you nowhere professionally.
Band leaders and/or singers can be, and quite commonly are, very average, and get regular work. They can be pretty bad and work a lot if they're also very good at marketing themselves. (source: am a jazz musician)
Genuine question: What in your opinion are the differences between a great and an average band leader? I've been making music as a hobby for most of my life, and band leading is a topic I think about a lot - but it seems for many musicians I meet it's not even on the radar.
Hi. Good question! That could take a book to answer. I meant to just refer to their level of purely musical skill though, thanks for the chance to clarify.
> That doesn't mean there is no room for "average" researchers.
The "average" researchers are doing a disservice to themselves, to the people funding their research, and to actually good researchers.
They are doing a disservice to themselves because they would most likely have enjoyed much higher success in some other career or lifestyle. An "average" researcher at age 35 is rather poor, his or her life is rather unstable and precarious, family goals are likely unmet, their scientific output is too mediocre to be a source of pride, and generally they are not very happy. Talk to them and you'll find out.
They are failing the funders, who hoped for quality output but got worthless crap. Finally, they are failing quality researchers, who more and more eschew academia and instead enter industry, where they can make >3x as much money putting less effort into something that's typically less socially valuable than high quality research.
> This is a binary mindset that ignores the value of the discipline and seems like gatekeeping with no apparent benefit.
The benefit is not wasting lives on worthless activities, and getting more useful research output. I do agree, though, that this is not "apparent", and is often hard to see even in hindsight. If you follow science news, however, you will find many people wondering why science productivity has fallen so much in the last 50 years, despite funding being greatly increased in absolute terms. One of the reasons here is that we have many more people doing science who simply shouldn't be involved in it.
> And struggling relative to your peers can happen for many reasons unrelated to core intelligence.
Sure, but so what? If you cannot keep up with your peers during graduate studies, why should anyone expect you to keep up when you're actually expected to produce valuable research? Some people might not be able to keep up, because e.g. they spend half of their time caring for their disabled child. This is not their fault, but for the institutions and (usually) taxpayers who pay them, why should they care? Why wouldn't they prefer to instead employ someone who'll be able to spend more of their energy on research, and produce more valuable output?
Yeah, it means that some people through their bad luck will be denied their dream career, but, again, my point is that life isn't fair, and the research careers are not prizes to be handed out to people who deserve them in some moral sense: they are public service.
Not wasting lives on worthless activities? Is a bench player in the nba or someone who bounces between minor and major leagues wasting their life because they’re not Lebron James? Or are they getting paid to do something that they’re pretty good at but certainly not great at? Sorry you can’t score 30 points a night, maybe just go find fulfillment with your family…
You're right about mediocre researchers and their career satisfaction, and you're right to say that there are too many people pursuing research careers who shouldn't. But on the one hand you're saying that some people are talented and special, and on the other hand you're saying that the people who do well in life are those whose circumstances other than their talent (e.g. not having to care for a disabled child) enable them to make a valuable contribution to science.
So, someone who is talented and special can nevertheless fail to make a valuable contribution because of their other circumstances. That's not life being unfair in handing out talent; that's the organisation of society being dysfunctional. Who knows what groundbreaking discoveries we are missing because the next Einstein or the next Darwin is a single mother of two living on benefits, in some inner city ghetto? If talent is not cultivated, it never yields fruit.
There's unfairness, and then there's the incompetence and pretentiousness of people who hold the keys to the treasury. I think the concern is mainly with the incompetence, and when people complain about unfairness, often it's the incompetence that really bothers them. After all, you can't do anything about unfairness; we really can't all be Albert fucking Einstein. But we should really be able to do a lot more about the incompetence that keeps most people from reaching their true potential.
Btw, I so disagree with you about what school results tell you about kids' talents, but that's another discussion I guess.
"I took six undergraduate mathematics courses. Until my junior year I encountered no one who seemed to have a greater aptitude for mathematics than mine. It had always been my easiest subject in high school, and I had been chided there by a classmate for writing my final exam with a fountain pen. Before I discovered physics, trigonometry and differential and integral calculus were my greatest intellectual pleasures. But my final mathematics course as an undergraduate was differential equations and the instructor gave me only a B. I'm sure he recognized that I was competent — I had worked every problem in the book — but he had to give the A's to the obviously brilliant students, who were now closing in on me. If I had decided to become a professional mathematician, as I easily could have, I would have made the traumatic discovery that there were many people my age who were far more talented mathematically than I could ever be.
The world of mathematics and theoretical physics is hierarchical. That was my first exposure to it. There's a limit beyond which one cannot progress. The differences between the limiting abilities of those on successively higher steps of the pyramid are enormous. I have not seen described anywhere the shock a talented man experiences when he finds, late in his academic life, that there are others enormously more talented than he. I have personally seen more tears shed by grown men and women over this discovery than I would have believed possible. Most of those men and women shift to fields where they can compete on more equal terms. The few who choose not to face reality have a difficult time."
Note that I said “you should probably drop out”, not “you need to drop out as soon as you notice others are better”. This shows that there is hardly a way I could have advocated it better, as people like you, or others in this thread (read their responses to me and mine to them) will misread it anyway, and instead argue against a straw man.
An additional reason is that if I had hedged my opinion and tried to predict and answer all possible critiques, the comment would have been way too long, and nobody would have read it anyway, including the pre-written response to the point they made in reply.
The "should probably" is not the issue nor does it exonerate the advice.
It's the recommendation with no inkling of acknowledgement that other factors exist beyond a naive comparison of oneself with their peers.
The ethos of HN is generally to explore the depths of an issue rather than make sweeping generalizations. This is one of the key things that separates discussion here from other places that shall not be named.
Knowing that many younger students and folks earlier in their careers participate here, that comment wasn't going to remain unanswered for long.
I don't think it would be terribly difficult to reframe what was written to provoke thought about intellectual honesty with oneself without unnecessarily leading others astray or losing all nuance in the process.
>> The ethos of HN is generally to explore the depths of an issue in favor of making sweeping generalizations.
Gosh, talk about sweeping generalisations. I mean, come on, it's not like HN users are some cohort of highly educated philosophers or mathematicians; the majority here spend most of their day battling the newest javascript framework and probably come to HN just to clear their mind of the impossible dreariness of modern-day web programming. Our dang wants this place to be all about intellectual curiosity, but most users just want to vent and argue a bit, right?
I learned a phrase in French recently: "se jeter des fleurs aux fesses". Means throwing flowers onto one's bum. It is very evocative, I believe. Let's refrain from doing that. Think of the flowers.
And most people on HN also probably post on reddit anyway so.
Hard disagree on the idea that research should be the exclusive domain of some special tier of “top” research. Certainly there are varying degrees of aptitude for the skills of research. That said, there are infinitely more questions than there are geniuses. Many of these questions are of pressing importance for humanity. I’m very glad there are people working on them who didn't quit because they failed a problem set.
> Or, simply by virtue of being more intelligent. Life is not fair, and some people do have it easier than others
I'm really not convinced this is the case when reaching higher level education.
My experience is that "being smart" is an advantage for the first years - say up to bachelor - at which point there is no magic: if you don't work, you won't get your exams.
This is in part because most exams will test not only your raw understanding, but also your practical experience at quickly/intuitively solving all the little pieces leading to the solution. And that can only be achieved in time if you're comfortable with this kind of mental gymnastics, which comes with practice.
> If you are pursuing a research career, and you observe other students putting similar amounts of effort, but obtaining much better results, understanding things more quickly and getting better ideas, you should probably just drop out
Come on... I am a firm believer that exams test skill sets that are only partially correlated to being actually good at job X - even if X is fundamental research.
Actually, you can find people who are really "smarter" than others, who get things noticeably quicker, even in research. Although, as you note, being smarter isn't enough at university level.
But in any case they are too small a sample to make modern science work, and most scientists are, well, average people - maybe "smarter" than the average person, but not by very much. What makes the difference is interest, motivation and opportunities.
And I say that as a former scientist who decided he wasn't smart (and interested) enough, but who has met many non-scientist smart people in his life.
> My experience is that "being smart" is an advantage for the first years - say up to bachelor - at which point there is no magic: if you don't work, you won't get your exams.
Yes, if you don't put in the effort, you won't achieve very much, but this doesn't mean that being smart is not an advantage. It is, both early and late. It is clearly advantageous to both be smart and put in the effort over just putting in the effort. In fact, if you read my comment carefully, you'll observe that I explicitly pointed that out, here:
> and you observe other students putting similar amounts of effort
Next,
> I am a firm believer that exams test skill sets that are only partially correlated to being actually good at job X - even if X is fundamental research.
Again, if you read my comment carefully, you'll observe that I never mentioned exams. I discussed
> obtaining much better results, understanding things more quickly and getting better ideas,
which is what you do in actual research, not just exam studying. I would really appreciate it if you read what I say more carefully, and respond to what I'm actually claiming, instead of what you incorrectly believe I am claiming.
Funny thing, as a researcher myself, the main thing that makes you successful in my experience is a high frustration threshold. While there are some truly brilliant people, even they run into roadblocks, dead ends, etc. It makes sense: you are venturing out into the unknown (that's what research really is), and not giving up when getting stuck is what makes people achieve something.
Didn't Einstein say something like the reason he worked out General Relativity first was that he stayed with the problem longer, rather than that he was actually smarter than all the others in the field?
And there is Edison's aphorism, something like "Genius is 1% inspiration, 99% perspiration."
There is plenty to do for those who are not the absolute smartest.
Sometimes, or maybe many times even, it boils down to lack of prerequisite knowledge. Most classes simply do not hammer down how important prerequisites are, and how easy it is for students to get stuck simply because they haven't read enough of the required topics.
Of course - if you are a very high-IQ individual, your sheer intellect can get you by. I know such people, who have been able to solve problems without previous knowledge, simply by going at it with logic and pattern recognition.
But if you're average, that's not going to be the case.
I've been a TA, and seen students jump from severely underperforming, to becoming A students - because they finally understood something / had the "Aha!" moment in something which kept them back.
> Or, simply by virtue of being more intelligent. Life is not fair
…
> If you are pursuing a research career, and you observe other students putting similar amounts of effort, but obtaining much better results, understanding things more quickly and getting better ideas, you should probably just drop out.
That seems drastic even without accounting for the untold ways people enter any stage of life with disadvantages. I mean, I don't know what I'm talking about in terms of research, just a fairly demanding professional life for >20 years. But dropping out isn't the first and only advice I would give anyone who feels they aren't performing up to their own expectations or the performance of their peers.
In your professional life, you are encountering people who are already heavily preselected: most recently by the hiring process, earlier by the hiring and promotion processes of their previous companies, earlier still by the graduation requirements of their school, and before that by admission requirements, etc. People who should drop out have already dropped out long before they met you, so you don't meet them very often, and thus the advice to drop out is not one you often give - but in fact it is the advice that most people need to hear before they ever meet you.
I’m certain you’ve misguessed important aspects of my professional life. I’m also certain there are many people I’d love to work with and see have a chance to succeed. Some get this advice before I have a chance to intervene, which is bad. Many just get discouraged before anyone could sway them so blatantly at all, or don’t even have the opportunity to try.
Your point was intelligence, right? How intelligent is it to just categorically exclude people based on their disadvantages? How intelligent is it to presume every systemic thing that came before you was right? How intelligent is it to encourage everyone to think the same way you do on the subject and discourage anyone from being more willing to take chances with people they feel more comfortable mentoring or training than you do?
> and you observe other students putting similar amounts of effort, but obtaining much better results, understanding things more quickly and getting better ideas, you should probably just drop out
Sorry, but that would be just dumb. The starting point matters an awful lot. I remember multiple things that were super difficult for me, where I was slow, until I learned some missing piece and it clicked. Then I became good.
Initial struggle does not mean anything. There is such a thing as talent, but filling gaps in your preexisting knowledge and grinding through exercises will move you up into the "it is not an effort to learn further" category awfully often.
Yes, of course. Some people will produce valuable research, and others will produce worthless papers nobody will ever read. Society is funding the entire activity, so it has a vested interest in ensuring that as much money as possible goes towards the former, and as little as possible towards the latter.
This is to me so obvious, that framing this in terms of “carnival prize” is literally incomprehensible. Research careers are not prizes or sinecures, they are positions of trust and responsibility. Scientists are paid for the value they are expected to produce, and if they produce less value than someone else would have in their position, they shouldn’t be there in the first place.
> Research careers are not prizes or sinecures, they are positions of trust and responsibility. Scientists are paid for the value they are expected to produce, and if they produce less value than someone else would have in their position, they shouldn’t be there in the first place.
And getting into these positions of trust and responsibility should be based on your own definition of "smart enough", correct?
If you read what I said with even a modicum of good will and charity, you'll observe that I recommended that people who can't keep up drop out on their own initiative, instead of being culled by "my own definition of smart enough". I was offering advice, not demanding that some explicit policy be introduced and maintained. If you can't cut it, you'll drop out at some point anyway, and it's best for you if you drop out sooner rather than later: you'll be much happier to have dropped out at 24 than to be denied tenure at a 3rd tier school at age 38, having failed to build wealth and start a family up till that point.
But how, then, do you decide when to drop out? Trying to judge yourself based on the study habits of your peers strikes me as an exercise in imperfect information. I would suggest talking to the professor who gave you "emotionally draining" questions and trying to determine what fundamental knowledge you may lack. But perhaps you have more relevant experience you can share.
My experience is that I dropped out after getting Master’s degree, after realizing that while I can graduate and then get a postdoc job, I am simply not able to produce as high quality output as some of my colleagues. In hindsight, and especially looking at some of my other colleagues who didn’t drop out but should have, I made a great decision, and I am very happy to have done so.
I think that telling everyone that they can achieve anything they want is doing them a huge disservice. If you talk to actual grad students or postdocs, most of them are actually quite unhappy and unsatisfied with their careers. They constantly feel insecure and unstable, and in fact they are. This is much less common among the top ones, because they quite clearly see that they are ahead of others.
In short, if you don’t feel confident that you can succeed in academia/research, leave before academia chews you up. If you are confident you can succeed, you are probably wrong, but at least you’ll be much healthier psychologically throughout.
This sounds like a very good way to get stuck at a local optimum because everyone is trying to get the next 2% improvement instead of heading for the global optimum which might require backtracking.
Economics for example, isn't lacking Keynes style geniuses, it is lacking pluralism. People are calcified in their stances and teachings and even Keynes said that there is not a lack of new ideas but the opposite, the problem is getting rid of the old ones.
One advantage is making sure that those on the quicker end of the bell curve don't have their time wasted. Not just in classes, but later on with their colleagues: If you know everybody can make the same jumps you can, then you can explain your thought processes without having to jump back and remind people of things. In addition, this helps everybody see each other as colleagues and peers rather than creating a weird situation where some people who are nominally on the same level end up being teachers or tutors rather than peers. It also helps create and set norms around what information people should know or be able to find as opposed to 'John can't search a database' or 'Cindy shows up without knowing first order logic and derails our presentation asking about it every slide'.
Not providing some kind of weeding out/gatekeeping is a great way to disincentivize the talented, because then they spend their careers as teachers or hand-holders instead of doing what they actually want to do, and they end up frustrated at having to slow down.
Some of the best scientists were and are unconventional late bloomers.
Real science isn’t exam-based. I’ve found that those who get the top grades at top schools can utterly fail to develop any novel research. In fact it’s a huge problem. Many grant proposals are largely ‘me too’ derivative works.
This is why grad school hinges not on grades and exams but on actual research.
> Some of the best scientists were and are unconventional late bloomers.
Sure, but that's beside the point. The funding is limited, and, what's even worse, if we spread it too thin, the best scientists will leave to make millions on Wall Street instead. At some point, a decision must be made as to who gets to proceed in the research career, and who is cut. Sometimes the apparent mediocrities will turn out to be late bloomers, and cutting them would be a net loss.
However, and this is critical, you do not know who will be a late bloomer until they actually are. Late bloomers do not know that either, of course. In practice this means that when it comes time to make a decision, you, and every other reasonable person, will pick someone who looks good on paper over someone who looks mediocre but has a chance of becoming a late bloomer.
Doing otherwise would be reasonable only if mediocrities tended to have, on average, better output in the long term than people who look good on paper, but that would only cause us to revisit the metrics of what makes one look good on paper, because they would clearly then be wrong. They aren't: by and large, people who do well in exams tend to do better in actual research than people who suck at exams, even though the exams don't actually resemble actual research much.
In short, when you bring up potential loss of cutting late bloomers, you should not forget about best scientists who never became one, because they were outcompeted by a mediocrity who never became a late bloomer.
> Many grants proposals are largely ‘me too’ derivative works.
Yes, and to me this is a sign that too many people are involved in research. If we suffer from deluge of mediocre, derivative work, we should make the standards higher and put more wood behind fewer arrows, not push more mediocrities into the career.
I disagree. Many phds go into industry which makes technological progress faster.
While funding for some research is limited, it is in our interest to train more scientists rather than fewer.
The people making millions on Wall Street are applied physicists and mathematicians. This is a weird claim to make about being the best. Their research is easily computerized, or you spend billions building particle accelerators.
> you observe other students putting similar amounts of effort, but obtaining much better results, understanding things more quickly and getting better ideas, you should probably just drop out.
I had the opposite experience when I went through college. I was the one that was so far ahead of everyone else, and I realized that college was a complete waste of my time and my money (I was working and sold many of my things to pay for college out of my own pocket so I wouldn't take debt). I went and cold-sat three certification exams, started applying for contract jobs, and dropped out of college as soon as I had a reasonable offer.
I had originally intended to stay in academia and do research, but I realized it was going to be an absolute grind and I wouldn't be able to connect with my peers socially or intellectually. I found a place in industry, put myself out of my comfort zone and grew a lot, and I definitely do not regret ditching academia and going into industry.
Experimental technique development and pushing the cutting edge of current state-of-the-art techniques is a phenomenon you can see everywhere in human society. It is there in every field of creative endeavor humans engage in, which I would say research is one of. Not only would I say that research isn't special, but I would also say that, just like every other field of creative endeavor that doesn't intrinsically bring a lot of money, those who write the checks (and those who focus on currying their favor) call the shots and sit at the center of influence networks and control the flow and direction of creative production.
It's not that I disagree with your fairly broad conclusions (that intrinsic variation in human capability exists) but I don't know if you are accurately describing intelligence, academics, the corporate world, or any of the things based on how you are interrelating them in your comment.
I take issue with what I see to be a lot of odd presumptions in your comment:
(1) that the concept of general human intelligence (which maps cleanly across all capabilities) is realistic, and that:
(1a) it can be cleanly measured,
(1b) it can be used to neatly order disciplines by practitioner intelligence, and
(1c) it is immutable
(2) the bounded class-like environment usably maps performance to a capability to successfully conduct research in an unbounded academic research environment, and
(3) some suitably generic conception of "intelligence based capability" exists which can accurately rank various fields of theoretical or applied research skills
(4) that if you rank these fields of research by the intelligence level of its average practitioners, academic research requires world class intelligence, higher than what is demanded by startups and the corporate world, with starting a family dead last
I think each of these assumptions is very optimistic, and the conclusion, that not everyone is meant to be world class, is trivially correct enough to be empty. Of course not everyone is meant to be world class; that is the definition of world class, that its individuals rank first in the world. It doesn't tell you anything about how people get there, or what you can actually do to influence your ability or that of someone else to get there.
In every one of the fields you actually talked about (athletics, research, startups, corporate world), it's not enough to just have raw biological ingredients for high performance. It is a combination of:
a) raw biological ingredients in the candidate to have capacity for some specific mastery
b) exposure, growing up, to what that specific mastery looks like
c) oversight, tutelage and mentorship by senior practitioners with demonstrated world class specific mastery
d) a cohesive strategy guiding a playbook of tactics (likely designed in combination with c) to keep the rate of skill improvement consistently high enough to efficiently approach mastery
e) intentional, consistent, effective practice by the candidate
f) a pipeline of high value opportunities to develop in the course of pursuing mastery
As with research, these are the raw ingredients necessary for success in any field of creative endeavor humans engage in. The vast majority of these factors do not include the biological "raw intelligence" -- and it's not that it's useless, it's that it is necessary but insufficient. And so you can and will often see practitioners with differentially somewhat less raw chops and somewhat more tenacity or discipline outperform and develop better mastery than practitioners with the converse. Plenty of geniuses failed to achieve anything with that genius besides misery and zero outcomes.
It takes a lot more than just being smart to truly be world-class at anything in 2022.
> Not only would I say that research isn't special, but I would also say that, just like every other field of creative endeavor that doesn't intrinsically bring a lot of money, those who write the checks (and those who focus on currying their favor) call the shots and sit at the center of influence networks and control the flow and direction of creative production.
I definitely agree, but at the same time, those who call the shots did not get there by accident either.
> I take issue with what I see to be a lot of odd presumptions in your comment:
> (1) the concept of general human intelligence (...)
Sure, this concept is not perfect. However, do we have any better alternative than using it, along with past performance, for the purpose of predicting future performance? The answer is, to anyone who pays attention, clearly "no". It is easy to criticize and to repeat platitudes about late bloomers and the like, what is harder is to make an actual hiring decision, when the goal is to maximize quality of the output of the research group. At some point, if you can't cut it, your research career will be over, and if that's to happen, it's really best for you if that happens sooner rather than later.
> (2) the bounded class-like environment usably maps performance to a capability to successfully conduct research in an unbounded academic research environment, and
Where did I say anything about a "class-like environment"? If you read my comment carefully, you'll observe that I discussed "obtaining much better results, understanding things more quickly and getting better ideas", not getting higher scores on exams.
It is worth mentioning, though, that people who get higher scores on exams also tend to enjoy better success in an unbounded academic research environment.
> (3) some suitably generic conception of "intelligence based capability" exists which can accurately rank various fields of theoretical or applied research skills
> (4) that if you rank these fields of research by the intelligence level of its average practitioners, academic research requires world class intelligence, higher than what is demanded by startups and the corporate world, with starting a family dead last
You're constructing a complex strawman to respond to the very simple advice of "if you're not cutting it in academia, better quit early rather than late, and don't expect that you can make it through hard effort alone". My advice in no way implies these "odd presumptions".
> In every one of the fields you actually talked about (athletics, research, startups, corporate world), it's not enough to just have raw biological ingredients for high performance. It is a combination of: (...)
Indeed, and if the field is professional football, and you tell someone "if you're not cutting it in professional football, better quit early rather than late, and don't expect that you can make it through hard effort alone", this is in no way controversial; in fact it is rather common sense. When the activity is intellectual, rather than physical, however, people tend to react more emotionally, or allow their ideology to overshadow some hard facts about human existence.
I don't know if your analogies or concepts are correct, because you aren't really constructing them in such a way that the details line up. You're giving specious advice based on lazy thinking and half-baked ideas, which is clear when you look at which parts of the details don't add up. If human capability is generally mutable (which it would need to be for growth to exist), then your explanation doesn't make sense. And human capability is very mutable.
Concretely, why don't we take an example you brought up, which is professional football. Let's look at Tom Brady, regarded as the best quarterback of all time. He was 199th pick in the 2000 draft, and look at where he is now. Shouldn't he have quit when he was 199/254 in the draft? After all, he might have gone pro but he wasn't even in the top half nevermind the top quartile.
Let's take another biological example, which is weight training. You actually have to train yourself to failure to get to the point where you tear your muscles enough that they'll regrow, and then you have to eat a ton to rebuild them properly, and then you have to repeat this over and over again consistently to reach peak fitness. What would you say to someone who said "if you're not cutting it at the gym, better quit early rather than later?" You would tell them the same thing Ronnie Coleman said, which is "Everybody wants to be a bodybuilder, but nobody wants to lift no heavy-ass weights." Considering the brain-body connection, do you really think that there isn't a correlate with intellectual activity?
This is in no way controversial when considered in the field of physical fitness, but for some reason, "when the activity is intellectual, rather than physical, however, people tend to react more emotionally, or allow their ideology to overshadow some hard facts about human existence."
And one of the hardest facts about human existence for people to swallow is that many times, you aren't unsuccessful in the field of your choice because you weren't born with the right raw material. You are unsuccessful because you didn't properly learn the game, butter up the right resource providers to support you, and continuously hone your technique to get to the top no matter what it took. If you did, it wouldn't be a matter of if but when.
Now, football is a well defined game with well defined physical inputs and far less degrees of freedom than other fields. But academia is the final frontier of human knowledge. Is it really the case that you have found a cheat code and one true metric to determine whether a human will reach academic significance or not? Or is it just the case that you have found an explanation which is convenient, simple, and wrong, which you are afraid to let go of?
> After all, he might have gone pro but he wasn't even in the top half nevermind the top quartile.
No, he was deep in the top percentile of players. By the time you even get to the NFL draft, you're clearly one of the best. Even the last draftee will make millions of dollars. To make your analogy meaningful, instead of comparing the top 250 players, compare the top 10,000. If you're not in the top 10,000 football players, should you stop striving for a career in professional football? Hell yeah.
> Let's take another biological example, which is weight training. (...) What would you say to someone who said "if you're not cutting it at the gym, better quit early rather than later?"
This is another false analogy. People who go to the gym do this for their own personal benefit, not to make a career out of it. These are completely different scenarios. I'm not telling people that they should quit learning stuff for their own personal benefit, only to quit dreaming about the research career if they are unlikely to succeed in it.
> Is it really the case that you have found a cheat code and one true metric to determine whether a human will reach academic significance or not?
You really are unable to resist strawmanning, aren't you? Can you quote exactly what made you think I claim that? Can you honestly engage in what I actually say?
Let's go back to the Tom Brady example [1]. The last pick in the draft isn't coming from someone who was already at the top of his game. It's coming from someone who before that point had a history of not being top of the pack both on his high school team and college team. Would you say he's in the top 10,000 football players when he has NEVER been a starter?
> Brady began his football career as the backup quarterback on the Padres junior varsity team. At first, Brady was not good enough to start on the 0–8 JV team, which had not scored a touchdown all year.[25] Brady ascended to the starting position when the starting quarterback was injured. He became the varsity starter in his junior year and held the position until he graduated.[26]
> Brady played college football at the University of Michigan from 1995 to 1999.[41][42] After redshirting in 1995,[43] Brady spent the next two years as a backup quarterback, while teammate and future NFL quarterback Brian Griese led the 1997 Wolverines to an undefeated season, which was capped by a victory in the Rose Bowl and a share of the national championship.[44] When he initially enrolled at Michigan, Brady was seventh on the depth chart, and he had an intense struggle to get some playing time. At one point, Brady hired a sports psychologist to help him cope with frustration and anxiety; he even considered transferring to the University of California, Berkeley to play for the California Golden Bears.[45][46]
With Tom Brady, you have someone who objectively, quantitatively and qualitatively, was not top of the pack in high school or in college or at draft, the formative years where the wheat is physically supposed to separate from the chaff. And yet, he ended up developing into the best player of all time.
More generally, I'd say that it's not a strawman when someone points out where you're not engaging with details and concrete examples someone else is bringing up. It just means that at best, you don't think it's worth your time to engage with the details, or at worst, you just don't think that rigorously.
I'm not here to tell you how to live your life. It's yours to live and not mine. But my unsolicited advice is that you'd probably live a more fulfilling life if you did a better job at reading between the lines and absorbing the detail from those that disagreed with you, rather than defensively taking it as an affront towards your character. I've made clear at multiple points that I agree with a lot of what you said, but disagree with enough of how you get there to voice it over because I think it's based on risky assumptions. I've also cited really specific examples so that we're not arguing on rhetoric, but you refuse to return in kind. Why do you feel above needing to explain with evidence and substantiation, and detail? There is no need to feel shame at being wrong. None of us are perfect. Changing your mind isn't proof of being unintelligent; on the contrary, it's proof that you're awake, aware, learning and growing. The "smartest" and certainly the most successful people do this the most, not the least.
At this point, I've said my piece. If you want to ignore evidence which is inconvenient to your conclusions, then be my guest, but ironically then, you'd be guilty of "when the activity is intellectual, rather than physical, however, people tend to react more emotionally, or allow their ideology to overshadow some hard facts about human existence."
You know it is completely normal to not know things?
According to you we should just let old people do everything because they got to spend more time on this planet and never let inexperienced young people do anything even if that means they never gain experience.
Same thing with passing a football or nailing a technical ice skating routine. You can either do it, or you can't. Practice helps but is not a panacea, unfortunately. Find another occupation, friend. You are owed nothing.
Sometimes this is true. But sometimes the limiting factor really is the student.
In my own experience I've seen fellow students put in less than half the effort I did (I watched them copy homework assignments in groups 30 minutes before class, spend a lot of time trying to track down last years exams, etc etc), and then have the nerve to complain afterwards that the exams were too hard.
I’ve seen fellow students put in a quarter of the time studying that I did (they spent the rest studying all the other advanced stuff that I didn’t even have time to look at), get top scores on exams, and effortlessly explain everything to us mere mortals. Life simply is not fair.
No, I find that position rather absurd. I'm not talking about entitlement. I was reacting to a train of thought started two posts up:
> There just was not enough time dedicated to dissecting how to parse word problem, look for patterns, map it to a framework. It was basically left to the student to figure it out in 1 week based on 3-4 homework problems.
My position is that, perhaps, it is counterproductive to tell people who have some trouble in this environment to "find another occupation" based on such a limited measure of performance. We've never defined what the occupation is in this exchange, so perhaps I'm just unintentionally talking past you. I'm certainly biased. The autodidacts I've worked with (who intentionally dropped out of the rigid, unforgiving environment of academia) have been among the biggest contributors on teams I've participated in. In environments more suited to them, they learn just as fast as folks with a more traditional background. In these instances, the occupation was fine. The teacher needed to be swapped out.
> In environments more suited to them, they learn just as fast as folks with a more traditional background. In these instances, the occupation was fine. The teacher needed to be swapped out.
You're assuming it's the teacher's fault. That will be true in some cases, but it may be impossible to tell at the time. What will be true in others is that the person (like me) was far more motivated by work than academia, and university at the age of 18 wasn't appropriate.
Yeah dawg! That sounds like basic decency that doesn’t cost much to anyone who’d be harmed. Also it’s a general principle of meritocracy. You can’t merit anything without access to merit it!
Your objection was to equal opportunity to lessons. We’re not talking about operating ice skating rinks for the public. We’re talking about allowing all people to be taught by teachers.
Backup NFL quarterback is a pretty damn elite job. There are what, 50 or 60 of them in the world? Vs dozens of NCAA div 1 starters who graduate every year.
Backup quarterbacks in the NFL is actually the worst possible example you could've given here. How many of those do you think they need relative to the population? What level of play do you imagine they have to exhibit?
The point is that even in the NFL, there's a wide and very visible gap between the top and bottom levels of quarterback play, and that's taking into account that these players (including the backups) are already considered the best of the best.
I only mentioned the NFL because I was riffing on your comment about throwing footballs, but agree that it's generally a stretched analogy. The extreme scarcity at the top of most professional sports and the resulting lack of utility/career viability outside of the big leagues doesn't compare well to most vocations.
The argument also assumes the only reason someone plays football is to play in the NFL. I invested significant time learning to skate because I enjoy it, not because I think I'll become a professional hockey player.
I did well in Algorithms courses (I took 4-5 and TA’d the intro class). I also studied math on top of CS. Imo a lot of the problem that CS students face in algorithms is due to not having a mathematical background.
Even though you are introduced to concepts like invariants, induction, and propositional logic in a discrete math class, IMO it takes more exposure and practice to get to the point where these concepts get useful in algorithms. In mathematics, and algorithms, a proof is not typically just applying invariants and propositional logic - it also requires more open ended thinking and (usually) applying patterns seen in class/coursework to solve problems. And even if not asked to formally prove correctness or complexity, knowledge of these concepts makes reasoning about algorithms a lot easier.
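As a toy illustration of that invariant-plus-induction style of reasoning (my own example, not from any particular course), here is a trivial loop whose correctness argument is exactly a loop invariant maintained by induction:

```python
def max_element(xs):
    """Return the largest element of a non-empty list.

    Loop invariant: at the top of each iteration, best == max(xs[:i]).
    It holds trivially when i == 1, and each iteration preserves it;
    by induction it holds when i == len(xs), which is exactly the
    correctness claim for the whole function.
    """
    best = xs[0]
    for i in range(1, len(xs)):
        # Invariant here: best == max(xs[:i])
        if xs[i] > best:
            best = xs[i]
    return best

print(max_element([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```

Nobody would bother proving this particular loop in practice, but the same invariant-then-induct template is what carries over to proofs about, say, Dijkstra's algorithm or binary search.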
Most CS students at the point of taking intro algorithms know how to do some basic coding, know data structures, and if not also pursuing mathematics, have knowledge of calculus (useless) and discrete math. Unless the discrete math course was very rigorous and had a lot of coursework, taking Algorithms at that point is like walking into the deep end with floaties on - you might survive, but you’re not really ready.
In Algorithms, like in competitive programming and software interviews, you also need to have a list of all the different data structures, algorithms, and strategies you know well enough that you can ask yourself if X data structure or Y pattern is appropriate for the task. IME this is often not suggested for people studying Algorithms but it should be. Even if you don’t fully get a problem, you can probably get most of the way by identifying the general approach you should use.
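To make the "ask yourself if X data structure or Y pattern fits" idea concrete, here is a hypothetical example: recognizing "find two elements with a given property" as a sequence of membership queries points you at a hash set instead of nested loops:

```python
def has_pair_with_sum(xs, target):
    """Pattern recognition: 'find two elements summing to target'
    reduces to 'for each x, ask whether target - x was seen before'.
    A membership question per element suggests a hash set (O(1)
    expected lookup), giving O(n) overall instead of O(n^2) loops.
    """
    seen = set()
    for x in xs:
        if target - x in seen:
            return True
        seen.add(x)
    return False

print(has_pair_with_sum([3, 8, 1, 5], 9))    # True  (8 + 1)
print(has_pair_with_sum([3, 8, 1, 5], 100))  # False
```

The point is not this specific problem but the habit: name the operation the problem secretly requires (membership, min-extraction, prefix queries, ...) and let that name pick the structure.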
There's a lot of babbling about how coding is the limiting factor, but it really isn't. What people struggle with is solving the problems in their head. Writing the actual code after you've done that is almost trivial.
Another angle is how you need to think in abstract terms in the solution space. Most people whose knowledge of algorithms is weak think like: "okay, the problem says the word 'array' so I probably need a quicksort here". People who are successful think like: "if this array were sorted, I could search for an element in O(lg n) time instead of O(n). Can I solve the problem with this additional hypothesis?".
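A minimal Python sketch of that "what if the array were sorted?" hypothesis, for illustration only: pay a one-time sort, then every membership question becomes a binary search.

```python
from bisect import bisect_left

def contains(sorted_xs, x):
    """Membership test on a sorted list in O(log n) via binary search."""
    i = bisect_left(sorted_xs, x)
    return i < len(sorted_xs) and sorted_xs[i] == x

data = [17, 3, 9, 42, 5]
data.sort()               # one-time O(n log n) cost buys the hypothesis
print(contains(data, 9))  # True
print(contains(data, 8))  # False
```

Whether the hypothesis pays off depends on the problem: one lookup doesn't justify the sort, but many lookups (or a need for order anyway) do, and weighing that trade-off is exactly the abstract-solution-space thinking described above.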
It's almost like being a chess player: when you're weak you come up with moves and hope they work, when you get stronger you look at a position and come up with plans and reasons why certain moves will or will not work.
The top-down mode of thinking is how you write a solution but not how you come up with one, the actual process is more "inside out", you find out stuff thinking about the problem and then you connect the dots.
Part of it is just experience and wit, but much can be taught, and formal mathematics helps the most.
> strategies you know well enough that you can ask yourself if X data structure or Y pattern is appropriate for the task. IME this is often not suggested for people studying Algorithms but it should be. Even if you don’t fully get a problem, you can probably get most of the way by identifying the general approach you should use.
My experience exactly when taking such a course. This type of knowledge would have improved the QoL quite a bit.
I have no formal math background but I did take some propositional logic classes and more than a decade later they remain by far the most useful courses I ever took. I'm really surprised that they're not standard/easily skippable in CS curricula.
This matches my experience of an IT education in university perfectly. 90% of my class just didn't get it at all and the remaining 10% had a lot of experience in the field already (there were a small amount of extremely smart folks as well of course - they needed no background). Kind of defeats the purpose of teaching, but it does serve to weed out the folks that can't pick things up very fast. I think that is one of the goals of high-level schooling.
I agree it’s horrible to discriminate on personality types.
As a counterpoint, though: in software jobs, the ability to pick up things fast is commercially valuable in some roles. As a hiring manager, if school grades helped me identify extremely smart folks who pick up things fast, then those grades are a useful tool.
Not all jobs are the same, however, and so other hiring managers might set a lower bar for grades. But it’s important that we have these tools, in addition to the various other ways we evaluate candidates.
But we have machines for that. Data access and repetition is in itself worthless; refinement, and gaining new data from old recipes, is the actual value. Why construct a machine that basically selects for the wrong value, creating a reproduction academia instead of an innovation academia? Why weed out what was your original value proposition? It seems very strange.
Absorbing and applying knowledge fast is a requirement for surviving in academia. Having classes that weed people out is not a bad thing. That's what I was saying - I don't see how this is pure reproduction, this is novel application of existing ideas. That's how innovation can start.
I’ve never been a fast knowledge absorber, but that just meant I required a bit more time on my own, struggling with the content. Still managed to get max grades at undergrad university (didn’t go further because, well money). Also never stopped me from excelling in all work positions so far in life. If anything, a bit of struggle helps you go deep, and deep is much more valuable than fast but shallow (in my opinion).
Rude. I was never below the top 1-2% of students in my classes (quarter of a century ago now, getting old). Depending on when you graduated, I’d hazard a guess that my course work was at least as challenging, if not more so. I don’t really have any insecurities about my intelligence, but I’ve also lived long enough to realise that intelligence is just like being pretty. It’s nice; you get stuff for free, but you didn’t earn it in any way. It doesn’t make me “better” than others, just lucky.
And I’m also old enough to realise that it’s starting to slowly decline and won’t be getting any better. Fluid intelligence that is. Good thing I went deep over my career, because that depth can’t be replaced by the younger set with their superior fluid intelligence. Knowledge, experience, and dare I say it, a bit of wisdom. Struggling a bit to really understand things has served me well in life.
Because they don’t believe in the value proposition they’re selling. Instead of forming students into high-quality output, they take students in and push out only those who were already high quality.
The need to have seen something before a class to have a chance at passing it well is fundamentally wrong.
This describes some of my CS modules experience to a tee. Overall I had a great time and I look back on the whole experience fondly, but those few modules where nothing clicked, nothing made sense, and the examples given never fit the work I had to do...
This was before the days of YouTube where you could look and study up hundreds of videos on the topic to finally find the one that clicks, which is a remarkable thing to have access to.
I'm not clever, but I am resolute, and having an unyielding attitude to obstacles is what has gotten me this far in my career.
> about 2-3 example lecture problems that they walk you through
My thinking on this may be evolving. I used to think that you would learn more figuring out things for yourself. There may be something to that, but now I am thinking that having a more relaxed learning curve with many worked examples (not just a few) will help more students succeed with the course material.
Students who have seen the material before will be bored, but the examples and gradual learning curve are intended for those who are seeing and learning the material for the first time.
In my own experience, I find that many problems require a particular, often subtle or non-obvious, trick or insight to solve. This can result in wasting a huge amount of time until you are fortunate enough to discover it, assuming you ever do. These sorts of problems, especially on exams, are beloved by instructors because they produce bimodal distributions that make it easy to pick winners and losers and/or sort students into grading bins. They're hated by students because they tend to reward prior knowledge or experience - or sheer luck - rather than effort in the current course.
I wonder if the author is making a mistake trying to put research-like questions into a problem set, even one intended for early doctoral students. (Imagine putting students in competition with each other on research problems - that would seem to be isolating, cruel, and bad for a research group!) I would certainly hope that the exams are more reasonable.
> who get all this stuff "easily" (previous practice, exposure)
I graduated in Eastern Europe, so my experience might be different, but I CONSTANTLY heard my peers bitching and moaning about the material being hard.
I found it "easy", as in: easy to score 7/10; to get to 9/10 you'd have to apply yourself or be smart; and to get to 10/10 you'd really have to apply yourself.
Granted, I did have "previous practice" -- meaning I was highly interested in programming and electronics.
There were some infamous professors who made things "hard" for students: the really boring, ultra-math-heavy showoff pricks. I'm glad they had a car accident and died.
I still remember that I didn't even bother going to one such prick professor's course after lecture 3; I just repeated the course the next year with a much more down-to-earth professor. My blood still boils when I think how much fun I had just studying op-amp circuits and their maths from the second professor's materials, and how bad the first professor made us feel with his showoff maths.
I didn't even go to the second professor's in-person lectures; I could tell from his study book that he was a down-to-earth guy whose interest was conveying cool stuff to his students, not showing off what a math guru he was.
So when things were "hard", it was usually because the professor was an actual showoff prick, not because of an unfortunate misunderstanding on his part about our capabilities as students.
Those pricks know very well what they're doing, how scared students are of them, and how many students they're failing; they just get off on that stuff.
I found a lot of things needed time to sink in. I would struggle to wrap my head around the exam content and stress myself out. Then the class would finish and I wouldn’t see the content for another semester, where I’d either be helping another student study or it would come up in a related course.
Suddenly, my brain would just “get” it and I’d know exactly how to start.
I forgot where I heard this but "It's impossible to remember what it was like not to know something". This generally means all kinds of things are obvious to someone that knows a topic. They're so obvious they're invisible, forgotten about, and therefore you don't even think to teach them.
I doubt that. If the materials are limited, then by definition there's less to study. You can choose other things to study, but it's a gamble whether it will be useful.
There are different sorts of studying. To some people, studying means downloading the lecture notes, reviewing their own class notes, and getting the slide deck from the professor. To other people, studying means actually cracking open the assigned texts (which many students didn't bother buying in the first place).
And then to others it means going out and building things with the concepts they're learning, not because they want to pass the course but because they're obsessed.
I just started as a teacher; I don't know whether that kind of obsession can be cultivated in students through a specific environment or whether it's something they bring with them into the class. Or whether even trying to bottle that lightning alienates less willing students.
Yeah, I had a similar experience. I was bad at some areas no matter how hard I worked, and good at others. To me this seems like solid evidence something like “aptitude” exists. Although, it doesn’t answer questions of nature vs. nurture, whether those different skills result from different experiences earlier in life.
My favorite history professor in college gave open book essay exams.
He asked your opinion on topics, "what were the motivations of the leaders of the US revolution?" Even better were the questions that ended in "why or why not?"
There were no right or wrong answers, if you could justify your answer in a reasonable manner, you got full points.
Some students hated his tests, some students loved his tests. The students who hated his tests really hated them, the students who loved them, really loved them. (I was in the loved them group!)
Getting a Computer Science degree felt awfully similar, especially in the higher-level classes. Students of course had to do a senior project: 6 months to make something real in a group of 5. One group of students spun their project out commercially and made some good money off of it.
My group made a photo library management app designed for rapidly tagging hundreds to thousands of photos, using custom experimental UI concepts. (This was when Google Picasa was still a big deal.) When we started out we didn't know if we'd succeed; after all, "experimental UI", "handles libraries with thousands of photos", and "desktop app written in Java" weren't phrases that typically went hand in hand back in the mid-00s.
But we did it! Which kind of summarizes my entire career in software engineering: if I'm not scared and uncertain of how I am going to do something, or if something is even possible, then I probably am not tackling a hard enough problem.
I would hate open-ended questions like that if I didn't know the professor and I didn't know the grading, because it relies on the professor's ability to evaluate what is reasonable justification. I'm sure if the professor was great, this strategy is wonderful, but it also requires a leap of faith from the student. For example, if I were a black man and the teacher were white, my essays about slavery would be less about whether I had learned the material and more about trying to work out how racist the teacher might be.
Replace professor with TA.
But yea, bias on subjective questions is a thing for sure. Racism is probably the big one, though in your scenario, a white student could wonder, if you, a theoretical black student, got a better grade because the professor didn’t want to appear racist. You can drive yourself insane thinking about it.
>I would hate open-ended questions like that if I didn't know the professor and I didn't know the grading, because it relies on the professor's ability to evaluate what is reasonable justification.
After my first stint in college I took about a decade off before returning for graduate school. I made it a policy to research all of my professors before classes began. If they had a book published, I bought it, and read it. It gave me remarkable insight into all of my professors (at least the ones who had published) before class actually began, and made it much easier to navigate the subjective grading minefield that so often exists.
> For example, if I were a black man and the teacher were white, my essays about slavery would be less about whether I had learned the material and more about trying to work out how racist the teacher might be.
That would be a valid concern regardless of your race or the professor's.
And even if we take into account that people are more biased towards opinions that elevate them, we should also take into account what an average college history teacher believes in. At that point you should probably be worried that your view isn’t Marxist enough.
Quite a few undergrads come in thinking they've discovered the One True Path, and of course it's (sigh) one myopic brand of libertarianism or another. Social science professors get pretty sick of that shit, and I'm sure some of those students take the irritated rejection of their tired, failing-to-engage-with-the-material crap as "Marxist professors pushing their ideology". But for the professors it's as if every incoming freshman physics class had a few kids per hundred who are dead certain they've discovered the secret to perpetual motion, are in fact basically experts on it, and are very resistant to learning that they may not have the full picture.
They'd probably also try to temper the enthusiastic incorrectness of freshmen coming in as full-on 1930s-throwback "Stalin was awesome, actually" tankies, but there just aren't that many of those. The confidently-wrong freshmen tend to all fit a similar mold, and it's not leftist.
While my Economics courses did offer insight into various schools of economic thought and different teachers had slightly different views on various topics, the basic premise that "Capitalism is awesome" was pretty much treated as axiomatic in every course I took.
Economics stands apart from the other social sciences in its culture. It pivoted quite heavily when it was transformed from political economy into economics during the marginal revolution. And while Marx can be jokingly referred to as "a minor post-Ricardian" by some economists, he is considered a father of sociology alongside Comte, Weber, Durkheim, and Spencer.
We're not talking about an economics program - that would not likely have a Marxist professor - but rather a history class, which could easily have one.
That's rough, since getting to the point of being able to answer and evaluate answers to those kinds of questions is, like, a really big part of working in the humanities. It's probably the single most important skill for an undergrad in humanities courses to pick up.
I had a teacher who did that in high school, but they were clearly looking for certain answers to the open-ended questions. Part of the class was determining just which kinds of answers the grader would rate highly over other, equally valid answers.
At least with other kinds of tests, there's much less room for bias or interpretation, even if it still exists.
Faced with such situations, I tended to write out what seemed expected, then follow up with other considerations or reasons it was wrong [1]. Even more so if the "question" was actually a seven-part prompt [2].
[1] Sometimes simply because the wording of the prompt had an error that should provoke an answer which, given the prior presentation of the topic, can't be what the teacher intended.
[2] Because telling us we're having a 10-question pop quiz that is really a mis-numbered 37-question essay test is an evil up with which we shall not put. (-:
A similar phenomenon I’ve found is that if I know something can be done, I will work at it until it is done. For example, I’ve come up with solutions in interviews that I thought were the best, and then the interviewer says, “can it be done in O(n) time?” At that point, something clicks in my brain that says, “oh, of course it can!” If only I could find a way to tap into that in advance.
The time complexity hint is real. I always say that I can give you an algorithm in any complexity you want, so make sure you don't ask for anything lower than is possible, or the garbage I put out won't work.
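That O(n²)-to-O(n) click can be illustrated with a stock interview question (two-sum; a hypothetical example of mine, not from either comment above):

```python
def two_sum_quadratic(nums, target):
    """The 'obvious' first answer: check every pair of indices, O(n^2)."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None


def two_sum_linear(nums, target):
    """The 'oh, of course it can' O(n) answer: one pass, remembering
    each value seen so far and the index where it appeared."""
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return (seen[target - x], i)
        seen[x] = i
    return None
```

Being told that a linear-time solution exists is often exactly the nudge that makes the hash-map trick click.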
There is a possibly apocryphal story of Heisenberg and the other German a-bomb scientists basically doing this in captivity upon learning of the success of the Manhattan Project (not building a working bomb, but reasoning out how it was done!)
This is the Farm Hall transcripts, and it really happened. (I don't know if they fully worked out how the bomb worked, but they at least discussed it).
> If I'm not scared and uncertain of how I am going to do something, or if something is even possible, then I probably am not tackling a hard enough problem.
I like that sentence a lot. I'm working on something new at my job that's hard and uncertain, and it's the most fun and motivated I've been in my (fairly short) career as a software engineer. I might keep that idea around as a heuristic.
In my country, the system to access university includes a philosophy exam that works similarly. You are given a text to understand, summarise, discuss, and compare with a different author.
They are testing for reading skills, reasoning, and expression, so there is no single right answer. Knowing the works of different authors just makes it more likely that you'll recognise ideas similar to those in the text at hand, but if you don't happen to know something similar you can, say, get a text by Marx and compare it to Plato, as long as you make a decent argument for the connection.
If you set this expectation up front, and explain the premise, this rules. If you simply present problems to students without this caveat, you are being needlessly cruel.
Part of the difficulty with emotional reactions to problem sets and tests at universities is not so much that these sorts of problem sets are emotionally trying in themselves, but that from the student perspective, moving between institutions and class levels, the style, grading, and expectations for problem sets and tests can change quite quickly. While I strongly prefer the sorts of problem sets being described, and preferred them as a student once I got used to them, it is likely helpful to make these sorts of expectations explicit, even in situations where you might assume that students are already acclimated to a particular style. Students taking unusual pathways through courses may not have been around for mentions in prior classes, may be taking a class outside their field or usual level, or may be used to an institution, field, department, or simply instructors with different expectations.
Going from secondary school or community college to lower division undergraduate classes, upper division undergraduate classes, and graduate classes, depending on circumstances and field, you can, in a few years, go from assignments where you have 60 questions you are expected to be able to answer, and getting 50 right will result in a poor grade or even failing, to assignments where answering 3 out of 5 is considered great. Without context, it can be terrifying to get a problem set where you assume that having trouble with a problem means you don't understand the material well, when in reality it's expected that you won't finish it perfectly even as an excellent student.
In my undergraduate experience, going from a community college, to upper division undergraduate physics courses, to graduate physics courses taken in my senior year, I went, from the finals one term to the midterms of the next, from classes where I walked into tests expecting that I should be able to answer every question perfectly, to classes where my 70% score on the midterm (of a handful of questions) was considered an excellent result, with no explicit indication beforehand that this would be the case. That switch was only a year before I was in a small graduate class where answering two out of four questions correctly on an assignment merited a congratulatory email from the professor. Even rationally having a sense that the expectations and styles were changing, it can be a massive, rapid shift.
I can give you a prof's perspective on this. Going between different stages of education, you can't really double or triple the amount of time you study, but you need to change your study and thinking techniques to learn 2x or 3x more [1].
The problem is that if the teachers set up too easy a ramp at the start of a new stage, many students refuse to upgrade their study and thinking techniques. They continue to use suboptimal methods from their last stage, because it's easier. But before they know it, they progress to a point where their suboptimal techniques are wildly unrealistic. At that point they try to upgrade, except it's much harder to upgrade your techniques in MATH301 than it was in MATH101.
One way out of it is to jolt incoming students with assessments where the old techniques simply don't work, and they are forced to upgrade immediately. It is kind of cruel, but it works well.
[1] For instance, a freshman PHY101 Mechanics course covers the same material as 1-1.5 years of high school with higher difficulty.
Could you elaborate on what those "upgrades" look like?
At least for math, I found that by around Calc II I had to start grinding problem sets to do well on exams, and I can't imagine what the next "level" would look like for something like a graduate-level math course.
There is no concise general answer to that question, especially one that covers many fields.
What you are saying is correct: from grade 1 to PhD, the best way to learn is to solve problems (sciences), create art pieces (arts), etc. As you solve problems, there are a few ways you can upgrade.
* If you are at level N, you should only be learning techniques at level N. If you find that you are struggling even a little bit with level N-1 techniques, adopt an immediate no-nonsense attitude about eliminating those confusions. E.g., I have seen far too many students in my second-year(!) differential equations course struggling to solve quadratic equations. This means they have to constantly jump between different levels of abstraction (algebra and differential equations), and that makes the question much harder to get right. A few hours of serious review would eliminate any confusion about something that probably took 2 weeks in high school, and would probably give them an entire grade bump in differential equations.
* Figure out the meta-techniques at level N and take off the training wheels. For instance, in graduate-level applied math, you are working with a lot of theorems. Something that needs to become part of your study is generating positive and negative examples of each theorem as soon as you encounter it (something the book/prof did for you in undergrad). Nobody is ever going to teach you this at the grad level, as there is very little a grad prof can say to help you learn such meta-skills. But they will hit you with novel theorems in exams (or you will encounter new theorems in research), and you need to have the skill ready.
* Length of each study session and intensity of your study. In high school, a smart student can watch TV and still learn everything for an exam. An undergrad can listen to engaging music while still thinking about a problem on an assignment. A grad student needs the discipline to sit in a quiet room and fully engage with the problem for several hours. A PhD student might need to think about the problem and only the problem from the second they wake up all the way to when they go to sleep. Andrew Wiles might have to lock himself away for 6 years and immerse his whole life and being into solving Fermat's last theorem.
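The point about level N-1 techniques can be made concrete with a hypothetical example (mine, not the commenter's): solving a constant-coefficient second-order ODE reduces to a quadratic characteristic equation, so a student shaky on high-school algebra stalls at the algebra step, not the differential-equations step.

```latex
y'' - 5y' + 6y = 0
\quad\Longrightarrow\quad r^2 - 5r + 6 = 0
\quad\Longrightarrow\quad (r - 2)(r - 3) = 0
\quad\Longrightarrow\quad y = C_1 e^{2x} + C_2 e^{3x}
```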
P.S. Not very happy with my answer. I need to chew on this for a few days or weeks.
Jolting people with assessments where old techniques don't work is fine, as long as you don't do it for the first time on an exam.
But also, students here just want to know whether they are expected to give detailed reasoning or less detailed reasoning, and how many questions they need to answer to get an A. Basic stuff about the rules.
A lot of courses can indeed be improved by being more legible in their expectations. But there is actually a counterpoint: as you make your course more legible past a certain point, the "lawyer" students arise. They will use the rules laid out to weasel out of work or to argue for a grade bump, with long and tedious bad-faith argumentation.
It's actually the kind of thing you don't see until you become a teacher and try to make these sorts of changes. The only way out is to keep a minimal amount of illegibility that lets the prof stop these students in their tracks. Time and again I have seen new profs try to be legible and then realize a year later that there is a limit to it.
In general, course design would be so much better if students cooperated. I would have all my exams be take-home, open book (but solo work). But what do I do about the 20% of students who have zero respect for any sort of honor code and even less fear of any consequences? You don't even have to hunt for the existence of these students: on many cheating-related threads on HN, a frightening number of people openly and proudly say that they will cheat and have cheated, because it's only the degree that matters.
I have also been on the teacher's side. Unclear rules and situations especially harm the students who don't tend to complain and argue.
But back in my student days, I had a class where we were expected to write super detailed explanations on every homework. If you skipped a tiny step, you got marked down. And then came the test. I could answer every single question without even stopping to think, yet when time ran out I was still writing. People complained, and the response was that we should not have been that detailed on the test - but that is exactly what they had trained us to do.
The OP complained about this: "from classes where I walked into tests expecting that I should be able to answer every question perfectly, to classes where my 70% score on the midterm (of a handful of questions) was considered an excellent result, with no explicit indication beforehand that this would be the case."
That sort of guesswork about what you are even supposed to do sucks.
> I would have all my exams be take home, open book (but solo work). But what do I do about the 20% of the students who have zero respect for any sort of honor code and even less fear of any consequences?
Don't do take-home exams, imo. Cheaters always, literally always, exist. In some cultures they are effectively normalized and more frequent, in others less so. Consequences for cheating in American colleges are actually quite high; they can kick you out. Students do not want to be kicked out, but there is no way to catch a cheater in their own home.
The "if you get 3/5 you're doing fine" thing only works if you actually give As for 3/5 answers. Otherwise no amount of placation is going to make students ignore reality.
In a 2nd-year PhD level theory class I'd assume the default grade is an A. The point of classes at that level is to provide background and skills for research, and the teachers trust the students to be doing the work to get what they need out of a class. Some students will really want to master the material, some will only feel they need some familiarity, and the test as to whether they got what they need will come in their PhD research over the next few years, not in an exam.
In a course with "real" grades, it still may well be that 60% is an A; I've had classes like that. In that case, it's just divided so that the top 20% get an A, next 30% get a B, and so on.
I had a graduate algorithms class that was graded like this, but the teacher didn't explain and I was clueless. I spent the whole semester super stressed and convinced I was failing, then was shocked and befuddled by an A. "Emotionally trying" is an accurate description of how it felt.
Props to this professor; struggling in confusion is a lot more fun if you know that's the game you're supposed to be playing.
The extra-credit question on one of my finals in grad school was:
"What is the time-resolved fluorescence of a fluorophore in 4 spatial dimensions?" We had covered the 2D and 3D cases in class. My housemate started the question on Friday night (after finishing the regular questions) and was done some time Monday. She said it was a pretty hard problem that involved deriving several new equations for 4 spatial dimensions, throwing in some tensor calculus, and finally some ugly integrals, until out popped an analytical expression.
I didn't even attempt it because I knew it would be beyond emotionally trying.
Energy arrives on some wavelengths, it departs on others.
"Three spatial dimensions" vs. "four spatial dimensions" suggests the existence of a further spatial dimension orthogonal to our own, which is a bit fanciful. However...
It's legit to look at the evolution of analytic forms from 2 to 3 to 4 to higher (N) dimensions, not just for the fun of abstraction but also for the application to phase spaces [1], where systems are described using multiple independent orthogonal dimensions to plot the system state at any time and the transitions from one state to another.
I see no issue with looking at energy transposition through phase space, and there are applications.
Man, I gotta try reading the harder Greg Egan novels again. I'm fine with the biological and even some of the topological stuff, but when he starts basing character development on actually visualising shifts through orthogonal dimensions like what you're talking about, my brain just exits via my arsehole and I give up.
Greg Egan, Matt Parker, myself, and a slew of others (eg [1]) all attended math and physics classes in what some astronaut called the city of lights and the most isolated city in the world ...
As a WA-native I was aware Egan was from Perth, and though my respect for his desire for privacy always wins out over the cult-of-personality, I'm still kinda chuffed I live in the same remote city as a favourite author.
(And, though I adore the poetry, I believe Perth (to Adelaide, 2131km) is beaten in isolation by Wellington (to Sydney, 2225km).)
I'm stunned and baffled as to what the Nicole Kidman movie could be though; unless you're talking about the character Matthew Parker? I hope you're not also a celebrity Mr/Mrs defrost as I confess I've no idea who you could be :S
Seems like they are looking in multiple bands, hence the "4 spatial dimensions".
"Our approach differs from previous work by combining 3D and spectral super-resolution simultaneously with readily available fluorochromes as well as operating in a wavelength range where biological autofluorescence is minimised."
No, 4D space. Like, you can go forward and back, left and right, up and down, and yimlo and olmiy. We don't have yimlo and olmiy here but I'm sure somebody in 4-space is reading a copy of "Flatland" right now.
The problem wouldn't make sense if you're treating wavelengths as dimensions (and is a far easier problem).
I can't quite put my finger on it at a glance, but my immediate instinct is that this sounds nice at a very superficial / philosophical level, but actually flies in the face of actual pedagogical literature over the last few decades.
For one, you're describing research, not education. The fact that it "can" be done (sub-optimally) in this manner, and the "this is how we've always done it" impetus behind it, doesn't necessarily make it a good idea educationally.
That takes me back to third year applied mathematics in the 1980s.
Small class; a seriously smart, hard-working elderly professor, notorious for setting conjectures from whatever areas he was currently interested in as exam questions.
Failure was not even trying to answer, grading was scaled on how many you had a shot at and what kind of dent you made on them in the hour.
My favorite math class (undergrad) was a new course one professor wanted to offer for seniors. He called it Math Seminar, and each week we were handed a set of problems chosen from abstract algebra, real analysis, topology, or number theory. Each student chose a problem and a day to present their solution next week. All class time was spent on presentations and questions to the presenters. IIRC, not finding the solution wasn't a big enough penalty to fail, as long as the professor thought you researched it and tried multiple approaches.
I have fond memories of nights at a local bar, working through a problem.
> Failure was not even trying to answer, grading was scaled on how many you had a shot at and what kind of dent you made on them in the hour.
I remember one instructor who told us at the beginning of the course that he would be giving a "you are not expected to finish" exam. The problems wouldn't be difficult, as they were all variations on a theme, but there would be a large number of them. Having an expectation of what I would be facing, I "studied" by practicing my writing speed and mental arithmetic. Even with my writing speed being the bottleneck, I ended up finishing all of them ahead of time, and I still remember the look on his face as I handed it in. I'm pretty sure I didn't get them all correct, but apparently it was enough to severely skew the distribution of results. Later, I heard that he decided to make the exam twice as long in the next instance he taught.
That was a long time ago, before the use of computer-generated exams was common.
Reminds me of my number theory professor. Open book take home exam, 20 problems and you choose 4 of them to answer. He graded us on how much we were able to do, and not just the answer itself. 6 of the problems were equivalent to the Riemann Hypothesis (unbeknownst to the class).
That sounds like an absolute nightmare, to be honest. What does "trying to answer" even mean?
I generally try to think about a problem before actually, you know, writing stuff down. I can obviously think a lot faster than I can write, so writing down potential approaches when I can't quite see a way to advance from them - or for which I already know that they are wrong - is not really viable.
The only way I can see this working is in a "Give X potential approaches for this problem" kind of question. Giving exam questions for where there isn't a reasonably reachable answer derivable from the taught material is quite cruel. Keep that stuff contained to lectures so you can do a class-wide brainstorm session, that's a lot more productive and doesn't, you know, cause psychological harm.
Well, trying to answer means trying to answer: highlight an approach, try to make a dent, show work even if it doesn't lead you to the final result.
> I generally try to think about a problem before actually, you know, writing stuff down. I can obviously think a lot faster than I can write, so writing down potential approaches when I can't quite see a way to advance from them - or for which I already know that they are wrong - is not really viable.
You are thinking about too simple of a problem.
Any moderately complex problem won't fit in your head. The traditional approach is to find a path towards what you want to prove, highlight the main lemmas, and start working on them in order if you can, though working backward is fine if you find that easier.
> Giving exam questions for where there isn't a reasonably reachable answer derivable from the taught material is quite cruel.
Life isn't fed to you piece by piece nor is it easy. Advanced questions are hard. That's the point.
> Good on SL for setting expectations but I also want to comment a bit. If research is all about being comfortable with uncertainty, we have an adverse selection problem. I went through undergrad with this “comfort with uncertainty, just try things out” mentality and it hurt me.
> Specifically, it hurt my grades, which in turn has hurt my applications to grad school etc. I’ve had to do work post-undergrad to make up for it and demonstrate I am actually a good student. The mindset I had to take on is one where I seek perfect certainty that I know the answer to every problem on every problem set in full so I can score close to 100% to lock in an A. It’s to the point where I was “emotionally drained” from trying to be perfect or near perfect on everything.
> I am sure other students who make it to grad school have a similar mindset, and consequently a similar feeling of emotional fatigue from anxiety. I don’t have a better solution since the grading system is supposed to assess understanding of the material which would be important for grad school. But, it’s an adverse selection problem.
> I agree. The problem is that research requires both technical skill and ability to deal with uncertainty. And one of these is easily measured (grades), while the other is, if anything, mildly decreasing in grades.
So the problem doesn't seem to be the students but use of grading as a system of measuring the progress of students who are on a research-heavy track of study.
Grade inflation exacerbates this problem. The straight A student of today might have been the straight B student of 40 years ago. Meanwhile, the student of 40 years ago that was exceptional in some subjects but mediocre in others can today only get an A for exceptional work, but may get a B for what used to be C level work. Truly exceptional work does not pay in grades anymore, only reliability.
Now, reliability is a very important skill, but it should not be the only thing measured.
That's a good point; I nearly doubled the next highest score on the midterm in a class of mine. I was not even remotely the only student in the class to get an A.
>So the problem doesn't seem to be the students but use of grading as a system of measuring the progress of students who are on a research-heavy track of study
It can be a tricky problem. In more academic fields, I think there's a tendency to not weight grades very heavily in graduate applications, instead focusing on undergraduate research, recommendations, connections to research in your department (obviously, a research group directly wanting a student, and having funding for one, is all but a guarantee of admission), and maybe courses taken. Unlike for undergraduate applications, you'll often have few enough applications after filtering out obviously-unsuitable applicants that you can have discussions about specific individuals, and there will often be enough information about them as individuals that you can make decisions based on that, rather than grades.
But at the same time, at least in the physics programs I was involved in, I got the sense that out of concern for grades potentially affecting futures or discouraging comfort with uncertainty and exploration, once classes were at a point where everyone was probably going to be going to graduate school, many ended up being de facto pass-fail: for the most part, everyone who showed a good understanding of the material would be given an A, and everyone who didn't would get a tap on the shoulder at some point and a suggestion that the professor would do whatever was necessary to let them drop (usually far past the ordinary drop deadline). I also recall that, for example, Kip Thorne simply refused to teach classes that were not pass-fail.
Title suggests (to me) a starkly different attitude in the tweet than what’s actually in it. The quotes are verbatim, but I was expecting a dismissal of the quoted students’ disposition, rather than a professor meaningfully addressing their concerns by making the expectations more emotionally supportive.
I’m not suggesting a change to the title, it’s probably best to keep it. This was a pleasant surprise and one I think other readers might benefit from.
Yes, the first two years in STEM Ph.D. programs are typically coursework-heavy and include qualification exams as part of your courses. You must pass X quals, of which Y must be from this subset.
Because of this, if you leave the program after the first two years, they will often issue a terminal master's degree as a consolation.
Do most in the US start their PhD from a Bachelors degree?
I had a master's degree prior to starting my PhD. It is also possible to start a PhD with a bachelor's; however, you likely wouldn't be awarded a master's degree if you failed to complete it.
>1) Master's degrees are frequently unfunded. PhDs in STEM are fully funded basically as a rule.
PhDs in the humanities and social sciences are also often, or usually, fully funded as well; my sense has always been that for pretty much any academic-career-path field in the US, not being funded for a PhD is essentially an indication that the university doesn't actually want you there. Meanwhile, I think many simply don't offer Master's programs at all.
One aspect of this is that in the US, PhD students can make up a significant part of the teaching staff at the university, with class structures that are heavily built around having large lectures by a professor and PhD students who do quite a bit of the more one-on-one instruction. So departments with many general education requirement classes, like History, can actually end up having a significant amount of funding for PhD students.
Some will continue after a master's. Still, they will usually need to change universities (or at least go through the full selection process again), so if you're interested in a Ph.D., you should apply for that rather than a master's.
For that reason, most master's programs in the US are professional degrees, not research ones.
Also, almost no university advertises or promises a terminal master's degree. It is a very subjective process as well.
At many schools in the US there is an honors program that is separate from Latin honors.
Such programs generally require you to complete x honors classes. Honors classes are either regular classes with additional components or they’re just harder.
For example when I took honors biology long ago, we had an additional lab section where grad students lectured us on their research, and we had extra assignments.
So I graduated with “Honors” based on completing the honors program, and also cum laude based on my GPA.
Oh yeah, but that’s still different than an extra year. More like high school advanced placement programs. It’s the same material, but at a more rigorous level.
Their system is closer to combined bachelors and master’s program.
Honours are grading-related (in Scotland at least) as well. After three years study if you pass but not very well you'll be able to graduate with, say, a "BSc". If you have good enough grades you can do a fourth year and shoot for a "BSc (Hons)". The qualification you get after the fourth year depends on your average grades, either:
- honours of the first class (aka a "first")
- honours of the second class (two levels: upper and middle, aka a "2:1" or a "2:2")
- honours of the third class (aka a "third")
- fail, in which case you get an "ordinary" degree
So when this person is reporting that you needed "first class honours" to go directly to a PhD, they're saying you needed to sit that final fourth year and finish with an average grade of (I think) an "A".
Yes, exactly that. I managed first class honours, and the only benefit is that I'd be able to start a PhD without finishing a master's.
We had a fourth year as our engineering degree was part of an international agreement such that it would be globally recognized. The US requires four years, so we had to match that. However, if I'd studied in the USA I think I'd have had more than two elective papers over the four years.
First year is general classes (e.g. in economics that would be Micro, Macro, Econometrics, etc), 2nd year is classes more specific to your area of study.
PhD programs in the US and elsewhere are different: most PhD programs in the US are significantly longer, and could be seen as combined Master's and PhD programs, admitting students directly after their undergraduate degrees. The first year or two could be seen as more of a Master's program, and many programs give Master's degrees at some point in the process (some by request, such that the difference between having a Master's and not having one can be whether you decided to file a form or not). There can also be a certain filtering process at some point in the process, with some form of qualifying exams with three possible results of failing, being given a Master's degree but not being permitted to continue in the program, and continuing on to a PhD.
The result is that for many PhD programs, the first year or two transitions from being fully class-based (though often very small classes) to more and more research, and then finally a switch to having no classes, and only research (and teaching).
US undergrad is much more general than most of Europe (especially compared to the UK). So you get to grad education with a much wider but more shallow knowledge base.
> On exam day you are just emotionally devastated looking at the problem not knowing where to start
I honestly understand the frustration, but not the "emotionally devastated" part.
Emotionally devastated, to me, means losing someone.
When I studied CS I went to the exams knowing very well if I had studied or not and how much, and did not expect anything.
The exam was the easiest part: you either know how to solve the problems or you don't, and you walk away.
If something was out of the scope or "impossible knowledge" I would book a personal appointment with the professor to talk about it.
As for exams that were too hard to solve, there has been more than one that I passed despite honestly not having spent enough time studying.
Luck is an important factor and usually it levels out in the long run.
Also, those kinds of failures taught me to focus on the things I could solve instead of those that I couldn't, and to eventually go back and give the rest a second shot. It boosts your confidence and helps you look at the problems from a different angle.
A failed grade might mean you get kicked out of your program, leaving you with literally tens of thousands in debt and nothing to show for it. If your grade is too low you might lose a scholarship, meaning you have to either suddenly come up with a shitton of money or drop out. A failed grade might mean you can no longer pursue the graduate track you have been working towards for years.
The entire problem here is that some lecturers have a habit of creating tests for which you can't really study or practice. No matter how much time and effort you put into it, they end up being a coin toss of whether you "see" the answer or not.
I have had exams where I spent literally an hour staring at a single question without making any progress. I talked about it with my classmate afterwards, and after only a handful of words I understood exactly how to solve it. If your entire future depends on shit like that, I can totally understand it being "emotionally devastating".
Where are you reading "emotionally devastated" - both the Tweet and the title say they were described as "emotionally trying".
That said, I can easily see someone going through the stress of the exam period, maybe juggling that with part-time work to pay for bills, studying extensively to the brink of burnout and describing feeling "devastated" when they can't crack a few questions in an important exam they thought they'd studied appropriately.
> describing feeling "devastated" when they can't crack a few questions in an important exam they thought they'd studied appropriately
maybe the expectations are too high?
You are a student, it's in the name, you don't know stuff because you are learning.
Maybe I can't understand what you're talking about because I am not American.
School is called school for a reason.
Once you finish school you realize how easy school was compared to real life problems, that sometimes have no easy solution or no solution at all, no matter how hard you try.
If an exam makes you feel emotionally devastated, what will happen when life throws life/death problems at you?
Maybe it's a matter of re-balancing priorities and not putting so much pressure on yourself, life doesn't end for a failed exam.
Maybe it's not the exams that are too hard; it's the system that pressures students to finish as soon as possible, because it costs them so much that the risk of going bankrupt is too high, some form of sunk cost fallacy at play.
OTOH, in that system, who would pay premium rates for a lesser education where exams are so easy that anybody can pass them?
I started working at the age of 19, full time (6 hours a day, 5 days a week), while also studying at university and paying for my bills.
I still believe exams were the easiest part of the process; the bureaucracy is what was killing me.
And I feel I have to specify that I am in no way a genius, I was just an average student with a soft spot for CS problems, meaning I liked solving them, not that I had it particularly easy.
Ah why didn't you reply to that comment? This just looks like you're misquoting the tweet thread.
> Maybe I can't understand what you're talking about because I am not American.
Why are you assuming I'm American?
As for the rest, I wouldn't bring this up otherwise but since you went there - I started university at seventeen, worked full time in a pub from when I turned 18 and then had to work two jobs in my final year, I've also suffered close relatives dying and a best friend committing suicide. And yet I can still understand that other people could be in circumstances that really push them to describe being "emotionally devastated" without going through all or even some of that. You're showing a (performative?) lack of empathy here.
> the bureaucracy is what was killing me
"If a university bureaucracy was killing you, then what will happen when life throws life/death problems at you?" - see how silly that sounds?
> Ah why didn't you reply to that comment? This just looks like you're misquoting the tweet thread.
I didn't?
I'm sorry, I did not notice.
Can the moderators move it to the right place, please?
> Why are you assuming I'm American?
Because it's an easy guess here and it's the system the article talks about.
> And yet I can still understand that other people could be in circumstances that really push them to describe being "emotionally devastated"
I honestly don't.
Like, I hear people saying they are emotionally devastated when their soccer team loses a match, but that's not what being emotionally devastated is; that's just how some people describe it. It's called hyperbole.
> "If a university bureaucracy was killing you, then what will happen when life throws life/death problems at you?" - see how silly that sounds?
Textbook non sequitur.
I feel the same way about bureaucracy now.
But, if I could, I'd still choose bureaucracy over life/death problems every time, and I would also keep failing very hard exams, studying like crazy, instead.
These threads are often fun because of the variety of ways people were taught things. It’s hard to guess how they compare because most people only end up doing one of the ways and there isn’t much randomisation in education.
Where I studied, problem sets were purely pedagogical and didn’t contribute towards grades. It meant exams could be stressful but the questions in the homework could cover a wider range of difficulty including things that would be fun to have a crack at even if you didn’t know the answer. The struggle of trying to figure it out felt good for understanding the material. We’d review answers and solutions in small groups after attempting the homework. It was pretty normal to get prove-or-disprove or could-one-prove questions but this was mathematics not economics.
A few problems I remember in particular:
- in an introductory course a few weeks in there was a question like 'find a function R -> R that takes every value on every non-empty open interval', and there's a standard construction that we didn't generally know, so there were often fun errors or weird constructions.
- 'find a space X which deformation-retracts to the annulus and to the Möbius band', which I mostly remember because I had some horrid intuitive solution where I took a disk times a circle, embedded it in R^3, and then carefully wrote out the retracts, but I think the good solution was to just take the product space and then rely on some theorems that I no longer remember
- ‘what’s yellow and equivalent to the axiom of choice?’
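For the first problem in the list, I believe the standard construction being alluded to is Conway's base-13 function; a sketch, in case anyone wants to check their own weird construction against it. Write each real x in base 13 using the thirteen digits 0-9, A, B, C, and define

```latex
f(x) =
\begin{cases}
 +\,d_1 d_2 \cdots d_n \,.\, d_{n+1} d_{n+2} \cdots
   & \text{if the expansion of } x \text{ ends in } \mathrm{A}\, d_1 \cdots d_n \,\mathrm{C}\, d_{n+1} d_{n+2} \cdots \\
 -\,d_1 d_2 \cdots d_n \,.\, d_{n+1} d_{n+2} \cdots
   & \text{if the expansion of } x \text{ ends in } \mathrm{B}\, d_1 \cdots d_n \,\mathrm{C}\, d_{n+1} d_{n+2} \cdots \\
 0 & \text{otherwise}
\end{cases}
```

where the d_i are ordinary digits 0-9, read off as a decimal number. Because the tail of a base-13 expansion can be chosen freely without leaving any given open interval, f attains every real value on every non-empty open interval.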
Maybe I'm misinterpreting the quote, but it sounds like dunking on 2nd-year PhD students?
1. As a professor, you don't get to dictate how something emotionally impacts another human. A frustrating problem set... should in fact be frustrating, no?
2. The students' performance directly reflects the professor's performance
I do like that they are stating their expectations. Seems like the professor is growing as well. Live and learn.