> The other thing I encountered for the first time at IBM was version control (CVS, unfortunately). Looking back, I find it a bit surprising that not only did I never use version control in any of my classes, but I’d never met any other students who were using version control. My IBM internship was between undergrad and grad school, so I managed to get a B.S. degree without ever using or seeing anyone use version control.
I love this bit. This is extremely true even today. Most students at my university, and all of my university's classes, do not use or understand the benefits of a VCS. This is crazy on a different level. I hate to say it, but it is in fact 2016, and it shouldn't even be a question that git or something like it is used on every project, no matter how small.
Lots of things surprised me when I first started working as a developer after university, including version control. For me, the biggest surprises were:
- programming is a team effort
- writing (non-code) is important
- software is never done
- clever algorithms are rarely used in day-to-day work
- the complexity comes from aggregation of many simple things, not one complicated thing
What I've never seen shown well in a class is how to maintain software over time. That's something no course I know of has tackled. I've got a few ideas for how a class like that should be run, but no way to implement them.
It's also important to teach students how not to prematurely optimize, how to find the places worth optimizing, how to use abstraction, and a lot of other things that tend to get mentioned once and then glossed over.
> It's also important to teach students how not to prematurely optimize
Except if you need a certain level of performance. It would be silly to write a million lines of code to find out that you could never reach the level of performance that was actually required.
But in general you are right.
I'd like to add that perhaps the most difficult aspect of software engineering is a situation in which specs change all the time. This happens a lot during prototyping, and afterwards, when a prototype has been promoted to a product.
If you need to maintain a certain level of performance, and the system has early indications that you won't be able to reach that by continuing with the current implementation, then that's hardly "premature" optimization.
When people use that phrase, they're usually talking about the nit-picking, obfuscation-inducing stuff that'll get you a 3% increase in speed, that you'd generally do at the end of the development cycle.
I've never seen a large project completed without either a comprehensive test suite or prototyping. That's why I think premature optimization is bad. We should be testing as we go.
We should also all understand the limits of our hardware and of the abstractions we're applying to the project at hand. If we want to be considered professionals, we need to prove it by being able to reason about our systems. We should be able to tell beforehand which abstraction is appropriate for the project. That is the knowledge that must be taught.
> What I've never seen shown well in a class is how to maintain software, well, over time.
Because you need a long-living project, and such projects are (a) expensive
and (b) rare if you mean them to be thrown away. University is not predisposed
to carry those out, thus it's industry's job to teach programmers how to do
that. Universities are well-suited for teaching different things, which are
very difficult to pick up working in industry alone.
Is there really a way to teach this other than through the school of hard knocks? I really don't know; I feel like I've just gotten better at it as time goes on and experience accrues.
With the amount of time we spent on Big-O and the multiple classes involving algorithms, I thought that if I ever had to do this professionally I would spend all day cooking up all sorts of cool algorithms involving recursion and graphs. It turns out that the biggest fight I have in my day job is not that, but herding people to pump out readable code that is relatively performant.
I'd wager that most of the students you went to school with didn't teach themselves programming before they went to university and probably didn't have a computer science class in high school.
It is difficult to understand the usefulness of a VCS when the longest project you have worked on amounted to 5 lines of code and you can't produce a fizzbuzz program, let alone understand the difference between a class and an object.
If you were self-taught, or have been away from the newbies for too long, you don't always remember what it was like when everything was new. Wrapping your head around how to construct a program that does what you want it to do (or even figuring out what you want a program to do in the first place) is a difficult hurdle to jump. Every extra hurdle you throw in front of them (git, debuggers, even the compile step) makes it harder.
The earlier you require those hurdles to be jumped, the earlier you will filter out students. I believe most people can wrap their head around computational thinking and computer science and I would hate to lose them because of the tools.
You don't need a VCS to understand classes & objects. You don't need a VCS to understand Big-O, you don't need a VCS to understand Trees or Graphs or Maps or Stacks or Queues. You don't need a VCS to understand Computer Science.
And once you know what a Tree is... then I can explain to you how git works.
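To make that concrete, here's a minimal sketch (run inside any existing repository) of what that explanation boils down to: a commit is just a node that points at a tree (a snapshot of the directory) and at its parent commit(s), and history is the graph you get by following the parent pointers.

```sh
# Inspect git's data model in any existing repository.
git cat-file -p HEAD            # prints "tree <sha>", "parent <sha>", author, message
git cat-file -p 'HEAD^{tree}'   # lists the blobs/subtrees in that snapshot

# History is the graph you get by following parent pointers.
git log --oneline --graph --all
```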
Things are slowly changing, but it's still heavily dependent on the university. I graduated in 2015, and we're now offering freshmen a "tooling class", which introduces students to some popular workflows.
The curriculum is changing pretty frequently, but git is covered in-depth, along with keeping code on GitHub, pull-requests, etc. Other VCS like svn are discussed briefly. The class also goes over popular IDEs and editors (vim, emacs, sublime...)
The class isn't led by a professor, but by a group of older students that have encountered these things in internships. Maybe that's what makes the difference?
Don't you learn these kinds of things from other students? That's how I learned it. They didn't teach me directly; it was more like: I'm using git for my side project. After that, I did as well.
I remember taking a few C++ classes in high school. It would have helped if our teacher had spent even a few minutes explaining what version control is. Their instructions never went beyond "You should make sure you save your work periodically." That brilliant piece of advice immediately yielded the following problems for us:
1) Sometimes after compiling under Visual Studio we would return to look at our source code and see that it had been replaced with assembly. The teacher's response? "I guess you'll have to write it again."
2) Saving periodically to a single file is about as useful as not saving at all when you have borked your program somehow. Now you get to spend the next few hours commenting out random lines and inserting `cout<<"Here"` lines into the code to see how far along it got.
We in the class eventually decided that the "safest" way to do it was to continually save the files under new names in order to create a poor man's VCS. But that in itself was difficult to remember after more than a few days of working on it, as you'd have to remember which version you wanted to work on: assignment.cpp, Copy of assignment.cpp, assignment.bkup.cpp, assignment(Last version that worked).cpp, etc.
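For contrast, here's a minimal sketch of the same "save a new version" workflow in git, reusing the assignment.cpp name from above (the commit messages are just placeholders):

```sh
# One-time setup in the assignment directory.
git init
git add assignment.cpp
git commit -m "First version that compiles"

# ...edit, then commit again whenever something works...
git add assignment.cpp
git commit -m "Add input validation"

# Every saved version of the file, newest first:
git log --oneline -- assignment.cpp

# Borked the working copy? Throw it away and get back the last committed version:
git checkout -- assignment.cpp
# Or reach back one commit further:
git checkout HEAD~1 -- assignment.cpp
```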
I would be appalled these days if CS classes don't spend at least a single lecture discussing how to use Git.
I disagree, except maybe in a class specifically focused on real-world software engineering. Teaching somebody to use git takes an afternoon, teaching somebody to understand programming and computation takes years.
Git is not a skill, it is a habit. Anyone in computer science should have developed this habit.
This is for many reasons. Version control is not only about controlling versions.
- It's not about saving files, it's about understanding group workflows. Every text project should be in a git repo, whether it's code or not. You can see what things were written for, who wrote them, and even more (see the sketch after this list).
- It's a method to catch cheating. If someone submits one commit called "Working" or something and has no history of the code/project it's likely they are cheating.
- It's decentralized backup and an offline workflow (in git's case)
- It's a method to collect and aggregate work. Set up a GitLab/GitBucket instance for the class and have every student keep their projects up there. The teacher has a centralized, nice-looking place to find the code, and the students don't have to zip anything up or do anything extra. Their work is shared by nature.
- It exposes you to what you are going to be using every day for the rest of your life as a CS major.
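As a small illustration of the "who wrote it, and why" point above, these are the kinds of questions a repo can answer directly (src/main.c is just a placeholder path):

```sh
# Who last touched each line, and in which commit:
git blame src/main.c

# The full history of one file, with diffs and commit messages (the "why"):
git log --follow -p -- src/main.c

# A rough view of who contributed how much:
git shortlog -sn
```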
It's not a question of how useful VCS is. It's a question about what the role of universities is.
In the US/Canada college degrees are generally not directly training you for a career -- that's what internships are for. My position is that this is the right approach. I will have a lifetime to perfect my git skills, but I only had four years to learn about programming language semantics from a recognized expert in the field.
Hopefully your CS degree made you write code and submit code for evaluation. Asking you to write code and not teaching you how to use version control is wasting your time fixing problems with your code that are easily solved when you have version control. Being taught how to use version control leaves you more time to learn about programming language semantics.
Also, a university that gives CS degrees to people who cannot actually work as programmers is doing a disservice to their graduates, even if the university intends for everyone to continue with post graduate studies and go into academia. At the very least, your doctoral advisor will need you to write some code for them, and you better be able to do it.
You can show a few neat commands of basic git usage, but many people just nope out after `git add -A .` and never learn how to actually use it. It's god-awful to realize that your peers are not capable of or willing to do more.
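For what it's worth, "actually using it" doesn't have to mean much more than a slightly more deliberate loop than `git add -A .`; something like this sketch (the commit message is made up):

```sh
# See what changed before recording anything:
git status
git diff

# Stage hunk by hunk instead of sweeping everything in at once:
git add -p

# Commit with a message that explains why, not just what:
git commit -m "Fix off-by-one when the last results page is empty"

# Confirm what was actually recorded:
git log --stat -1
```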
This is exactly the kind of class I believe should be available to everyone studying software engineering (or similar). Version control is just one of many technologies and skills that most people I know didn't see until after University.
Development process and software engineering should be part of any good course. At my university we had basic, first-year courses covering some practical coding, shell programming and various other sorts of useful stuff.
I didn't appreciate why version control was so important until later, but at least I knew what it was. Now I consider a department with either no real vcs strategy, or an outdated, cumbersome one to be a huge red flag.
One place I worked used source control, kinda. They had an old system, SCCS, and they used to pull together patchsets into zip files, name the zip after the (unique) bug number it addressed, and put the zip into the VCS. There was then a Lotus Notes database that recorded what the fix was for, what version it applied to, other prerequisite fixes needed before this patch... So painful. The graduate software engineers they took on actually developed a fear and nervousness of VCS and tried to avoid it as much as possible. That was back in about 2002...
Just stick it into CS 101 (give it an afternoon), and then treat it like a prerequisite. Require that all students use VCS for projects in all other courses.
It's really not hard. Even for most of my non-CS classes I maintained a git repo just for notes.
Even if you set aside the benefit of learning and building a useful real-world skill, mandating git usage will likely save almost every student from a disaster at some point during their studies.
Some of those years of teaching might be less stressful and more productive if the student had appropriate tools and skills.
Version control can act like a safety net that lets you try things and explore with more reassurance that you won't utterly trash your progress so far.
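Concretely, that safety net can be as simple as a throwaway branch (the branch and file names here are made up):

```sh
# Try a risky idea on a branch that costs nothing to create:
git checkout -b experiment-new-parser
# ...hack away, committing as you go...

# Didn't pan out? Go back and delete the branch:
git checkout master
git branch -D experiment-new-parser

# Or just discard uncommitted damage to one file:
git checkout -- src/parser.c
```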
I'm a TA at my university. We teach python in the intro course and from what I can tell my boss is similarly displeased with this reality. We are currently trying to replace IDLE with some other IDE in our curriculum. I've pestered my boss to require PyCharm or a similar IDE but he says it is likely too complicated for beginner programmers.
While I respect their opinion, I feel that pushing people into a difficult environment immediately will better develop their confidence for future situations like this. It's all too often that I encounter fellow students who are "afraid" to attempt to use tools, algorithms, language features, etc. because they haven't been trained in their usage.
As someone who taught myself to program that's pure crazy talk.
This is so true and so awful. I ran into the same thing. I had used a debugger and couldn't believe people had resistance to it when it was suggested by our TA. They were like, "Nah, I'll just 'printf' everything." And then the debugger they tried to get us to use was the god-awful command line version of gdb. I seriously think that turned several people off from ever using a debugger again because it was so bad.
Sadly, printf debugging still captures something about how humans investigate, something debuggers don't handle well at first, until you're used to them and know how to map your thinking onto them. It's blurry in my mind, but you want to handle sets of breakpoints and custom structure IO/formatting for a debugging session, and debuggers don't offer a way to smooth this out (I'm no expert though, grain of salt required), so printf(...) is still a large reflex.
printfs are usually good. However, when you deal with complex realtime multithreaded processes, sometimes a high-priority thread just pre-empts the printf, so you need to use some kind of flush statement... and this delays overall operations. In such cases debuggers are good, 'to some extent'. I still haven't found the best way to debug hard realtime multithreaded applications.
Those are exactly the places I find myself relying on printfs. In real-time, multithreaded code, you can't block on a thread and inspect things for a couple minutes and expect it to resume working afterwards.
In 1985 I bought PVCS for my personal projects. It was the second software package I paid for; BRIEF (editor) was the first. One of my CS profs bought a copy of PVCS shortly after I showed him my workflow, and everyone else thought I was crazy for spending money on something so useless.
Same thing with me. I graduated without ever seeing version control. Then again, I recall a professor responding to "How do you debug?" with "I don't need to debug. I mathematically prove my code first, and then just write it once."
It was certainly true when I was at university. I encountered CVS through a friend (which was sadly the best free VCS at the time), but pretty much nobody else used VCS. A few students worked out themselves that they needed something like that, and ended up with the chaotic system of renaming their files with a version number at the end.
It's sad that git (or equivalents) doesn't feature more prominently these days. It's as important a tool as compilers, debuggers and editors (although I don't remember being taught how to use those properly, either).
> It's sad that git (or equivalents) doesn't feature more prominently these days.
I think it does. I was taught version control in University (CVS era but we used SourceUnsafe) but only for one class. These days all new grads I've interviewed have Github profiles and are required to use Github for all projects (especially group projects).
My university has several required CS classes where students must use some VCS (an SVN server is provided), as part of their grade. They're nominally software engineering classes, but really they're about finishing a project in a group setting, usually with a client that is not part of the class. But my school is also less science-oriented than many.
I disagree. VCS use in an early stage of a project can lead to less elegant and coherent systems, because you treat everything as a series of patches from day 1.
The greatest works of humanity were written without a VCS (novels, operas). Using a VCS is only mandatory for multi-person projects.
> because you treat everything as a series of patches from day 1.
That's an option, but not the only option.
For big shared mature projects, I treat each commit (to shared branches at least) as a presentation, where I try to describe exactly what changed, how, and why, and to preemptively answer any questions I suspect will come up in the code review, clean up unnecessary or unrelated changes, cross-link to any task-tracking items and bugs, the code review itself, any docs I might've updated in tandem... The description will often be longer than the actual code changes. With a quick summary to start, so you won't typically need to read the rest, of course. After the review and submit, I'll ping QA with some stuff I think might be worth testing once the build goes green. My commit messages, at least, tend to be better than any of my coworkers'.
For solo seed projects, on the other hand, I take 5 seconds to try and describe what it was I did in the past X minutes. I try to remember to commit often enough that this doesn't devolve into "Changed stuff" because I forgot to commit for a day and touched 20 different things, but it happens - making those commits worse than any my coworkers generate, but still better than no VCS. They come in handy for tracking down bugs and such.
> The greatest works of humanity were written without a VCS (novels, operas). Using a VCS is only mandatory for multi-person projects.
Most of these had drafts and earlier revisions (or versions.) I agree that VCS is not mandatory mandatory, but the entire reason I switched to git (from SVN) was that "git init" is super low friction for getting new projects under VCS. That it handles branches way better and enables distributed workflows are mere side benefits.
> The greatest works of humanity were written without a VCS (novels, operas). Using a VCS is only mandatory for multi-person projects.
Code is different. There are many times that I have been developing and I do something to my code that causes mystery behavior, and I don't catch it until perhaps days later. Being able to walk backward in time and see when the problem first occurred is invaluable.
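`git bisect` is built for exactly that "walk backward in time" situation; a sketch (the tag and script names are placeholders):

```sh
# Binary-search history for the commit that introduced the mystery behavior.
git bisect start
git bisect bad            # the current commit is broken
git bisect good v1.4      # this older tag was known to work

# git checks out a commit roughly halfway between; build and test it, then say:
git bisect good           # or: git bisect bad
# ...repeat until git names the first bad commit, then clean up:
git bisect reset

# Or let a test script drive the whole search:
git bisect run ./run-tests.sh
```

It works best when history is made of small, frequent commits, which is another argument for committing often.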
I used to really like using Visual SourceSafe. It was so easy to set up and integrated directly with Visual Studio that there was no excuse not to have a VCS. Too bad it had a reputation for garbling itself (something I never had happen as a single dev).
This is true, but being able to ask the computer what changed is much more powerful than trying to remember what you did last night and how that's different from the version you had yesterday morning.
Being able to write code that does something in the first place is the most important.
Having to learn a VCS at the same time you are learning the difference between a parameter and an argument will cause more harm than good for a novice.
On the other hand, being free to experiment, screw up and revert back quickly is great for someone just learning.
I wouldn't recommend it to someone at the hello world level, but would not long after. The industry is plagued by people that don't know how to use source control.
I disagree with your disagreement. I always use version control on everything I work on. I don't think I've _ever_ rolled back to a previous version, nor have I ever even looked at the history of something that I worked on alone. However, knowing that I _can_ has made me a lot less cautious about changing something that's "already working", and that's a good thing.
VCS is an editing tool (in the sense of literary works). I can safely delete entire files, change the code base in crazy ways, etc -- all the things you do in an early stage project -- without worrying I might lose something.
I don't even understand "treat everything as a series of patches" as that's not really a VCS workflow unless you are Linus.
I'd say then - don't consciously worry about it. But it'd be great to set something up that auto-commits your code every X minutes or X character changes.
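A crude sketch of that auto-commit idea, assuming you just leave it running in a terminal next to your editor (the path and interval are arbitrary):

```sh
# Snapshot the working tree every 10 minutes.
cd ~/projects/scratch
while true; do
  git add -A
  # If nothing changed, the commit simply fails and the loop moves on.
  git commit -m "autosave $(date -u +%Y-%m-%dT%H:%M:%SZ)" || true
  sleep 600
done
```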
I believe the greatest works humanity has ever written are the massive software suites that enable our amazing technological civilization, so you are wrong.
What you're saying is like being surprised that there were no smartphones 15 years ago. For me it is also unbelievable to work without a source control system, but just 10-15 years ago nobody but large companies cared about it. Times change and fly...
I once chatted with a professor whose class I had already taken, and he told me that for the current iteration of the class they were using git as a submission system for programming assignments. Apparently more than half of the students failed the first assignment because they couldn't figure out git basics.
Personally, I think it would be a more valuable learning experience for the students if git basics were intentionally not taught in a class but still required.
As a professional developer, you're going to have to figure out how to do whatever it is you don't already know how to do all the time. For those that haven't already cultivated this skill, university should be a place that pushes you to learn that lesson.
"I was hesitant about taking the job because the interview was the easiest interview I had at any company, which made me wonder if they had a low hiring bar, and therefore relatively weak engineers. It turns out that, on average, that’s the best group of people I’ve ever worked with."
Maybe (quite probably) they just weren't producing a ton of false negatives by filtering out good candidates.
The best people I've ever hired all looked the same:
1. Had a Github with actual code in it.
2. The code was clean, lightly commented, and brief without being terse; regardless of the paradigm.
3. They breezed through the interview because the questions tested their ability to actually build the thing they said they could build. No silly games. A light question that touches on data structures and big-O time, but nothing you'd need to crack open your old college books for.
I feel bad for people starting out, but senior / intermediate devs are just such a better deal. You pay double and you get about four or five times the work.
In my experience, this holds true as well. I would say, though, that you have to keep an open mind when interviewing people just starting out or at that junior level, because there are some really talented, yet raw, developers that you can miss out on otherwise. This is of course assuming you have the bandwidth to help mentor these developers and/or you are okay with less output for the first 6-9 months while they begin to grok what is needed for the job.
Overall, I've been bummed lately because I've interviewed CS majors who have no (or a poor) personal website, only basic coursework code on GitHub, and overall don't seem to have the passion that some non-CS majors do have. I don't know if it's college that isn't preparing them well enough, but having a personal project or two on GitHub alongside a decent personal website goes a long way toward at least indicating that you actually enjoy programming.
Or perhaps it's because not everyone wants to be a web dev.
Perhaps all of their code at previous gigs has been proprietary and they don't relish the prospect of doing a full days work and then coming home to spend their evenings maintaining a GitHub profile in case a future employer has it as an unspoken requirement for the position.
Indeed. We can't have portfolios of our actual work, because it's proprietary. So we end up with a pseudo-requirement for sample projects to be done on our own time. Which have to be kept (reasonably) continuously updated. Fortunately this has only cost me one interview so far and not made it genuinely hard to find a job while having a job.
I suppose I could try citing my 22k karma on electronics.stackexchange, but I'm not sure that has the same level of name recognition.
I would mention it. If anything it at least shows a sense of community and as an interviewer I would make the assumption that you enjoy answering questions so would likely make a helpful, pleasant teammate. I'd even ask a few questions around your approach to answering questions etc.
You can include your membership in various technical communities in your CV/resume. People list ACM/IEEE memberships, participation in mentorship programs, etc. No reason not to include online communities (short a desire for privacy, once you give out your handle it's one google search away to everything else you have online).
I've also encountered employers that prohibit employees from contributing to outside projects, partly due to the possibility that you might re-use some code between work and personal projects (even if unintentionally).
Yes, for this reason and many others, filtering on "has a Github" is potentially problematic. I know tons of great developers who don't use Github, let alone have a carefully crafted "profile".
Sorry, I should've specified: these were explicitly for front-end developers, not full-stack or back-end. I would hardly expect a back-end developer to have a decent website, as that's not their forte. However, it's great if you can pass a front-end coding interview, but if you don't have any personal projects you can speak about, even something as small as a simple jQuery tic-tac-toe game or anything of the sort, it's harder to know whether you just studied the questions or actually understand the design patterns.
Granted I'm not going to fail someone on an interview who appears competent, can explain Frontend technologies well and answers questions correctly. It's just more comforting to have both. As we all know, theory doesn't equal real life in many situations.
Yeah, I've learned I suck at managing / growing junior people. I want to be good at it, but I'm not patient enough and I don't like sitting on people. I'm good at the teaching part, I've taught two girlfriends how to code to the point where they are both earning serious money coding. And I'm good at the managing part, I could bring projects to completion that I was proud of and usually on time. But I can't seem to do these two things at the same time.
And really, I have zero financial motive to learn how to do so because while you might get the rare wiz kid, usually you don't.
> I feel bad for people starting out, but senior / intermediate devs are just such a better deal. You pay double and you get about four or five times the work.
That's a reason to feel bad for the senior devs :-). The people starting out will get the job because they look cheap to managers/shops that want to throw mythical man months at a problem, so they get a foot in that way.
I've only met a couple people from the old Centaur group, but they were both excellent engineers. I would love to learn how they actually hire. It's possible that the author found it the easiest at least partly because it was closest to his level of expertise; he was already more comfortable at the software/hardware border than most, judging by the experiences listed prior to that point.
Amazing little essay. I love his intellectual honesty, and his lucid view of his career path. While he claims to have no takeway, I think one obvious one is that, barring disaster, you'll eventually end up where you belong no matter what path you take. You'll leave the bad jobs (maybe later than you should) and you'll get positive reinforcement for things you do well. Every time I've had "career envy", I've tried the alternate path only to discover why I'm really where I currently belong. But I'm glad for those alternate paths, as they gave me that reinforcement I needed.
Crazy also how similar our path to learning programming was. Looking back, all those hours of fiddling with BASIC in high school, or with Pascal at school really didn't teach us all that much. Like him, my internships mostly taught me meta-lessons rather than actually valuable skills. Like him, in my education I followed the path of least resistance, or rather the path of "most options left to explore since I have no clue what I want to do", and I feel I'm lucky I ended up where I did. Like him, I fell in love with math way late when I finally saw it wasn't about rote application of arbitrary techniques (abstract algebra is what opened my eyes) like we're taught in school.
This sentence, buried deep in the essay, is brilliant: "a common failure mode is that you’re given work that’s a bad fit, and then maybe you don’t do a great job because the work is a bad fit. If you ask for something that’s a better fit, that’s refused (why should you be rewarded with doing something you want when you’re not doing good work, instead you should be punished by having to do more of this thing you don’t like), which causes a spiral that ends in the person leaving or getting fired."
This is a wonderful essay that 1) shows its messy work, 2) outlines the deep reward of metacognition.
I have a very smart friend who is actually a great writer. But if she doesn't nail the first draft, she gives up, saying, "I suck at writing." Writing is rewriting. And similarly, good learning is having a grasp of the engine at your disposal through metacognition.
> In retrospect, I should have taken the intro classes, but I didn’t, which left me with huge holes in my knowledge that I didn’t really fill in for nearly a decade
After joining industry immediately after high school, I dismissed the value of formal education and selected a major with little consideration (I was unaware that CS even existed). I've recently joined a team where all my colleagues, many of whom received their degrees from prestigious institutions, majored in CS, and imposter syndrome haunts me. Many, though, are surprised.
Although I have my B.S. in information systems, I'm debating whether to return to academia -- obtain a second B.S. in CS or a stretch for a masters in CS (self study the fundamentals, which I'm currently doing, prior to starting the program) -- or continue the self study route to fill the missing gaps in my knowledge.
You’ll get over it. However, you will forever be cursed with having to keep your wiz-kid credentials up to date as you will never have that degree to help wedge the door open for your next opportunity. Ultimately, if programming is what you do (yes, I know CS is not just about coding) it doesn’t really matter, because the rubber meets the road at some point, and either you can code, or you can’t.
Absolutely, but when you enter that room for an interview, there may be a bias already built up against you. Somehow you made it through the door, but now you have to overcome the skeptics, maybe a team member who refuses to believe anyone without a formal degree could possibly have the proper foundation for “real work”. You can’t be just good, you have to be outstanding. Even if you shine in the interview, you may still have to make it through an upper manager who did not bother to interview you. Twice I’ve been rejected because the upper management didn’t want anything to do with me, even though the technical team was excited. I was never told exactly why, but I suspected the lack of formal education (what else?). In one case I was offered the position about a month later but already had another job.
I'm also an EE starting my PhD doing research in hardware/software co-design and circuit design. I'm also heavily interested in CS in general, so I'm planning to take as many CS courses as possible.
One thing I'm trying to do is keep myself current when it comes to software development. I think it's good to have a backup skill that I can fall back on in case my PhD doesn't work out.
It's good to know that someone like Dan Luu went through ups and downs before getting to where he is now.
Very interesting read. Especially for someone like me who is on the verge of completing his undergraduate education. I have always feared that life might be too boring in the industry. This provides some fresh perspective. Also thank you for pointing out the downside of reading 50 Manning books. I wish wherever I end up working I manage to carve out time to continue reading and educating myself like you did.
I was reading the "bad stuff" section and I stumbled upon what he said about the Joel Test:
> The Joel Test is now considered to be obsolete because it awards points for things like "Do you have testers?" and "Do you fix bugs before writing new code?", which aren't considered best practices by most devs today.
Can anyone explain why having testers isn't considered a best practice by most devs today?
> Devs should be testing their own stuff too, of course.
Only to check that it's actually working before sending it off to the testers. Unit tests can be a good way to do this.
Make no mistake though: a good tester is worth their weight in gold because they will find many "real life" edge cases the developer would not have thought about, as well as UX issues if the feature being tested is user-facing.
Not sure, but I believe that part of XP is having devs do more/all of their own testing and getting rid of the developer vs. tester roles (note that this doesn't necessarily mean getting rid of QA), and this aspect of XP is increasingly common.
I probably would be disinclined to hire a developer who wanted a different person to write/run their tests, unless it was for a very specialized role and said developer had exceptional talents.
> to hire a developer who wanted a different person to write/run their tests
That's not the role of the tester. The developer should be writing and running their own unit tests. The Tester should be running through user scenarios, use cases, and basically be pretending to be a user of the software to make sure that everything works and makes sense in a workflow (not just in their unit components)
So, it sounds like the only real change is that we don't call QA people "testers" anymore and have a fancier name for them? Kind of like it's gauche to call yourself a programmer nowadays and we're software development engineers instead?
My programming story: I started with BASIC on an old pocket computer. I typed in games from a manual without really understanding how the code worked. In eighth grade, I programmed a bit in QBASIC in an informal computer class. Then, for years, I didn't program much except for a formula on my TI-82. I never learned the TI-82 programming language well enough to do anything more complex.
Towards the end of high school, I got Internet access and learned HTML to build a web page. I took AP Computer Science (in C++) in my senior year and scored 5 on the exam. I majored in computer science at a top math/science university that isn't as well-known for CS as some other schools. I turned down some of the higher ranked CS schools, which was probably a mistake, especially as I wasn't even that interested in the other sciences. I'm more into languages, and I was interested in CS because of its creative and entrepreneurial potential.
Fast forward to now: I started programming early and majored in CS, and I can write classes, objects, and functions, but I still don't really "get" programming. My algorithms course in college was all math and proofs that I didn't really understand. I've since gone through algorithm MOOCs and implemented some algorithms, but I still can't really apply them. My work involves some programming, but more of it is Linux administration. (I also don't really get how to deal with hardware because of my problems with anything physical.)
> Tavish Armstrong has a great document where he describes
> how and when he learned the programming skills he has. I
> like this idea because I've found that the paths that
> people take to get into programming are much more varied
> than stereotypes give credit for, and I think it's useful
> to see that there are many possible paths into programming.
Okay. Yes that's useful. Let's see what path you took...
> Luckily, the internet was relatively young [...]
There ya go. Look no further!
My own story is a little different... The author had local peers; I did not. The author made no mention of an old hand-me-down C64, nor of Tandy 1000s in his kindergarten classroom...
But, getting online in the mid-nineties? Check. That's huge.
TCP/IP was explained to me by some random gamer in a chatroom, long before I ever thought to "google" it.
(Speaking of paths... Lycos --> Altavista --> Still Altavista for a long time as I resisted the change to Google --> Google --> DDG)
One way to learn to program is the urge to teach the machine some principle you'd find nice to see realised by a machine. Some things are opposed to entropy; some things are there to create order (always with a net gain in entropy, but nonetheless). In that sense, machines and humans seem to be on the same side of nature. Even the whole ecosystem of the planet is a miraculous emergence of high-level structure.
So you learn to program, if you want to program some. Print that on the next calendar.
TL;DR
A naturally talented, hardworking individual downplays everything he has ever done and tries to chalk it all up to chance. I found this all very abrasive.
Also, don't forget: The non-talented, non-hardworking individuals who luck into succeeding tend to never shut up about how it was all about talent and hard work.
Well, you sort of have to, here on the internet. HN is one of the better forums, but even here, I suspect you'd be (more politely) torn to pieces if there was more than a hint of ego in your writing - especially if you're writing about your own life.
I'm confused by this comment, where in the article did you get the impression he was naturally talented and hardworking? For most of his academic career in high school and undergraduate, it seems like he had a difficult time learning programming, and doing well academically.
How did you not arrive at that conclusion? I guess it's normal to complete a year-long calculus sequence in less than half that time? Sure, he says numerous times that he didn't find school engaging and struggled in some ways. I for one have heard of this before, where someone with a strong aptitude struggles with conventional education. Without something special, be it aptitude or a hardworking nature, how do you suppose he got into graduate school? Luck, again? Also, did you notice he was only an undergrad for 3 years and apparently double majored?
True enough. Given the length of the post and how prevalent the theme of "luck" appears I still find it obnoxious. No doubt there is so much to learn in and around computers that everyone should feel humbled, but this guy should also recognize the mere mortals around him struggling even more so.
Your mileage may vary, but as a mere mortal I personally find it much more reassuring when the role of luck is recognized.
Too many narratives are akin to:
"Well, I worked very hard every day without stop and tho I struggled at first - with how hard I was working - eventually the fruits of my labour yielded this unbroken string of successes thereby reaping my just deserts"
My experience in school seems similar except I didn't get nice math courses until very late on and I was miserably bad at the memorization-based Calculus work (and I still am, but I can do abstract math very well). By then I was pretty disillusioned by school in general.
His luck probably helped quite a lot. Nobody really knows what they are good at or what they will enjoy until they do it. Even if it's 'harder' or 'easier'. Having a good mentor is difficult as well; they have to know their stuff, want to teach and mesh well with you.
> no one’s going to stop you from spending time reading at work or spending time learning
What? You've lived a truly blessed life, Dan Luu. I've observed the opposite, pretty consistently. I've been working as a programmer for 25 years and I've found, across nine separate employers (and lost-track-of-how-many different supervisors) that spending any appreciable time reading (even a book about Angular when you're supposed to be learning Angular) will become a management issue. Everywhere I've ever worked has expected learning to be on your own time. Don't believe me? Put "read three chapters of a book" on a status report and see how many levels of management show up at your desk to micromanage your time.
Yep, that's pretty much how I spin it. But make no mistake, I have to "spin" the fact that I'm reading about programming while working as a programmer.
Weird. I have found this to basically never be the case. Even some of the worst managers I've had have supported learning on the clock, through reading materials, tutorials, or otherwise. I've even had some jobs where learning time was a regularly slated part of my work week, and in some cases was allowed to take classes which cut into work a bit. My current employer has a library of technical books and educational material that any of us are free to rent and learn as we go.
Sounds nice. I brought up the concern that learning on the job was frowned upon and was told "that is just keeping yourself marketable, you do that on your own time".
I have a love hate relationship with the place, it's rarely black and white. I get to work on really cool projects and stay in an incredibly low cost of living area.... but the internet access is awful and I don't get much time to improve myself.
A standing directive that I've got is "get your work done; other than that, I don't care how you spend your time". I regularly put something like "researched current code-signing best practice" into my status reports (and have for the past 8 years, when appropriate).
My time at home is mine. If I'm reading a technology book, it's usually something I'm curious about and don't have a direct use for in my professional life.
I'll consider myself very lucky then. At my firm we're strongly encouraged to take 4 hours per week to dedicate to learning something new or bettering our existing skill set. You simply let your team lead know your current objectives for the month and we're given access to pluralsight/codeschool etc. I honestly didn't know how much fun work could be while still being work until I started here.
Does the same hold true if you put replace "read three..." with "research"? Perhaps it's specific to my industry (data), but I spend a considerable amount of time reading about new techniques and approaches to make our codestack more efficient.
That's not my experience at least in a "DevOps" role. In fact a large portion of our time is spent reading/learning documentation, and random websites. Getting punished for learning on the job, would immediately lead to me looking for a new job.
I'm quite surprised by this since most programming jobs will involve solving problems you don't know how to solve on a regular basis. Reading, whether it's books, blogs, Stack Overflow are ways to help you solve problems.
If you are a JavaScript programmer and spend the work day reading about Elixir; that is a problem. It likely doesn't apply to work so that should be on your own time.
> since most programming jobs will involve solving problems you don't know how to solve on a regular basis
Yes, you'd think that would be blindingly, painfully obvious to anybody with the cognitive ability to tie their own shoe laces. And it probably is, to everybody except the MBA "efficiency experts" that are taking over my profession.
Places I've worked have had their own mini-libraries. I have some kind of documentation loaded up on at least one of my monitors at pretty much all times.
I don't spend significant amounts of time doing nothing but research however. The problem might not be "read three chapters of a book" on a status report, it might be having only that on a status report.
The page is very beautiful in its versatility. No fancy JS hacks pinning down the page's styling: you can use Firefox's reader view, for instance. I keep my browser windows around 1000x1000 pixels because the vast majority of sites would otherwise give me overlong lines of text.
And that's fine, as long as they scale properly and allow narrow browser views. The only criminals are the ones that force wide paragraphs... sometimes to the point that I have to vertically scroll a paragraph.
I completely agree. The advantage of sites like this is that just 3 lines of code are enough to tweak it to my preferences. I doubt it would be so simple for sites with mountains of existing CSS.
It's also worth noting that Firefox's "reader" mode works perfectly with this site.
I'm constantly annoyed by non full-width sites. This article is easy to adapt to a smaller window, but you can never scale up artificially constrained sites
There really should be a command for this in the browser's View menu.
(I'm sure there's a million add-ons for this... But these days I try hard to avoid customizing my setup, because so often I have to use another computer anyway. Learning to live with defaults makes life so much easier in the long run.)
Nice, I think I'll have to try Firefox again after a long break. (The prospect feels not unlike going out with your high school girlfriend after you've both been married and divorced separately.)
Understood, I upvoted your message. The different preferences between sites -- and tabs -- is the biggest nuisance for me too, but not too big a nuisance.
Out of curiosity, do you use OSX? I am mainly a Windows user, which lends itself very well to full-sized windows. But whenever I use Mac, I find myself using smaller windows. Might also be related to resolution/retina displays.
Linux, with a tiling window manager. :-) You could say that it favors strongly both cases: running windows full-screen and also running a 70% window with say two 15% windows.
Windows 10, which I use in work, has this annoying habit that if I maximize browser for a webapp that requires (benefits from) more screen estate, it doesn't always restore to the earlier size correctly. But it's something I can live with.
I have the opposite, or at least on my Macbook I have everything full screen. Every program I run I use full screen, and therefore has its own virtual desktop. It gives me a lot more screen real estate without the menu bar on the top and the dock on the bottom.
I have found it generally very uncomfortable, because a vast majority of content on the web doesn't scale well vertically. If you keep a wide window, you get way too long paragraphs that are hard to read as your eyes jump between the lines and whatnot. I've seen a couple of excellent designs that flow into new columns as content comes, with only vertical scrolling. The columns are like 40-60 chars in width, as they should be. It's nice, and accommodates today's wide screens super well. But a majority of sites, HN included, are way too wide for their content if I browsed them in a 1920x1080 viewport.
HN specifically: if I have serious reading to do in the comment section, I might narrow my browser down to 600px for even better readability.
Fair enough. I'm accustomed to reading 200 character lines, so I guess I don't notice as much. Making the window narrower to force the lines to wrap at 40 or 60 chars feels constricting/claustrophobic to me, and I'm not a big fan of the sites that display text in a thin column, about 1/3 the width of the screen.
There were a few years that I configured my work machine to have dual monitors in portrait mode. That helped my feeling of constriction, while giving shorter lines of text. As a bonus, it matched pretty closely to the 1280 width (at 1200 pixels), and often allowed more of the text to be shown at once.
> We tried BASIC, and could write some simple loops, use conditionals, and print to the screen, but never figured out how to do anything fun or useful.
Luckily my BASIC books had most of their examples starting with SCREEN 1. Drawing images programmatically happened to be fun and useful, I learned the hard way by retyping examples and then somehow began modifying those.
My own bummer was the Windows epoch. I could do VB but never otherwise grasped Windows programming, because any program would have so much IDE-generated boilerplate that was totally meaningless to me, and I could not work with that.
I could only resume when I learned proper WinAPI later in a university course, but then I switched to Linux, which is the best IDE there is.