Programming is Terrible (programmingisterrible.com)
170 points by benwerd on Jan 2, 2013 | 68 comments



This guy gets it wrong in his last post on reliability (but the other posts seem reasonable).

Reliability is hard because we keep moving the goalposts. I can write a pretty reliable program to compute sin/cos tables, sort a phonebook, or do some basic projectile trajectories. But those aren't even table stakes anymore. Today a first-year undergrad might be expected to write a program that, in 1970, the most advanced programmer in the world couldn't have built with a large team and millions in funding.

I hate the analogy to other fields of engineering (and this author doesn't fall into that trap, but plenty do). Bridges haven't changed in the past 10 years the way programs have (in terms of functionality or complexity).

And lastly, this is, IMO, a corollary for why ageism is so persistent in our field. Experience means very little. Why? Because I can capture 70% of your experience in your code -- in your library -- and build directly on top of it. I don't actually need you. There are few other fields where as much of your experience can be captured in something so reusable (despite our constant protests that code isn't reusable -- it's more reusable than any output from my plumber).

Programming is terrible because we obsolete ourselves by moving technology out of our own grasp. The fact that newer programmers don't understand the foundation they build on is irrelevant, since few need to look behind the curtain. Unfortunately, I think it is simply the nature of the beast. It's why we love it and do it -- and probably why we'll all eventually stop (or at least move into management).


"The fact that newer programmers don't understand the foundation they build on is irrelevant, since few need to look behind the current."

Every time I think along these lines something comes along that pushes us closer to the metal again.

10 years ago I thought that the usefulness of my knowledge of assembly language had reached its end. After all, what use was it when I was writing enterprise applications in Java! And then our server apps started crashing (the whole JVM would core dump). So there I was, looking at a memory dump, x86 opcode table in hand as I tracked down what turned out to be a null pointer access in a native library from one of our vendors.

And then came my stint writing software for wireless credit card/debit card systems. On a Z80. In 2001.

Then came GBA development, counting every megabyte, worrying about image sizes and trying to make things look decent in a reduced palette and the eternally-too-small RAM.

And now, here I am in 2013, counting the megabytes once again, fighting against eternally-too-small RAM (especially on iPad 1), trying to keep my iOS apps small enough, using the same old smoke & mirrors and dirty tricks to give the illusion that the app is doing what it actually isn't. I'm intercepting mach kernel messages. I'm poking around memory structures. And, of course, I once again find myself disassembling code from memory dumps, opcode table in hand. In 2013.

I'm tempted to say that this will be the last wave of constrained systems, but I'd probably be wrong.


There are many classes of products that require what I like to call "fractal attention to detail." These include most consumer goods (which are usually cost and performance constrained), man-rated stuff that has to work (aerospace, medical), and products with stringent security requirements (game consoles, smart cards).

By fractal I mean that you can have good abstractions, but you can't afford black boxes, and design has to happen at a whole-system level or you're leaving something on the table. It's surprising how often decisions at very low levels affect the high level user experience.

Any time cost, reliability or security are on the line, there will be very few pieces of your product that are mysterious to you.


Why can't you have black box abstractions in things that "have to work"??

And even "has to work" can be relative, as if you can prove that something will work for a p value small enough to guarantee risk of failure is small enough in the next 100 years for a device/system that will at most stay in use for 10 it good enough... And if you're not constrained by power use or computing power, you can also use redundant subsystems: I imagine something like a satellite, receiving high radiation that consistently corrupts cache or RAM but runt 5 identical OSs on five identical subsystems and on every "high stakes" decision it does a "voting session", choosing the result of a function with the most out of 5 votes and resetting the state of the disagreeing systems with the winning ones, and a similar type of reliabilty by duplication for medical devices or aircraft systems... And going further, you can always replace an expensive reliable satellite with a swarm of cheaper ones that can fail. I think we'll choose the "swarm way" for more and more things and this will relax reliability requirements for individual components and at the whole swarm level we'll have "good enough" statistical reliability instead.

And any "swarm" can conceptually be seen as a black box, once you accept its probabilistic nature and the low but measurable likelihood of it being wrong, and maybe using a "swarm of swarms" if one swarm's p value of being true is not good enough for your application.

The "fractal attention to detail" seems needed when you need both reliability and you have serious power or computing resources constraints. And since power usage per flops is always decreasing, the number of such "dual constraint" cases should get smaller...

EDIT: fixed typos and added the second-to-last paragraph to clarify what I meant


There are things you can't swarm. Security, for instance; a breach is a breach. Put the hardware you're trying to secure in the hands of the consumer -- the nominal attacker -- and this becomes Really Hard.

Swarms work great for server farms, or bespoke projects where cost is not an issue (e.g., that spacecraft, where you can afford multiple implementations of the system).

I can't see anyone throwing a swarm at a consumer product. Why would you waste the money? The pressure is to go to the edge and make it more reliable, and cheaper.


And when all 5 crash because of an identical null pointer? In a library you didn't write? That is why no black box.


Black box abstractions that allocate memory are a problem. Presently, if you want to prove you will not run out of memory, you do it by not allocating memory or using a special purpose allocator.
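A minimal sketch of the special-purpose-allocator idea (a hypothetical FixedPool, just for illustration): do all the allocation up front, then the steady-state path only recycles buffers it already owns, so running out becomes a bounded, testable condition.

  class FixedPool:
      def __init__(self, count, size):
          # the only place allocation happens -- if it fails, it fails at startup
          self._free = [bytearray(size) for _ in range(count)]

      def acquire(self):
          if not self._free:
              raise RuntimeError("pool exhausted")  # known, bounded failure mode
          return self._free.pop()

      def release(self, buf):
          self._free.append(buf)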


Yeah, 'cause instead of buying a reliable Japanese car, you bought an unreliable swarm of American scooters. </sarcasm>

This probabilistic stuff is academic nonsense


Point taken, the "p value" way of talking is confusing when talking to people doing real work rather than reading articles; you're right. (My mind was actually deep in some medical research articles, so it got "stuck" on that wording...)

But it's just about this: the probability of scooter A and scooter B failing at the same time is their individual probabilities of failure multiplied, so for a "swarm" the probability of everything failing at once decreases exponentially with the swarm size -- e.g. the probability of 10 scooters, each with a 50% probability of failure, all failing at the same time is less than 0.1%. Thinking in probabilities is what you end up doing even if you do something like devops and try to do a good job of it while stretching resources thin :)
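The arithmetic, spelled out (the whole argument assumes the failures are independent):

  p_fail_one = 0.5               # each scooter fails with probability 50%
  p_fail_all = p_fail_one ** 10  # independent failures multiply
  print(p_fail_all)              # 0.0009765625, i.e. just under 0.1%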


> e.g. the probability of 10 scooters, each with a 50% probability of failure, all failing at the same time is less than 0.1%

And then you do billions of trials per second...


> Experience means very little

Perhaps. But I believe that good engineers have a number of cool failures in their past.

> Because I can capture 70% of your experience in your code -- in your library -- and build directly on top of it

You can capture /some/ of someone's experience this way. But don't confuse the ability to _use_ the novel Moby Dick with the ability to _write_ Moby Dick. These are two very different things.

Let's talk about this hypothetical library for a minute.

1. It's a finished product. You didn't experience the design process. You didn't see the things that were tried and which failed. All you see is the shiny object. Could you make another like it?

2. Since you missed out on the design iteration, you're missing out on the stuff that was left out for the next version. You weren't there for the discussions about alternatives (maybe better ones).

3. If you treat the library as a black box, you're going to be at the author's mercy for bug fixes, making improvements, or doing integration in environments where it doesn't exactly fit.

4. Ultimately, abstractions are lies. The best ones are white lies, the worst ones paper over or ignore fundamental problems. (My favorite example is putting a database on top of a Unix file system: How do you know when data is stable on disk? What /is/ a commit, anyway? There's a sketch of this below.) How much of that library do you really believe?
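A minimal sketch of that file-system point (hypothetical append-only log, Python just for illustration): write() alone only hands bytes to the page cache, so "stable on disk" takes an explicit flush, and even that comes with caveats.

  import os

  def commit_append(path, record):
      with open(path, "ab") as f:
          f.write(record)        # only reaches the OS page cache
          f.flush()              # drain the userspace buffer
          os.fsync(f.fileno())   # ask the kernel to push it to the device
      # Even this isn't the whole story: drive write caches, filesystem
      # ordering, and fsync'ing the directory for a newly created file
      # are all part of what a "commit" really means.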

I've seen some really interesting product failures resulting from "unthinking isolation" -- perhaps a better term would be "false abstraction" -- of what was happening in the system at the hardware level. We're talking bricks and user data loss because someone thought that transactions weren't important. Performance is another area where it's hard to abstract (and some platforms, such as video game consoles, are all about performance).

Good, reliable systems should be designed with all layers in mind. That's where experience starts to matter.


To add to the "layers" aspect, the young hotshot coders of the world are also some of the worst at making marketable products, because they've become used to making an 80% prototype and then moving on to the next thing, which means that they haven't experienced this yet:

http://www.gocomics.com/calvinandhobbes/1986/11/26

Learning software "in the large" still basically requires you to drive the bigger and bigger trucks over(more features) until the bridge(your architecture) breaks. It can't be properly conveyed with any weekend project or library.


That last point rings quite true with my experiences. I've been writing PHP and some other languages off and on for about 10 years now.

Looking at current jobs listings it's difficult to find things that I am particularly well suited for, even in web programming.

Since I was using PHP well before the current crop of mature frameworks and CMSes, the majority of my experience is in trial-and-error development of my own database access patterns, anti-XSS, anti-SQL-injection, and all of that other stuff that you used to do yourself in the early 2000s. As a result I have handwritten countless CMSes and frameworks, and am comfortable taking a bunch of hardware, building a Linux server out of it, and mounting it in a rack.

But nowadays the conversation is not so much about that stuff, it's "What do you know about EC2?", "What do you know about CodeIgniter?", "What do you know about WordPress?". So, my knowledge has been abstracted away (generally by people who would do a better job of it than I did).

It's quite a hard thing to explain to your family when they say "But you've been doing programming for years! You must know everything about it now!"


This is the nature of modern technology work (especially software-related work.) The ability and willingness of people to continue to learn throughout their lives is going to become more important with each passing year, particularly as things get more competitive, and technology continues to evolve (and even more so if the speed of that evolution increases, as some predict.) I'd argue that said willingness is now one of the most important things for people to remain competitive, right up there with past experience. Related article and discussion from not too long ago:

https://news.ycombinator.com/item?id=4557816

It's a new era of autodidacticism.


I had to log in just to reply to this because it very well could be me writing this post. I taught myself PHP in grade 7 (around fifteen years ago, it must have been near the v3 release) and sold websites in high school. I hand built everything on web servers I compiled from source.

Then I made the terrible mistake of taking a long break from professional programming through university (I worked as a tech writer instead, the money was good). When I came back two years ago I found that I had to learn a whole new set of tools. Javascript was now the norm instead of a disabled annoyance. Browsers supported incredible display features using stylesheets which made my table-and-images based designs obsolete. All the sites were built using some sort of CMS.

I spent a year learning front end design, WordPress, Magento, Joomla, and Drupal theme design. Then I went back to my old do-it-yourself ways and wrote my own CMS [0]. Suffice to say that it wasn't necessary. I learned a lot about back end programming and MVC related design patterns while creating it, but the most valuable piece of knowledge I gained was "Don't reinvent and try to sell the wheel when other people are offering it for free." It should have been obvious, but it seems I'm much smarter in hindsight than I was in planning.

I'm still learning today. Now it's the Ruby on Rails world and a whole plethora of new technologies come into play. I'm using Haml, SCSS and CoffeeScript on top of the RoR / PostgreSQL back end. My first web app will be ready for release soon. After that it's on to learning Haskell and RethinkDB.

The irony of all this is back when I stopped programming I thought I had a pretty good grasp on everything I needed to know. I figured I'd step right back into web development and continue on the way it had been when I left it. I wouldn't have let my skills atrophy if I had known how important it was to stay up to date. I think after two years I'm finally catching up, and I don't intend to let them get rusty again.

TL;DR: Don't get cozy because you think you know everything now. Even if you do, the state of the art changes quickly.

[0] http://saintcms.com


That's one of the reasons to invest in lasting knowledge. Compiler construction is one of the evergreen topics.


  Because I can capture 70% of your experience in your code 
  -- in your library -- and build directly on top of it.
If you use my library, you have not captured the experience it took to write said library. If you need to write any higher level library of your own, most of the problems you will face are exactly the ones I already solved. I could write that library faster and more correctly than you. (The words 'you' and 'I' do not (necessarily) denote you and me.)


> And lastly, this is, IMO, a corollary for why ageism is so persistent in our field.

Out of curiosity, are you from the valley area? It has been my experience that ageism is really only that big a deal there. Everywhere else I've worked (PDX, ATL mostly) I have seen no sign of ageism. Also it fits with the valley's general culture and economic setup. Everyone I know in the valley is young and single and lives with other young and single people. As soon as they get older, meet someone, and think about a family, they leave and go somewhere they can get a house and situate themselves.


>Experience means very little. Why? Because I can capture 70% of your experience in your code -- in your library -- and build directly on top of it.

If you depend on my experience, it's going to make me money one way or another. And it's going to be money that you don't make, either because you have to share your revenue with me, or because it doesn't give you an edge as many others will use my library to compete with you.

But the problem is that age doesn't automatically give me the kind of experience that others depend on. There's a lot of knowledge that becomes utterly useless or even misleading over time.

And you can be quite sure of one thing. If your experience comes from gluing together a bunch of libraries to chase the fashions of the day, your experience will indeed mean very little down the road.


It's kind of funny, in a way--if you write a library correctly, documenting the behavior, run time characteristics, failure modes, and everything else, you should never be needed to help with the damned thing.

Our best work is that which can stand alone, without us, across time.


Maybe "rockstar" isn't the best term; "legendary" is better [1].

John Carmack (Doom), Dave Cutler (Windows NT), Jeff Dean (Google's competitive advantage) [2], Bill Joy (Unix, vi), Peter Norvig, Larry Wall (Perl), Steve Wozniak (Apple II), Jamie Zawinski (Netscape), and many more who I may have missed...

All of these programmers/engineers are legendary. For example, Steve Wozniak - read how he hacked a floppy disk and color into the early Apple computers [3].

[1] http://news.ycombinator.com/item?id=1017939

[2] http://research.google.com/people/jeff/ Not just Jeff, also Sanjay Ghemawat: http://research.google.com/pubs/SanjayGhemawat.html

Craig Silverstein: http://www-cs-students.stanford.edu/~csilvers/

[3] http://www.foundersatwork.com/steve-wozniak.html


Programmers who can say "I don't know, but I want to" are the type that build things of worth, in my book.

Beyond this, this post brings up an interesting question when examining the unpalatable extremes of the programmer personality spectrum:

Do programming fundamentalists share the ignorance and intolerance tendencies of racists or bigots?

- us vs. them

- the preference of my language/framework vs your language/framework

- there's no way anything can be true or possible if I don't understand it or hold it as my current view point

If you pay attention, you'll notice fanboys can quickly get defensive, or worse, assault others who don't fit their way, group, or mould. It's sad, but this is done as much covertly as overtly.

The test: Bring up a tool, process, language or framework that's not in vogue right now but may be perfectly capable. Notice how some (mostly inexperienced and insecure themselves) may scoff at, belittle, deride, and denigrate others.

I just think it's a shame.

We have in our hands and our minds a way to leave the world better than we found it.

Instead, too many of our kin are busy needing to be told we're special rockstar ninjas on the growth, cutting, and bleeding edge, busy crushing it and killing it, while we procrastinate and search aimlessly here.


> The only supporting evidence for the “uberhacker” was a study on batch processing vs interactive programming, in 1960. On a handful of people, in a half hour session. The rest of the noise is untamed adolescent egotism.

I present exhibit A: Richard Matthew Stallman before he got RSI.[0] The stories of those early computer pioneers should make computer users everywhere take pause and seriously think about the damage they're doing to their tendons.

[0]: In fact, if you type "uberhacker" into wiki it redirects to him.


> [0]: In fact, if you type "uberhacker" into wiki it redirects to him.

It redirects to Torvalds now.

EDIT someone must have changed it moments ago:

http://en.wikipedia.org/w/index.php?title=Uberhacker&act...


They should redirect uberhacker to Torvalds, and GNU/uberhacker to Stallman.


Agreed. Whatever you think of RMS, some respect is due anyone who can honestly say "I implemented Common Lisp once".


RMS implemented Common Lisp?


Yeah. He mentions implementing Common Lisp for Lisp machines in [1], but unfortunately doesn't go into any real detail. My guess is that it was part of the work he did to help LMI during the Lisp Machine wars.

[1] http://www.gnu.org/gnu/rms-lisp.html


Must be a random selection from a list. When I tried it I got Linus Torvalds :-) That, Wikipedia, is very cool!


Not really; someone changed the redirect moments ago:

http://en.wikipedia.org/w/index.php?title=Uberhacker&act...


Bitter much?

I've been fortunate to have never worked with bad programmers. Some were not super experienced but still they cared. They did their best. A bad programmer would be someone who didn't care about the next guy.

As for "A" players, they exist. They develop code that others can understand yet probably wouldn't haven't have come up with themselves.

Hard work does not make you an "A" player. Intelligent work does.


It's possible to care and do your best and still turn out worse code than someone who doesn't care. I've seen it.


Yep. Passion, unfortunately, doesn't equal quality.


I have worked with people who not only do not care about the next guy, but will insult and belittle others in every way. This ranged from racism to sexism to throwing around his title of "senior software engineer" to order around others who did not report to him. This person was almost let go for these reasons multiple times, yet toed the line in front of management. I am glad that I've only met one person who was that negatively impactful... yet I'm just 2.5 years out of school.

I feel buzzwords like "rockstar" make things extremely ambiguous. If I say "full stack dev", you know that person is comfortable going from data stores all the way to the UI. If I say "Python guru", that means he is not only proficient but an able teacher of Python. If I say "rockstar dev" or "A player", what is that? If you say it means someone who works intelligently, does that mean those who are not A players are unintelligent? In that case I would only want to hire A players... and then why not avoid the buzzword and say that?

On the other hand, I can see how praising someone's endless ability to solve problems and great capacity for creative thinking might warrant a term like "rockstar".


> A bad programmer would be someone who didn't care about the next guy.

Unfortunately, I wish this were true to non-programmers. But to them, a "successful" programmer ships without worrying about how difficult it is to add a feature or to debug a problem when they're gone.

At least that's my experience in my current job where I'm mostly maintaining shitty code. And I know it's a culture problem too.


Yup, the company hero programmer ships and meets deadlines.

While all the "bad" programmers sits there, worrying about stuff breaking in the wild, with only their 1% code-coverage tests to verify that everything is working.


"Another hope might be academia, but much of the focus is on program verification, rather than reliability. Well typed software will eliminate some failure classes, but it doesn’t give you robustness."

For anyone who's curious, academic research into verification extends far beyond "Well typed software". For example:

Klee: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs [PDF, very approachable] http://www.stanford.edu/~engler/klee-osdi-2008.pdf

That paper is a nice example of how constraint based model checking can provide code coverage well beyond what a human developer can accomplish.


And there's also proving code correct. Like the recent example of seL4.


Finally, a reasonable programming blog post.

I'm so tired of the inane myths coming out of the programming community like the recent 'liberal/conservative' thing and the 501 manifesto. It seems like every 2 months I have to stop visiting hacker news and proggit while another one of these seemingly intuitive but unsupported link-bait blog posts comes around followed by a deluge of links saying NUH-UH.

Here's the thing: if there is no research supporting your intuition, it's just bullshit. Bullshit is all fine and good until it gets repeated as fact; after that it hurts our professional integrity and reinforces biases and discrimination. Fucking stop it.


Lots of research gets started because of intuition. If there is no research supporting your intuition it may simply be that nobody got around to doing any, not that your intuition is 100% certain bullshit.

So when you find your intuition is not supported by research go do some research. Don't automatically assume it is bullshit or otherwise lots of research will never get done.

And then share the outcome of that research (positive and negative) along with your intuition.


I disagree - distilled intuition is priceless.

Intuition is why most of what I know about hacking came not from university courses (*) but from scouring the internet for interesting reading, studying open source and RFCs, and reading blogs. Intuition is why I come to HN and Stack Overflow (or formerly to the c2 wiki and usenet) instead of reading ACM journals.

The programming landscape is vast, and there are a thousand ways to do the same thing, but a few of them are beautiful, and I want to know why, and what guided their design, and how I should think about these things.

Research is mostly silent on this, and when there is research it's hard to tell how credible it is. Whereas if Bram Cohen writes something about networking, I listen up, because having created bittorrent is a way stronger signal of credibility than having an X citation index.

And yes, I'm deeply biased. E.g. I read Stevey's Rants because I share a lot of his beliefs, and love his writing style. So I get confirmation of my beliefs, and they may still be wrong, but at least now I have a good vocabulary* to discuss my beliefs, e.g. "liberal"/"conservative". I loved the Unix style before reading TAOUP, but now I have clearer definitions - and names - for the qualities I admired. BTW, the Jargon File is an early example of hackers seeking out a shared vocabulary of intuition.

(*) SICP was an outlier (as were some other courses my university borrowed from MIT). Written by exceptional hackers, talking about design principles explicitly, and exemplifying elegance on every corner (it wouldn't be half as great if it used CL instead of Scheme!). "Programs should be written for people to read, and only incidentally for machines to execute." - SICP didn't cite research to support this intuition, but I'm ready to repeat this one as fact. What was bullshit was an OOP course taught by a CS professor who doesn't code much and TA'd by a Java drone.


Programming is honestly a very tricky field in which to figure out how the practice of it can progress from here on out.

You have essentially an almost infinite number of ways to get to an endpoint with very few real constraints... this is good and bad.

We have lots of creative freedom within the constraints of code to achieve a certain piece of functionality. For example, authentication -- I've seen it done somewhat differently on every product I've worked on, not to mention the different languages and coding styles used. Most programmers enjoy the mental process of crafting their own implementation, and there is usually a real reason for custom development -- every product is a little bit different.

The bad... we seem to have to continually rewrite custom implementations of very similar things, which means cost and usually mediocre reliability. We barely even have agreed-upon guides for something as common as authentication (at least none that anyone pays attention to). This is vastly different from other professions. Take plumbing, for example: while not every job will be done the same way by every plumber, there is a set of standards that all plumbers leverage -- specs on pipes, fittings, etc. Or take healthcare: most doctors have at most a few different treatments that are agreed upon as effective.

But it's not like certain sections of the programming community haven't tried... we have open source libraries, plugins, published recommendations, etc., but the problem is that the landscape is a mess, through no fault of any individual: programming languages, technologies, and business requirements come and go every day, effectively erasing much of the work/libraries/best practices that were just starting to get established.

It's as if we need one universal, unchanging language and a set of universal devices we all agree on, with an unbiased committee that organizes open source modules. I also live in the real world, so I realize this is a bad idea that will never come to fruition for a thousand and one reasons, but I have to agree with the author and don't have much hope for short-term "real" advances in how software development gets done.


I see this as a positive, not at all a negative. Programming is rare in that the field lends itself very well to creative destruction / natural competition / evolution, for many of the reasons you describe. The process is chaotic on an individual level but the population as a whole moves forward far more intelligently than any centralized process could hope to mimic.


weren't we supposed to kick this problem over to enterprise architecture??


hm.

Correlation and causation aren't the same thing.

You can correlate good programmers with all kinds of things, but that does not mean that those things make good programmers, necessarily.

...but, you know. There's probably a pretty strong causation in some cases (eg. write code in spare time, play music, speak a second language, believe in TDD, whatever), and if you're trying to be a better programmer, looking at the things that other good programmers do is the right way to get better at it.

Some correlations are obviously stupid though (gender, politics) and I'd be vastly more interested in data mining life-analytics of good programmers for interesting useful correlations (eg. play sport? hand-eye coordination? memory? write c++ for fun? know how to write assembly?) rather than just saying, well, all correlations are probably arbitrary and don't have any relation to causation.

Pity there isn't really any way to get hold of that sort of raw data to play with~


It seems to me the problem is not that academia isn't interested in reliability, it's just that they are solving the wrong problems. I am thinking about Byzantine Fault Tolerance specifically, as from what I understand it is a hot topic of research at the moment. That seems to me to be an example of academic research that is next to useless for practitioners.

BTW that quote is from Leslie Lamport; maybe it's so obvious he didn't think it needed a credit.


This blog has much potential. Love the post about marginal expected value.


On the other hand, many if not most businesses run just fine on only 20-40% profit. Some businesses even run on only 2-5% profit per investment, but re-invest tens of times per month. Why must a software project have 500% profit before it makes sense to kick it off?


Software projects need more than a 10% margin for the same reason constructing a skyscraper does. Some types of investments are inherently less predictable than other types.
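Back-of-the-envelope, with made-up numbers: if only one project in five pays off at all, each winner has to cover four losers, so a project needs a multiple of its cost just to break even in expectation.

  p_success = 0.2                       # hypothetical: 1 project in 5 ships and sells
  cost = 100_000
  break_even_payoff = cost / p_success  # 500,000 -- a 5x return just to break even
  print(break_even_payoff)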


> Really, we don’t know how to write and maintain software, let alone reliable software, without throwing vast amounts of time or money at the problem. when we come up short we’re left with the inevitably bug-ridden fruit of our labours.

The way to write software efficiently is to have a small team of skilled, experienced and motivated people deliver it incrementally.

The space shuttle team developed a process to deliver software that was not bug-ridden. This was costly.

And there are collections of patterns to improve robustness of systems that are discussed in books about Erlang.


I wonder if the author is simply trolling with the first post. He says:

"This myth has many forms, with many attempts to explain the magic power away in terms of some physical characteristics–

1. Programmers who have a penis are good

2. Programmers who do not have a penis are bad

If you believe in this in any way, it is highly likely that you are not only a terrible programmer, you are a terrible person too."

Then on his Twitter he proudly displays in all capitals "I AM A TERRIBLE PERSON"

https://twitter.com/tef/


There's an undercurrent of gallows humor you're missing. Recognizing and being mindful of your shortcomings is wisdom.


So true. You have to learn that programming is politics from day one. Unfortunately, unlike the sciences where there is a framework for determining who is good and who is bad, in programming it is left up to sometimes immature people to make their case. The best way to silence them is with working software.


This can be said for nearly every white collar job besides maybe trading or something similarly measurable.


With sciences you have citations, peer review, credentials, etc. What do you have when management absolutely loves their immature high school dropout rockstar because they fooled investors enough to bring in big money? I think that programming is a bit of a special case.


> With sciences you have citations, peer review, credentials, etc.

You should read this: http://en.wikipedia.org/wiki/Dan_Shechtman#Work_on_quasicrys...


I'm not saying that the sciences are not political, it's just that you have to do decades of work before you can even begin to criticize your rivals. Even then, you are trying to convince smart peers in writing. In programming, the thinking on the street is that the younger, less educated, and less experienced you are, the more awesome you are.


TLDR:

Don't bother making anything - there's no chance it will be successful, no matter what. Life sucks, then you die.


hey cool I thought of tef when I saw the title of the post and it was tef!


I'm so pleased tef is blogging this stuff. It's fun to see it in one place.


Yeah, the @tef link at the bottom was completely unsurprising.


Programming is a joy. Ignorant cowboy-coding is suffering. Read (learn) before you write. The 80/20 rule works here.


As a programmer young in my career, it's interesting to get this type of perspective.


DNS is centralized? Not as I understand it.



IANA and the 13 root servers are the central authority of DNS


No.


Programming is just terrible for you because you don't have the awesome Emacs setup I do.



