The Humble Programmer (c2.com)
131 points by anacleto on Aug 6, 2015 | 64 comments



> program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.

Have seen this idea stated in various forms. In an absolute theoretical sense, the idea is quite true, but I'm not sure what wisdom or value is supposed to be imparted through expressing it - especially in the way it's phrased above.

The vast majority of programs exist to deliver net positive value in some form. In this context, the value and even the concept of absolute correctness become very fuzzy. Not all bugs are equal in terms of impact on that delivered value. Beyond a certain threshold, what even is a bug? Is it a bug when the program crashes because a user intentionally feeds it extreme inputs, inputs that would never appear in genuine use, just to try to crash it? In a mathematical sense, yes. In an engineering sense.. most likely not.

From this point of view, testing will show an absence of bugs - not categorically all bugs, but all bugs I care about.

With testing I can show that my program performs a certain set of functions as I want it to, with a certain set of inputs. That's a baseline, proved to be bug-free. If that baseline provides the minimum value the program is written to provide, then the program is by definition correct for its given purpose.

Whether it's correct in a pure, absolute, mathematical sense is a different question (perhaps even a philosophical question) that it doesn't provide me much value to answer. So where's the practical value in expressing this idea?


For me this means you have to be able to reason about your code. When you write a piece of code you need to understand what it does, how it behaves and why it behaves the way it does.

It is not uncommon to reveal bugs during code review just by looking at code when the author claims they have tested everything and it works fine. Of course they have missed some test case, but how do you make sure your quality assurance does not miss something as well?

I believe it is cheaper and faster to understand the code, side effects and interactions than to rely only on testing (which is immensely important of course).

In order to grow as a developer one has to understand there is no magic - you need to strive for understanding.


Tests work because we don't use only tests - we use tests to support and correct our logical reasoning. Even for a very simple task, if you literally didn't think about how to solve it at all and just did the simplest thing to make each test pass, you'd never accomplish it. E.g., "find the absolute value of a number" would end up with a function like

    abs(n) {
        if(n == 1) return 1;
        if(n == -1) return 1;
        if(n == -3.5) return 3.5;
        ...
    }
This is why testing is not a substitute for logical reasoning, but a complement; once we think we've solved a problem logically, we can test several cases to check our reasoning.
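
For contrast, here is a minimal sketch (in Go, purely for illustration; all names are made up) of the intended direction: the general solution is reasoned out first, and a handful of cases then spot-check that reasoning rather than define it.

    // Illustrative sketch only.
    package absdemo

    import "testing"

    // Abs is the reasoned, general solution.
    func Abs(n float64) float64 {
        if n < 0 {
            return -n
        }
        return n
    }

    // TestAbs only spot-checks the reasoning with a few representative cases.
    func TestAbs(t *testing.T) {
        cases := map[float64]float64{1: 1, -1: 1, -3.5: 3.5, 0: 0}
        for in, want := range cases {
            if got := Abs(in); got != want {
                t.Errorf("Abs(%v) = %v, want %v", in, got, want)
            }
        }
    }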

This is also why no amount of testing will make up for a program with no logical structure. For example, if you use unstructured GOTO as the only means of control flow, then you won't be able to solve the resulting bugginess by writing more tests.


People actually do write tested and reliable "unstructured" code, e.g. pure assembly. It's harder, but it's not automatically impossible, as is often implied.


Structure matters less than being able to reason about the set of all possible inputs and outputs.

Structure only matters in that it constrains the range and domain of the inputs and outputs you have to infer.

We could produce completely bug-free code every time but it's simply not an interesting problem if we listen to our revealed preferences about defects. We arbitrage defects in the service of other values.


> If that baseline provides the minimum value the program is written to provide, then the program is by definition correct for its given purpose.

1. Do you know what minimum value the program is written to provide?

2. Have you defined your baseline such that it models the minimum value correctly?

3. How do you know? (You can apply this to both #1 and #2)

4. Does everyone on your team agree with #1 and #2? Do all your customers agree with #1 and #2? How do you communicate it so that everyone is in the loop?

5. Even if (by some miracle) you have managed to get #1-#4 absolutely correct, have you implemented/executed your tests correctly to define the baseline? Again, how do you know?

6. It is actually feasible, in my experience, to do #1-#5 for a few iterations, but then it becomes easy to forget the original #1 and #2. How do you record these over time so that you know you haven't broken something in the meantime? How do you manage the size so that the complexity of the description is less than the complexity of the original source code (in order to avoid errors)?

I could go on (really, I could) but I hope you get the point. There will come a time when you will see that you have tests and that you are sure the code adheres to those tests, but there will be no way to know whether those tests are sufficient. In fact, there will come a time when nobody is really sure what the program is supposed to do, because everyone has forgotten and the descriptions (source code, tests, design documents, requirements documents) are so voluminous that there is no way for one person to cram them all into their head.

The humble programmer knows that the system probably does not work properly (despite having tested it) and is always looking for ways to tease out the problems.


Not only the bugs you care about, but also the bugs that you have actually found, so regressions won't occur. Regression is a very common cause of production failures, and testing is a very efficient guard against that particular class of bugs.
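
As a sketch of what that looks like in practice (hypothetical names, Go chosen only for illustration): once a production failure is reproduced, the offending input is pinned down as a test so that particular failure can't silently come back.

    // Illustrative sketch only; JoinPath is a hypothetical helper.
    package urlutil

    import (
        "strings"
        "testing"
    )

    // JoinPath joins a base URL and a path without doubling the separator.
    func JoinPath(base, p string) string {
        return strings.TrimRight(base, "/") + "/" + strings.TrimLeft(p, "/")
    }

    // Regression test: the exact input that once produced broken "//" links
    // in production is captured verbatim, so the bug can't silently return.
    func TestJoinPath_NoDoubleSlash(t *testing.T) {
        want := "https://example.com/posts"
        if got := JoinPath("https://example.com/", "/posts"); got != want {
            t.Fatalf("JoinPath = %q, want %q", got, want)
        }
    }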

And that's exactly how you should look at this whole spectrum of issues: bugs come in classes and for every class of bug there are detection and mitigation strategies. Testing does not work as a detection strategy for all classes of bugs but it is extremely effective against some classes of bugs.

A complete strategy would involve much more than just testing.


What classes of bugs do you have in mind that testing cannot work for?


Testing works extremely well for anything that you either find in the wild or anticipate beforehand.

Hard to find using testing:
- performance-related issues (for which you'll need a profiler);
- bugs that occur rarely (for instance, one bug I recently uncovered in 'hugo' was so rare it took 1000+ runs of the test software before it turned up on the machine of the lead dev; initially he did not believe my report, see https://github.com/spf13/hugo/issues/1293 );
- programmer errors (this is where code review comes in);
- coverage issues, such as routines that are simply never exercised (for that we use coverage tools);
- errors in the tests themselves, heisenbugs, and so on.

Bugs come in many shapes and sizes and testing is a very powerful strategy but it is not the only strategy.


It sounds like you're only talking about unit testing, or perhaps automated FV. Performance issues, infrequent bugs, heisenbugs etc. are all things we find through system test.

On some level, I'd say any bug is discoverable through testing - if a user can't experience a bug, then it doesn't exist! You could even say unexecuted code can be considered this way - if it doesn't ever matter that a program requires more RAM than it should, who cares? Of course at that point you are clearly increasing the risk of a future programmer mucking something up because of the unused code, so I don't think I'd go that far in real life.

Clearly some things are easier to find in code review / code coverage etc, but saying you can't test for bugs that occur rarely is untrue.


> On some level, I'd say any bug is discoverable through testing - if a user can't experience a bug, then it doesn't exist!

Agreed, but this is like saying "for any (relevant) bug B, I can write a test T that finds it". It doesn't mean you will actually write said test, or even be aware you should test for B, even though the end user will later experience it and be harmed by it. This is what Dijkstra seems to be saying: that your tests not finding bugs doesn't mean your tests are complete. Important bugs will almost surely happen regardless of your tests. That they pass is a good thing, but it shouldn't give you disproportionate confidence that your system works as needed.


I largely agree with the article; it was the parent comment I was replying to. I disagree with the idea that a tester shouldn't consider hard-to-find bugs, or performance bugs. I disagree somewhat with the idea that a programmer error or coverage issue is an entirely independent issue from other bugs - if there's no possible manifestation, it's not a bug, and if there is a manifestation, then it's possible to find it under another 'class' of bug.

I fully agree that we won't execute tests to find all possible bugs. But similarly, code review doesn't find all code errors.


> I'd say any bug is discoverable through testing

... given sufficient time. If I give you a routine which takes one second to execute and has four parameters, each with a maximum of 100 options, you simply can't test it to the point where you can declare it bug-free any time soon: 100^4 = 10^8 combinations at one second each is over three years of uninterrupted runtime.

Of course, most of our routines have more than 100 possible inputs - many have an effectively infinite number of inputs.


Absolutely agree - testing is all about risk management, not finding all the bugs! And the right move is often to give up trying to reproduce/find a difficult bug. WRT inputs:

QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv.

(possibly Bill Sempf)


One of these classes requires manual interaction from a user at the GUI level:

http://blog.8thlight.com/uncle-bob/2014/04/30/When-tdd-does-...


It is a way of saying that, more often than not, testing will not actually cover all the bugs that you originally care about. Even when writing tests we make assumptions about cases that "should not happen" but actually do. And testing does not take care of those.

It's just something to keep in mind when coding: testing as we generally do it is not a guarantee of code correctness. Other methods provide that; testing does not. It gives a level of comfort that every one of the cases you could think of works as expected...


> With testing I can show that my program performs a certain set of functions as I want it to, with a certain set of inputs.

No you can't. You can show that it has worked a finite number of times in testing. With testing alone you can't show that it won't produce arbitrary crap the next time you run it.

This is obviously formally true in the general case. For example my program might contain code that makes it run exactly n times then self-destruct.

But worse than that, it's often true for real, non-silly programs. Race conditions are a famous, common and awful example of test-resistant bugs.
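
A minimal sketch of why (hypothetical code, Go used only for illustration): two goroutines increment a shared counter with no synchronization. The assertion below can hold run after run, because lost updates depend on scheduling, yet the race is still in the code.

    // Illustrative sketch only.
    package racedemo

    import (
        "sync"
        "testing"
    )

    // TestCounter exercises an unsynchronized shared counter. The assertion
    // may hold on most runs, since lost updates depend on scheduling, but
    // the race never goes away.
    func TestCounter(t *testing.T) {
        counter := 0
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    counter++ // unsynchronized read-modify-write
                }
            }()
        }
        wg.Wait()
        if counter != 2000 {
            t.Fatalf("lost updates: counter = %d, want 2000", counter)
        }
    }
Running such a test under go test -race, or repeating it with go test -count=1000, catches this class of bug far more reliably than adding yet more plain test cases.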


Maybe it's just me, but if you can't produce something without race conditions, you should get help with it.

Of course this turns out to be one a' them "unknown unknowns" in practice, but perhaps it should not be. Can't we teach this in school? Can't we teach it in the online media sector?


We do teach it in school. But concurrency is very hard to get right. People screw up much easier things all the time.


> No you can't. You can show that it has worked a finite number of times in testing. With testing alone you can't show that it won't produce arbitrary crap the next time you run it.

Here I think lies the line between theory and practice. As a computer scientist, Dijkstra clearly represents the former approach.

What you describe is basically how computer programs have been built since the beginning, for the simple reason that otherwise they wouldn't have been written at all. No computer scientist, however humble, would ever consider such programs complete, but in practice knowing the boundaries of that incompleteness and living with them is more than useful.

Surely a program can fail in spectacular ways, but by testing a set of known behaviours with a set of reasonable inputs we can be pretty sure that if you use it as intended, with inputs that are roughly what they're supposed to be, you're most likely to have the program produce the expected output again and again. This is good enough for all practical purposes of business and utility.

It's also quite similar to how it happens in mechanical engineering. For example, an engine is designed to run at between 1,000 and 5,500 revolutions per minute, with oil of grades SAE 40 to 50, and with a suitable mixture of air and petrol. If you push the engine outside of these fixed specifications, you increase the likelihood of it failing in ways that are spectacular. The complexity of failure patterns can be overwhelming: a small, seemingly unrelated thing turns out to be vital in some underestimated sense, and failing that thing causes all kinds of other failures which eventually destroy the engine completely. And this occasionally gets realized in real life, too. For a computer programmer, doesn't this sound familiar?

We do (try to) write critical software in a different way. Avionics, spaceship computers, medical systems. The cost per line is overwhelming, but the programs still aren't bug-free. A lot of that cost effectively goes to proving the non-existence of bugs: fixing the bugs that are found is cheap in comparison.

Proofs of correctness can be formulated for simple systems, but it gets increasingly hard for complex ones. Worse yet, for most programs we use daily we're completely unable to write specifications tight enough to actually make it possible to write fully correct and bug-free programs. Specifying how the program should work in the first place takes a lot of the effort that goes into special systems such as avionics. That's because specifying is a kind of programming, and even if we managed to express the program specification in some non-ambiguous and logically complete format, I think the process of building that specification itself would suffer from similar disconnects, vague definitions, and other human problems.

Goals sometimes produce the most value when they're walked towards but not necessarily ever reached.


With proofs of correctness, I think it's important to recognize that systems will need to be built in a way that is conducive to proving statements about them. This is a practical daily endeavor for anyone who uses static type systems to catch errors (since types are theorems if you squint), and a major driving idea behind functional programming research. Compositionality and purity can make it drastically easier to prove interesting theorems about programs.
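
A tiny sketch of that idea (hypothetical names, Go used only for illustration): when an invariant is encoded in a type, the compiler checks it on every build instead of relying on whichever tests happen to exercise it.

    // Illustrative sketch only; Address, Parse and Send are hypothetical.
    package mail

    import (
        "errors"
        "strings"
    )

    // Address can only be built through Parse, because its field is
    // unexported; holding an Address therefore signals "this was validated".
    type Address struct {
        raw string
    }

    // Parse is the single place where the invariant is established.
    func Parse(s string) (Address, error) {
        if !strings.Contains(s, "@") {
            return Address{}, errors.New("not an email address: " + s)
        }
        return Address{raw: s}, nil
    }

    // Send accepts only a validated Address, so "forgot to validate" is a
    // compile-time error rather than a bug to hunt for with tests.
    func Send(to Address, body string) error {
        _ = to.raw // a real implementation would use the address here
        _ = body
        return nil
    }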


The phrase "testing alone" was key. Practical good-enough correctness comes from reasoning about the code and inputs. Testing is a tool that helps with that. Testing alone is not good enough.

I teach freshmen how to code and test - we use unit tests in class - and have seen first hand that novices over-rely on tests as an excuse to not concentrate hard enough to fully understand what the code is doing.


Without factoring in the cost of errors and the cost of avoiding the errors, this debate is a pointless exercise in transcribing (i) dogma, (ii) favorite hypothetical situations and (iii) favorite anecdotal evidence. I doubt any original thought will come out of it today.


I feel like this is just the scientist/engineer debate, where the former is trying to discover objective truth, and the other just wants to build stuff.

It's an elemental disagreement, where the scientist is saying "You don't know everything!" and the engineer is replying "So what?"

But yeah, probably we're not going to learn anything with it.


I doubt it is an engineer/scientist thing. Dijkstra was much more of an engineer-programmer than many realize. He was an engineer first and the rest later.

I think it is about correctness of claims. There are engineers who are cognizant of the deficiencies of testing, and those that believe that testing is truly sufficient and complete.


> From this point of view, testing will show an absence of bugs - not categorically all bugs, but all bugs I care about.

You are assuming that all the possibly disastrous bugs are bugs you care about.

In reality, it's far more likely that several of the possibly disastrous bugs are bugs you cannot even think of, let alone care about.


You are (most probably) wrong.

Has your client ever reported a bug?

Was it a bug you cared about?

Why haven't you discovered it during testing?


> From this point of view, testing will show an absence of bugs - not categorically all bugs, but all bugs I care about.

I'd say not even this. Testing will show the absence of some of the bugs you thought you cared about. In practice it will fail to find bugs you thought you were testing for, and then it will fail to show bugs that, when they happen in production, will have you thinking "oh, right! I hadn't thought of that! Obviously I don't want X to happen."

This is not theoretical. I've seen both kinds of undetected bugs happen in almost every job I've had. Getting your test scenarios correctly partitioned is hard. Thinking about what to test is hard. And a lot of programmers aren't even aware of Dijkstra's assertion -- how many times have you heard a co-worker claim "but this cannot fail! I tested it!"?


> In an absolute theoretical sense, the idea is quite true, but I'm not sure what wisdom or value is supposed to be imparted through expressing it - especially in the way it's phrased above.

Dijkstra wants you to formally prove that your code is correct. In actual practice, that works only for trivially small programs. Worse, it ensures that only trivially small programs get written. (Yes, I know, there have been longer programs proven correct. I can count on one hand the number I have heard of. And in each case, the effort to do so was very large compared to the size of the program.)

> From this point of view, testing will show an absence of bugs - not categorically all bugs, but all bugs I care about.

Not unless you test all possible inputs you care about. And for most programs, that's completely impossible.


Also, tests aren't for you, they're for the next guy who inherits your code and is trying to make a change, hoping his assumptions about how it all works are correct.

The tests are a way to provide a backstop, but also another way to document what the intentions are.


It affords nihilism in testing. This keeps budgets down.


  The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague.
My very limited skull size balks when I see regex, IoC architectures, EAV data modeling, fancy ORMs ... and, dare I say, ... functional programming.


Probably not a discussion worth having if your mind is made up[1], but I've found that FP actually forces me to hold a lot less in my head, rather than more. And when I say FP, I mean specifically impure FP (Clojure is what I have the most experience with), something that gives you the mutable state escape hatch without resorting to monads or other constructs designed to hide the state from you. The rest that you mentioned I can certainly do without.

[1] That said it's worked to my extreme advantage, so I can't help but throw it out there.


You end up doing those things yourself, which in the end is even more complex.


Well, sometimes you don't even need all parts of one of those things.

If you take care not to mutate and to keep your functions pure, that's a huge first step.

I don't know much about how to make my own monads or what type theory is all about. But that doesn't take away from the gains of the stuff I do know.
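
A tiny illustration (hypothetical code, Go only because it makes the mutation explicit): the pure version can be reasoned about and tested in isolation; the mutating one depends on everything else that touches the slice.

    // Illustrative sketch only.
    package puredemo

    // double mutates: callers must know their argument is changed in place.
    func double(xs []int) {
        for i := range xs {
            xs[i] *= 2
        }
    }

    // doubled is pure: same input always gives the same output, nothing else
    // is touched.
    func doubled(xs []int) []int {
        out := make([]int, len(xs))
        for i, x := range xs {
            out[i] = 2 * x
        }
        return out
    }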


All those are tools that make certain jobs a lot less complex. I dare say you either use them for the wrong job or you should try expanding the size of your skull.


We demand actual proofs of program correctness, but for some reason we don't even ask for proofs that practices actually work or even that they don't do any harm. We just take it on intuition and faith, and propagate practices by peer pressure. If it looks fancy, and people like it on HN, it must be better. I don't know in what sense this is "engineering."


Some people do actually gather evidence in this area. The best collection of it I have found so far is the book Making Software: What Really Works and Why We Believe It.


Touché, my friend, touché!

But you see, said skull is constantly buffeted by the paradoxes of the following three EWD maxims:

1. "The competent programmer is fully aware of the strictly limited size of his own skull";

2. "I realized that my prior projects were just finger warm-ups. Now I have to tackle complexity itself. But it took long, before I had assembled the courage to do so."

3. "Are you quite sure that all those bells and whistles, all those wonderful facilities of your so called powerful programming languages, belong to the solution set rather than the problem set?"

:-) Take care.


EWD seemed quite sure when he recommended Haskell over Java: http://chrisdone.com/posts/dijkstra-haskell-java


To play devil's advocate: this refers to 2001, to wit, Java 1.3 - which we know was bad and slow.


I find this easy-to-overlook and mostly-off-the-main-topic passage quite deep and sadly truthful:

    There may also be political impediments. Even if we know how to educate tomorrow’s professional programmer, it is not certain that the society we are living in will allow us to do so.
    The first effect of teaching a methodology —rather than disseminating knowledge— is that of enhancing the capacities of the already capable, thus magnifying the difference in intelligence.
    In a society in which the educational system is used as an instrument for the establishment of a homogenized culture, in which the cream is prevented from rising to the top, the education of competent programmers could be politically impalatable.

...and the funny thing is that these "political" impediments have been driven by nothing other than plain old month-to-month business objectives: by striving for "replaceable programmers", "reproducible performance", "reducing the bus factor", etc., we threw away the most promising "programmer brainpower amplifiers", prevented "the cream" from rising too high by drowning it in endless debugging and optimization tasks that "needed to be done" because "legacy", and we got a [C, C++ and Java] main-languages combo instead of a [some-low-level-ML, Lisp and Smalltalk] one.

But if you know that you can always fire your top programmer and still survive as a business, then it was worth the cost of an x-times reduction in programming productivity, right? (Plus the more global thing: anyone with less money and training than you have, like some bunch of unknown 3rd-world-country programmers, is very unlikely to overtake your business if most of the true brainpower amplifiers are thoroughly crippled, and you also have the "network effects" part turned against them - see below.)

And the awesome part was that by doing this industry-wide, any competitive advantage that could be had by the party-line breakers choosing the x-times tech was lost because of the network-effect-induced handicaps: if they need to take a week to rewrite that library in their smarter language/stack, and another week for the other one, etc., that x-times advantage goes away pretty fast ...except for a few who were really smart and made all the right micro-decisions at all the right levels (and had some pretty strong place and time advantages too - like simply being in SV around '95), like PG with Viaweb and a few others.

...it's really sad to read and write this, seeing that even though someone as smart as Dijkstra saw this coming decades earlier, it still went this way.


This statement by Dijkstra sounds a bit determinist/elitist. Differences in talent/"intelligence" are largely a myth; only the accumulation of experience matters. This kind of claim is dangerous in that it allows those higher up the ladder to claim that they're "naturally gifted" to hold their positions, while in fact they most likely just had more resources growing up. The current society is already inegalitarian and unmeritocratic enough, and such claims can only fuel the situation further.

I totally agree with the point on the homogenization of programming. However, this is more likely just a ubiquitous feature of today's globalized capitalist world. I don't think it has anything to do with so-called "differences in intelligence" whatsoever. More aspiring programmers will want to program at higher levels, and that's a good thing, as long as they don't claim this is due to them being intellectually superior to their counterparts.


I rant about this all the time, but it is because engineering is subordinate to management. Agile, XP, whatever - they all get turned into a business process by which managers can "run the numbers". They are no longer tools for developing healthy software systems. They are tools to determine the minimum we can do and still make the most from the customer.


> To put it in another way: as the power of available machines grew by a factor of more than a thousand, society’s ambition to apply these machines grew in proportion, and it was the poor programmer who found his job in this exploded field of tension between ends and means.

I always thought that it was easier to be a programmer in the old days. Back when you could assess the performance of a program by counting the number of CPU cycles. All you had to learn was an assembly manual (a mere hundred pages?) and you knew everything there was to know about programming for that architecture. And there weren't that many architectures. Now we have non-deterministic CPUs, hundreds of high level programming languages, multiple operating systems, an entire jungle.


We went from basically being mathematicians ("That solution is best, and here's my proof") to engineers ("This solution works, and it works with the parts you give me, and don't ask why there's cake") to "why does this posting for a back end developer also require CSS fluency?" I grant that the last is not as concise as mathematician or engineer, but I feel like we're just about there.


Warning: this article is by Dijkstra.

You may remember him as the person after whom the unit of arrogance, the nanoDijkstra, is named.


Dutch directness has a side effect of coming across as arrogant; it's merely a difference in culture[0].

For example, if you are Dutch and your friends are Dutch, you don't go around asking "how are you?" of people who aren't close to you.

The question "how are you?" is actually there for you to show genuine interest in someone else's life at that moment. The polar opposite is the US version, I guess, where a "how are you?" almost always results in a "fine, and you?".

On another note, I think the main reason (I might be very wrong here) that they had this clash is that Alan Kay was a pioneer of object-oriented programming, while Dijkstra argued against it, sometimes harshly[1]. (Although we can agree OOP is garbage :D, that is certainly not the best way to put it.)

[0]: Although I have met some Dutch people who were being rude and trying to cover it with "directness":

http://www.iamexpat.nl/read-and-discuss/expat-page/articles/...

[1]: http://harmful.cat-v.org/software/OO_programming/


> Although we can agree OOP is garbage :D

Can we? The stuff that Alan Kay put forward in Smalltalk, with message-passing, polymorphism, late binding etc.... was definitely NOT garbage imho!

What C++ and Java and all the languages that tried to copy them, and also JavaScript with its prototypal inheritance, ended up with... it stinks indeed.

But where Scala on one side, and Go and Rust on the other are going... that will be interesting at least :)

Maybe Odersky is on to something with his whole "functional and OO are orthogonal" thing... though the way macros are implemented in Scala, on top of and USING the OO core instead of being separate and serving as the foundation for OO (CL style), makes me want to puke when reading the resulting code...


Indeed, OO hatred is usually a sign of an inexperienced functional programmer.


Hatred of most any language or set thereof is often a sign of inexperience. The only exceptions I can think of are production use of most joke/esoteric languages and Oracle Forms.


Doesn't this mean that anything in production is beyond "hatred"?


Smalltalk was a great start, though it should not have been the benchmark for implementing object-oriented languages. There are some valid OO concerns that were imo missing from mainstream languages such as Java/C++:
- the uniform access principle (Scala provides for that, but Scala is a different type system);
- pre-conditions/post-conditions as part of the core language rather than as annotated comments;
- weaker/stronger pre-conditions/post-conditions in derived classes: this makes the use of methods quite obvious, as well as which features are being inherited from the parent class;
- selective exports of features (a lesser-known feature, but one that can save a developer many a time);
- repeated inheritance/multiple inheritance implemented right (so diamond hierarchies are handled without having to resort to any ambiguities);
- genericity (templates that understand hierarchy: this was missing in C++ at least as of 2001; I haven't programmed in C++ since then);
- covariance (a direct side effect of method inheritance, which I found to be useful).

Without these and many other features (I borrow this list from Bertrand Meyer's OOSC, 2nd edition), compilers tend to shift the burden of managing types etc. from the compiler onto the programmer. It is probably too late to go back to Eiffel, though I still think that even today it is far ahead of most OO languages, so if folks feel strongly against OO, I can understand that to some extent.

I recall a small exercise where I was modeling a matrix of integers to do some basic addition and multiplication in Eiffel. It worked: the code compiled, the tests passed, the assertions validated, etc. All good. But what about determinants (I forget the exact computation that needed them)? I was still programming in Java at work, and had that sinking feeling of having to change the code in a few places to make things Float (and read them as objects instead of primitives, etc. - you know the drill). It turned out that the Number hierarchy in Eiffel was quite refined: I changed the declaration in one place and everything worked as expected. It was a small exercise, but it clearly showed the power of the language.

As an aside, there was another feature that ensured floating-point numbers weren't allocated on the heap: the keyword 'expanded', meaning no references are created for expanded objects. This feature alone can save a ton when decoding objects from a network etc., without me, the developer, having to worry about boxing/unboxing and the resulting performance implications.

So, yes, OO was/is a great paradigm, though it is the details of the implementation that matter. When I used to attend job interviews and was faced with one of the canned questions - which is better, Java or C++? - I started to take the "fifth", because I could not in all honesty compare two truly bad implementations of the OO paradigm. The only reason I briefly looked at Scala was that Martin Odersky used Eiffel as one of his references (the uniform access principle) in designing the Scala type system. But that is as far as I could go, because after looking at OCaml it hit me that OO and functional can't safely mix.


OOP is anything but garbage. It's extremely productive for GUIs, gamedev, lots of things. Don't be ridiculous. It's a shame it's "hip" to pretend it's awful around here.



Probably originally attributed to Alan Kay: https://en.wikiquote.org/wiki/Talk:Edsger_W._Dijkstra

Quote: I found an amusing quote from Alan Kay on Dijkstra, from his 1997 OOPSLA keynote: 'I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras.' [audience laughter][1]. --24.184.131.16 20:06, 16 March 2008 (UTC)

https://www.youtube.com/watch?v=Xoyw8LHGtzk


I used to work at BT with a (very good) DBA whose first boss was Dijkstra.


Hmm, for some reason it won't let me convert Stephen Wolfram to nanoDijkstras.


Integer overflow?


Unlikely; Wolfram Alpha is based on Mathematica, which has arbitrary-precision integers, and I don't think one wolfram can be more than about 10^12 nano-dijkstras.

(If I put "1 wolfram" into Alpha then it tells me about Stephen Wolfram. If I put "2 wolfram" into Alpha it tells me about tungsten. I guess that when you ask it for "1 whatever", it first simplifies it to "whatever", and then you can have variable quantities of tungsten but not of Stephen Wolfram.)


Wolfram is also another name for the chemical element "tungsten" https://en.wikipedia.org/wiki/Wolfram_(element) ("2 wolfram" probably means "give me the second meaning of 'wolfram'"). Mathematica/WolframAlpha is probably not smart enough to figure out unambiguously how much mass or energy could be in one nano-dijkstra, which it would need for such a conversion.

On the other hand, referring to "1 wolfram", some would argue that one wolfram could greatly exceed 10^12 nano-dijkstras... Dijkstra was actually quite smart and from reading his essays, I'd consider him quite humble, not arrogant at all.

He had that "old-noble-European cold-joking-seriousness" and most Americans are known to be incapable of understanding this tone of communication.


I don't know much about him, but this sounds credible just from this article:

> The first effect of teaching a methodology —rather than disseminating knowledge— is that of enhancing the capacities of the already capable, thus magnifying the difference in intelligence.

Only an arrogant person would assume his success is down to his being intellectually superior to his peers, rather than to having enjoyed a wealth of resources while growing up, plus maybe some hard work and luck.


Is there a new c2 wiki where programmers hang out? I know we have HN, proggit (reddit/r/programming), and stackoverflow but where's the coder wiki where dilbert comics are posted every day? :P


The humble Programmer can't handle the taco.



