Spyro7's comments

Am I the only one who finds all of this ranting about Node.js a little bit strange?

I would never have expected a post that was so obviously a troll to prompt this much of a reaction on both sides of the issue. That so many of these rants and counter-rants made it to the front page of Hacker News is somewhat discouraging.

I have been playing around with node for a few months, and I have tried to stay completely out of this "conversation". With that said, I would like to contribute just a few points:

Bad programmers will be bad programmers regardless of the tools that they use. If they use node and fail to write code that is completely non-blocking, then that is what we call a teachable moment. There is no magic button; all technologies have downsides and tradeoffs.

People keep talking about how Node.js is not good for computationally intensive tasks, but V8 is not a slow environment. Am I the only one who puts computationally intensive tasks into a queue to be taken care of by a pool of separate processes? I am only just getting into web programming, and it seemed fairly obvious to me that you would not put something like that into your main event loop.

Also, if you find that you must put something computationally intensive in your main event loop, then you should use something like node-proxy or nginx to proxy those requests to a number of "nodes".
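In Python, for example, that queue-and-pool shape is only a few lines (a sketch; the naive fib is just a stand-in for any real CPU-bound job):

```python
# Sketch: keep CPU-bound work out of the event loop by handing it to a
# pool of separate processes. ProcessPoolExecutor is the stdlib's
# process pool; fib() stands in for any expensive computation.
from concurrent.futures import ProcessPoolExecutor

def fib(n):
    # Deliberately slow, CPU-bound work.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        # submit() returns immediately; the main loop stays responsive
        # while worker processes grind through the queued jobs.
        futures = [pool.submit(fib, n) for n in (20, 21, 22)]
        print([f.result() for f in futures])  # [6765, 10946, 17711]
```

The same shape works with an external job queue when the workers live on other machines; the point is only that the event loop never does the grinding itself.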

Over and over again, I have seen people complain that node is not a good multithreaded environment. Well, yes... That is the tradeoff of using something that is closely tied to the concept of the event loop.

If you are using node because you feel comfortable with threads and you need threads, then you are making a serious mistake. If you are a new programmer and you are using node because someone told you that it is cool, then you are making a serious mistake. If you are using node because you have a problem that can be solved or addressed with an event loop and you understand the tradeoffs inherent in this approach, then you are doing the right thing.

To use nodejs effectively will often require rethinking your approach to fit the tool that you are using.


So, I'm not a node.js guy, and I have nothing against it, but being a server-side guy in general, I'll paraphrase a quote that Zed Shaw paraphrased from Chinese kung fu novels:

"So the intermediate guy is doing all of these backflips and spinning roundhouse kicks and all that, it's very impressive, you couldn't imagine being in that good control of your body. He decides he's pretty good and spars with a master. The master barely even moves and puts the guy on his ass."

I think infatuation with event-driven i/o is one of those intermediate programmer things. We all took an advanced systems course in college, saw the literature from single-CPU days about how it makes so much sense, and more importantly saw how damn clever it was. There are situations where it makes a lot of sense (static file server), and then there are situations where you say "Wait, so you want me to run 8 instances of this thing in order to utilize my 8-core server?".


I think in this case, we have yet to see the master put anyone on their ass. We have non-masters slinging insults and ineffective demonstrations.

Don't typical Python and Ruby deployments also need 8 processes to utilize 8 cores? If I'm not mistaken, this is in fact how Heroku works.


As far as I'm aware, they do. The GIL means that even when using threads, they still need 8 processes to utilize 8 cores.

There are more languages than those 3, though, and if those 3 are the languages that you're considering, evented vs blocking i/o is way beside the point when it comes to performance. A super-simple thread-per-connection program with blocking i/o calls from C or Java will demolish the most sophisticated evented system you could ever design in a scripting language. That was my "barely even moves" vs "spinning roundhouse kick" comparison.


I did some benchmarks a few years ago playing around with different techniques in C#.

If you spin up a thread for each connection, and the connections are short-lived, the time spent spinning up a thread will dominate the time spent communicating on the socket. Event loops perform much better than one-thread-per-connection in simple web services because they don't have the extra thread overhead. It's no accident that the hello-world app on nodejs.org is this sort of service.

But by far the fastest technique I found was to use a thread pool and an event loop together. When there was any work to do (a new connection or new data), I scheduled the next available thread from a thread pool to do the work. This technique requires that you deal with both callback hell and thread synchronization. But it's blazing fast. Way faster than you can get with a single nodejs process.

The code is also many times more complicated, so it really depends on what you're doing. Despite knowing how to make a super high performance server, I do most of my work in nodejs; it performs fine in most applications.
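A minimal sketch of that hybrid shape, using Python's stdlib rather than C# (the names and structure here are illustrative, not the original code):

```python
# Event loop + thread pool: a selector waits for socket readiness, and
# the actual blocking recv/send runs on a pool thread.
import selectors
import socket
from concurrent.futures import ThreadPoolExecutor

sel = selectors.DefaultSelector()
pool = ThreadPoolExecutor(max_workers=4)

def handle(conn):
    # Runs on a pool thread, off the event loop.
    data = conn.recv(1024)
    if data:
        conn.sendall(data.upper())

def serve_once():
    # One turn of the event loop: wait for ready sockets, dispatch each.
    for key, _ in sel.select(timeout=1):
        # .result() keeps this demo deterministic; a real server would
        # let the pool run handlers concurrently with the loop.
        pool.submit(handle, key.fileobj).result()

# Demo with a connected socket pair instead of a real listener.
client, server = socket.socketpair()
sel.register(server, selectors.EVENT_READ)
client.sendall(b"hello")
serve_once()
print(client.recv(1024))  # b'HELLO'
```

This is where the "callback hell and thread synchronization" both show up: the handler must be safe to run on any pool thread, and anything multi-step has to be re-queued through the selector.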


Could you take a look at SignalR? (http://www.hanselman.com/blog/AsynchronousScalableWebApplica...)

* Since this is dealing with async operations, an IIS worker thread takes an incoming request and hands it off to be processed on the CLR thread pool. At this point, the IIS worker thread immediately becomes available for processing more requests

* Threadpool threads will not be tied up by open connections and waiting for IO. Only when executing actual code/work will they be in use

* To explain why threadpool threads are not tied up: async operations use async IO. As soon as an I/O operation begins, the calling thread is returned to the thread pool to continue other work. When the I/O completes, a signal is sent and another thread is picked up to finish the work.


> A super-simple thread-per-connection program with blocking i/o calls from C or Java will demolish the most sophisticated evented system you could ever design in a scripting language

Are you really sure about that? Perhaps 5 years ago, but have you tried recent versions of node (V8) or Python? I have, and even as an old-timey programmer, I'm impressed.


Summary:

Ted: "Node is cancer, because it's not the one true tool that can do everything! It may be good at IO bound code but it's not so hot at CPU bound stuff".

Node hackers: "Yes it is the one true tool!".

Me: face-palm.

OK, threads can be used to do anything, but they are hard, while async is pretty easy (unless you want to do CPU-bound stuff). Async sucks for CPU-bound stuff, but that's not the problem it's trying to solve (and anyone who makes a big fuss over it from either side is a fool). But none of this really matters much until you actually need to scale.

If you are doing something simple, C or Java will be best, simply because they are faster. But not everyone uses C or Java, and it's not like speed matters that much when your App server can scale trivially, your DB is the real bottleneck, and you only have 3 users (one of whom is your cat).

If you need a lot of connections, some of which block (due to a call to a web service like BrowserID or Facebook - and there are a lot more web sites that need to be optimized for web API calls than for calculating Fourier transforms), you need lots of processes (which is too heavy in Python), lots of threads (and then you need thread safety, which is a pain and very easy to screw up), or something async like Twisted or Tornado. Given that Tornado is already really easy to use, and basic async stuff is fairly easy to get right, the choice is easy for me. (I don't know enough about JS, Node, and V8 to really comment on Node, but I'll just assume it's roughly equivalent.)

The thing is, I just don't trust threads (at least, not if I'm writing the code). There are far too many ways you can have weird bugs that won't show up without a massive testing system, or until you get a non-trivial number of users. And you can't integrate existing libraries without jumping through a lot of hoops.

Using callbacks looks like "barely even moves" to me, while multi-threaded code looks like a "spinning roundhouse kick", but then, maybe I'm just not good at multi-threaded code.

I guess the most important thing is that threads need to be 100% thread-safe. Async code only has to be "async-safe" in the bits that need to be async. It looks like this:

    @async
    def do_something():
        do_something_not_async_safe()
        do_something_async(callback=finish_up)

    def finish_up():
        do_something_else_not_async_safe()
Whereas threads look like this:

    def threaded_code():
        do_something_100_percent_thread_safe()
        do_more_thread_safe_stuff()

        # fail here when you have a few users
        do_something_that_uses_an_unsafe_library()

        more_thread_safe_stuff()
If you really need to do CPU bound stuff, async sucks. You can create a second server, which handles the CPU bound stuff (and call it using a web interface which is async safe). Or there are pretty easy ways to call a subprocess asynchronously. But arguing over the merits of async programming using Fibonacci sequences as a talking point is not even wrong. Anyone who brings it up is just showing themselves to be a complete tool. That might be what Ted was trying to do (as many of the Node responses have been unbelievably lame), but it doesn't prove anything except that the internet has trolls, and plenty of idiots who still take the bait.
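One of those "pretty easy ways" might look like this (a sketch using Python's asyncio; the child command here is just an illustration):

```python
# Run CPU-bound work in a subprocess without blocking the event loop:
# the parent awaits the child's output while staying free for I/O.
import asyncio
import sys

async def compute_in_subprocess(expr):
    # Spawn a child Python to evaluate an arithmetic expression.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print(" + expr + ")",
        stdout=asyncio.subprocess.PIPE)
    out, _ = await proc.communicate()
    return out.decode().strip()

result = asyncio.run(compute_in_subprocess("sum(range(10**6))"))
print(result)  # 499999500000
```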


Please, show us even one Node developer who posted "Yes it is the one true tool!" – or even a sentiment that is remotely similar.


https://github.com/glenjamin/node-fib is close enough. Trying to explain how Node can do concurrent Fibonacci without calling another process is falling for the troll's trap - Node isn't "the one true tool", and you don't need to try to prove that it is. The trolls will just say that Fibonacci is too trivial, and you wouldn't use the same approach (co-operative multi-threading) for less trivial CPU-bound tasks.

You can use a hammer to bang in screws, but it's not usually the best way. If you are using co-operative multi-threading as a way to get Node to handle concurrent CPU-bound requests, you are Doing It Wrong (TM). In general, it's better to create a second (possibly not async) server to handle CPU intensive stuff, or fork stuff off to a subprocess (depending on the actual task). There might also be other ways - I'm not an expert.

I'm sure that if glenjamin had a less trivial CPU-bound task, he/she would handle it a better way (depending on what the task was). But to the Node haters, node-fib is just troll food. It's also an interesting example of how node.js works, but the trolls don't care.


Hm...

Bunch of people with limited social skills or sanity, in a virtual desert, having flame-wars with strawmen?

Sounds like Burning Man.


No, Heroku doesn't work that way. Heroku lets you spin up processes, and those processes can be threaded. I use JRuby and get one process that can use all cores.


I think it's safe to say that typical Ruby deployments (as I qualified) are not using JRuby, especially not those on Heroku. Thus, even if the process spawns threads, they will be limited to one core at a time.

Whether JRuby's threads are actually running on multiple cores simultaneously depends on what mode Heroku is running the JVM in.



Except that event-driven io is actually the simpler solution compared to threads, isn't it?


Depends. For an application with significant CPU work, a simple worker pool (not coded by you) with an accept loop

while (s = accept()) { dispatchToWorkerPool(s) }

and an application-programmer method handleConnection(s) is pretty darned simple. Just do blocking I/O from your threads and rely on the machine to swap them in and out appropriately.
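Fleshed out in Python, with the stdlib thread pool standing in for dispatchToWorkerPool (names here are illustrative), the whole server fits in a page:

```python
# Worker-pool server: the accept loop stays trivial, and application
# code does plain blocking I/O on a pool thread.
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

def handle_connection(conn):
    # Application code: blocking recv/send, no event plumbing.
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)

def serve(listener, pool, max_requests):
    for _ in range(max_requests):       # a real server loops forever
        conn, _addr = listener.accept()
        pool.submit(handle_connection, conn)

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
pool = ThreadPoolExecutor(max_workers=8)

# Quick round-trip to show it works.
t = threading.Thread(target=serve, args=(listener, pool, 1))
t.start()
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"hi")
    print(c.recv(1024))  # b'echo: hi'
t.join()
```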

For the next level, you can have an evented i/o layer that passes sets of buffers back and forth to your worker pool. That's more complex but still spares application programmers from having to worry about either eventing or threads, they just worry about passing back the right bytes.

Here's where things get interesting, though. It turns out someone studied this (in Java) and actually found that stupidly throwing 1k threads at the problem with blocking I/O performed better than a clever nonblocking server, due to Linux 2.6's threading and context-switching improvements.

http://www.mailinator.com/tymaPaulMultithreaded.pdf

Some of that might be specific to Java's integration with Linux, but think about it: the Linux guys did a pretty good job of waking the right thread and moving it to the runnable state. Moving to evented I/O might be the right move for your workload, but it also might make no difference, at the cost of some additional complexity.

tl;dr: context switching has gotten cheaper and better as Linux has improved, 16 CPUs vs 1 CPU changes the math on context switching vs using select(), and worker pool libraries mean you shouldn't actually manage threads yourself.


That thread pool isn't actually that simple. How many threads do you use? If you throw 1,000 threads at the problem with a 2MB stack each, that's 2GB of DRAM you've thrown away (instead of 20MB * ncores per Node process) -- DRAM that could be caching filesystem data, for example, which could have a huge impact on overall performance.

With Node, the DRAM and CPU used scales with the number of cores and actual workload. With a thread pool, the DRAM used scales at least with the number of concurrent requests you want to be able to handle, which is often much larger than you can handle simultaneously (because many of them will be blocked much of the time).

Assuming you're not willing to reserve all that memory up front, the algorithm for managing the pool size also has to be able to scale quickly up and down (with low latency, that is) without thrashing.


What a load of crap... you just wasted 2GB of address space, not of DRAM. You'll "waste" exactly as much as the stack each thread actually uses, rounded up to PAGE_SIZE, which is usually 4KB.

Let me guess, a nodejs fan?


You're right -- it's the touched pages that count. In many cases, that's a few MB per stack, which is what I said.


Alternatively, create the socket, fork N processes and have each of said processes run an accept loop. Assuming they're CPU bound, this is going to be pretty much just as efficient and you don't have to prat about with threads -or- evented I/O.

Ain't UNIX grand?
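That pattern, sketched in Python (POSIX-only, since it relies on fork; the details here are illustrative):

```python
# Pre-fork: create the listening socket once, fork N children, and let
# each child run its own accept loop on the shared socket.
import os
import socket

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]

children = []
for _ in range(2):
    pid = os.fork()
    if pid == 0:
        # Child: inherits the socket; accept one connection, then exit
        # (a real worker would loop on accept forever).
        conn, _ = listener.accept()
        conn.sendall(b"handled by pid %d" % os.getpid())
        conn.close()
        os._exit(0)
    children.append(pid)

# Parent: play client to show the kernel spreads connections across
# the children, with no threads and no event loop in sight.
for _ in range(2):
    with socket.create_connection(("127.0.0.1", port)) as c:
        print(c.recv(1024))
for pid in children:
    os.waitpid(pid, 0)
```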


It depends, but I don't think either one is inherently simpler than the other. As a general rule of thumb, I find that asynchronous solutions are generally better for I/O bound stuff while threads are better for computationally expensive stuff.


I think this post is about ridiculing the childish behavior of the Node.js fandom. I have to agree that I've never seen anything quite like this before, except the Church of St. Jobs. I always thought that we software engineers were reasonable people. But apparently a subset of us aren't.

However reluctant I may be to join in this time-wasting back and forth, I'm just glad that someone has taken the time to point out how inexperienced the crowd is and how ridiculous the whole situation is. We have enough technology in the world already. We don't need another piece that makes making mistakes so much easier than before.

Lastly, I'm extremely disgusted by the misleading taglines Ryan Dahl put on Node's front page. Programming isn't easy; it can only be easy for so long. Scaling is even harder, because it actually requires deep knowledge of and insight into how computers and networks work, something that I very much doubt many of the current Node.js crowd understand. If your solution to your multiprogramming problem is to spawn more Node.js processes, maybe you've picked the wrong tech.


I've seen it from the following communities: PHP, Ruby, Perl, Python, Java, C++, C, C#, Haskell, Emacs, Vim, Common Lisp, and Scheme.

People are tribal. Some people are attracted to tribes, become attached without really knowing why, and start having the strong urge to fuck with the other tribes. Apparently this was good for the survival of the human race. Perhaps it's a bug now that we should consciously compensate for, like the desire to eat a box of doughnuts. mmm.... doughnuts... Good when you were a caveman. Bad when you sit in front of a desk for 16 hours a day and sleep for the other 8.

What's the alternative to scaling an application by running multiple copies? Scaling an application by running multiple threads? Exact same thing. The only advancement over this model is when the runtime can automatically parallelize your code, and you actually scale linearly when it does that. In the mean time, if you use a database server and not an in-memory database, guess what, you're "using queues" and "scaling by starting processes". People do it because it's easy and it works.


This whole thread/process/event queue non-sense is really ruining my appetite. Mmm fried bacon sausage wrap on a stick…

Guess what. When the packets come in from the network, they sit in an event queue. When Apache takes a request, it takes it from a queue and hands it off to a process. If you are proxying, the webapp server in the back takes request events from a queue and hands them off to a thread. When you make a DB call, the SQL goes to an event queue and the DB processes the statements one by one. Real-world web apps always have been, and always will be, built from a combination of event queues, threads, and processes.

Nothing to see here. Moving on… oh look! Takoyaki!


__People are tribal. Some people are attracted to tribes, become attached without really knowing why, and start having the strong urge to fuck with the other tribes. Apparently this was good for the survival of the human race. Perhaps it's a bug now that we should consciously compensate for__

That this tribal behavior occurs among software engineers is rather disappointing. Computers are pretty much the edge of technology in many respects; our technological achievements should be proof that we're able to use our brains in more advanced ways than basic instinct, and produce great things like modern software.

This flood of programmers who prefer to join a tribe instead of enjoying the good things from many different 'tribes' just makes me wonder how many of us are really devoted to doing something useful and positive.


  > That this tribal behavior occurs among software engineering 
  > is a rather disappointing fact. 
It occurs amongst software engineering humans. All humans get this to some degree, it's basic ingroup/outgroup psychology.

We're all humans here. It's nothing to do with "devoted to doing something positive" or "using our brains in more advanced ways." It's just the reality of being an evolved ape, and the lack or presence of this trait doesn't make anyone any better/worse than anyone else.


Yes, thank you for writing this. We are all humans, even if we don't want to be, and we have to think about our actions in the context of our genetic programming. In the case of tribalism, even though it's a strong feeling, we have to ignore it because it doesn't get us anything in programming language debates.

The best attitude to have is one of acceptance and an open mind, because the right programming tool applied to the right problem can make solving that problem orders of magnitude less difficult. You can have programmer friends even if you don't unconditionally hate the enemy. In fact, it seems, most people don't care about who you don't hate.


"I always thought that we software engineers are reasonable people. But apparently a subset of us aren't."

Unfortunately no one is immune to it, not even engineers, scientists, Wall St., etc. I try to remind myself of this and keep my baser instincts in check by periodically rereading pieces like Charlie Munger's 'On the Psychology of Human Misjudgement'[1], the list of logical fallacies [2], and anything I can find relating to the psychology and neuroscience of judgement, decision making, perception and bias. Becoming emotionally invested in anything makes it more difficult to admit you're wrong about it and change your mind in the presence of refuting data.

[1] http://duckduckgo.com/?q=munger+on+the+psychology+of+human+m...

[2] http://duckduckgo.com/?q=list+of+logical+fallacies


The whole target of his dissatisfaction is this quote on the Node home page: "Almost no function in Node directly performs I/O, so the process never blocks. Because nothing blocks, less-than-expert programmers are able to develop fast systems." That's bullshit. Evented programming isn't some magical fairy dust. If your request handler takes 500ms to run, you're not going to somehow serve more than two requests per second, node or no node. It's blocked on your request handling.

And all that stuff Apache does for you? Well, you get to have fun setting that up in front of the nodejs server. Your sysadmin will love you.

Basically if you're doing a lot of file/network IO that would normally block, node is great. You can do stuff while it's blocked, and callbacks are easier to pick up and handle than threads. But how often does that happen? Personally my Rails app spends about 10% of its time in the DB and the rest slowly generating views (and running GC, yay). AKA CPU-bound work. AKA stuff Node is just as slow at handling, with a silly deployment process to boot.


Whoa, whoa. "Just as slow at handling?" It's most likely an order of magnitude faster at handling those.

But you're right, deployment isn't a totally solved problem. Unless you just use Heroku: http://devcenter.heroku.com/articles/node-js

I mean, Node was created in 2009. I don't see anybody bragging about how easy it is to deploy yet; just that it's fast and easy to understand.


[deleted]


It's possible I'm misinterpreting the results (or the benchmark is flawed), but this seems to imply that it is often the case: http://shootout.alioth.debian.org/u32/benchmark.php?test=all...


Things will be a little different on x64 but you seem to be reading the data just fine ;-)

http://shootout.alioth.debian.org/u64/benchmark.php?test=all...

The problem isn't really "the benchmark is flawed", the problem is that most of us seem to wish for data that tells us more than it can, magical wishful thinking on our part - Will the application I haven't designed yet be faster, and scale better, and be completed by us faster, if we write it in X or Y? :-)

http://shootout.alioth.debian.org/dont-jump-to-conclusions.p...


V8 is a lot faster than Ruby, so I'd expect page rendering to be a lot faster with Node than with Ruby (supposing both use good rendering engines).

Also it seems a lot easier to "outsource" computationally intensive tasks to other processes with Node than with Rails. With Node you can stall the response until you receive the result of the computation (within limits), and use the wait time for other tasks. With Rails, if you do that, you block one thread of your limited pool. As an example: with my first Heroku Rails app (free hosting), I discovered the pool contained exactly one thread.


That's a limitation of Rails, not Ruby. One could use Node.js in the same manner. Ruby has been doing async I/O in the form of EventMachine longer than Node.js has even existed.


Node.js builds on a C async framework, so pretty much every language built on C could go the Node route - but it would be a lot of work. Personally I wish somebody would adapt it for Arc - or maybe not, so that if I ever apply to YC again I can do that for bonus points :-)

EventMachine is OK, I suppose, but I have heard that it has some warts. Are there any good web frameworks that build on EventMachine? Though I suppose the basic Sinatra-Style framework would not be that hard to build...


As of 1.3, Sinatra can do streaming using an evented webserver like Thin, which uses EventMachine. Goliath may also be worth checking out.


> Am I the only one who puts computationally intensive tasks into a queue to be taken care of by a pool of separate processes?

Thank you. I really find it hard to believe that the reaction to Ted's post was to mess around with Fib implementations rather than to just say, "use a background queue," and move on.

If Ted has a point at all, it could be that he thinks that people using nodejs don't realize they're running in a loop. Maybe that's fair.


Ted hates queues, too.

But I don't, so instead of playing tough-guy and picking on random open source communities that don't want me, I get to write cool computer programs. Maybe we should all do the same.


What's his critique of queues?


It's here: http://teddziuba.com/2011/02/the-case-against-queues.html

TL;DR - It's not so much about queues, but about stacks, i.e. new software stacks. His proposition is that quite often you're better off with existing systems. He specifically mentions syslog, so you're logging your tasks and then the consumers monitor this log. Prevents data loss and lets you potentially restart lost tasks.

(That's his argument. I'd agree if you had to reimplement something like that on top of the pre-built *MQ solutions, but I don't know enough about all of them; maybe that – and more – is already in there.)


Interesting. I actually agree with his points (they are consistent with lessons I've learned overusing queues in the past).

I still use queues today, only more judiciously and with the issue of failure states a very well defined part of the design.


He seems to argue against having a blocking worker wait for the queue, which is exactly what you wouldn't do with Node.js


So I've worked on an older open source project that does exactly what you suggest. The main loop accepts socket requests using non-blocking IO, figures out how to handle each request, and then forks a child process to handle it appropriately.

This can work great and can scale very nicely, but it also has some major drawbacks compared to other solutions. The whole point of the original post was that single-threaded non-blocking event loops are very fragile. You can easily end up accidentally freezing the entire application.

Other problems with forking event loops include:

- Simple event loops require you to manually CPS-transform your code. This isn't an issue for a simple request-response cycle, but it quickly turns into spaghetti code for anything more complex. For a college class I once wrote a toy non-blocking P2P client in Python using entirely non-blocking IO, while all the other students chose to use threads. My client was very performant, but the code ended up being FAR more complex than everyone else's solutions, for the simple reason that I had to code in CPS the complex multi-step interactions that occur between peers.

- If any of your tasks are not independent and must interact with each other, you have to start dealing with IPC, which can be very difficult to get right without a good message passing library. Even with one, it can still be more complicated than using threads. Clojure's STM, for instance, is useless without a shared memory model.

- Your application is not portable to Windows if you rely on forking.

And these are just some of the problems with using a simple non-blocking event loop to drive your entire application. Throw in the fact that you are using javascript, a very unscalable language never designed to do anything more than provide interactivity in the browser and that has incredibly sparse library support, and you've got lots more problems on top of that. Node.js just isn't as useful as its very enthusiastic proponents claim it to be.
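The manual CPS transform mentioned in the first point looks harmless for two steps, which is exactly why it is so deceptive (a toy sketch; the "peer" calls are stand-ins for non-blocking network operations):

```python
# Continuation-passing style: each step's "what happens next" must be
# threaded through as a callback. Two steps read fine; a real
# multi-step peer protocol turns into deeply nested spaghetti.
def step_one(data, callback):
    callback(data + " -> peer hello")   # pretend: non-blocking exchange

def step_two(data, callback):
    callback(data + " -> peer ack")

def run(data, done):
    # The control flow lives inside the lambdas, not on the call stack.
    step_one(data, lambda r1: step_two(r1, done))

results = []
run("start", results.append)
print(results[0])  # start -> peer hello -> peer ack
```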


This post from Glenn Vanderburg a few years back is incredibly relevant:

http://www.vanderburg.org/blog/Software/Development/sharp_an...

Especially this excerpt:

"Weak developers will move heaven and earth to do the wrong thing. You can’t limit the damage they do by locking up the sharp tools. They’ll just swing the blunt tools harder."


This is a good presentation. It is a good demonstration that there is no magic bullet for database scalability problems.

I love NoSQL as much as the next person, but turning straight to NoSQL when you are faced with scalability problems in a more conventional relational database is always going to be a mistake. Before you translate everything over to a NoSQL db, try dropping the ORM (or find another one), looking at your table structures, or tuning your indexes. If you do this, then there is a strong possibility that you will save yourself some time, energy, and effort.

It pays to think deeply about your issue. NoSQL should be another tool in the toolkit, and not the hammer to be used to drive all of your database issues into the wall.

When you turn to a NoSQL solution, you should not be doing so with a mind to find a "magic bullet". You should be doing so because you have thought deeply about the problem and have found that a NoSQL solution answers a specific need.

If everyone used NoSQL only when it is appropriate to do so, then there would be fewer "horror stories" and more illustrations of valid use cases than we have today.


try dropping the ORM (or find another one)

Dropping or replacing the ORM is a big undertaking in anything but small toy projects.

Better advice is to run a profiler and crank up the ORM's verbosity in order to determine the extent of overhead imposed by the ORM, and something like pg_stat_statement and EXPLAIN ANALYZE (in the case of Postgres) to find slow statements and see why they are slow. This will give you a much better idea of where time is spent, to what extent things can be optimized, and whether the ORM is to blame for any performance issues.


I started with Drupal around the time version 4.7 came out, and I always thought that the project would have done better for itself by focusing more on its potential as a framework rather than a CMS.

Drupal has always seemed to have something of an identity crisis. Some people think of it as a fully-featured CMS, some think of it as a framework or platform, and then you have a ton of people just trying to use it as a more flexible blog.

I think that when the community exploded in size, it had only really begun to wrestle with the identity question. The lack of some definite focus as Drupal's popularity expanded probably contributed to the current situation.

Frankly, I think that the majority of the current problems would be solved by trimming all of the fat from the core. That would allow the core developers to focus on making the core fast, stable, and flexible. Let the wider community maintain and provide any extra functionality.


>> Drupal has always seemed to have something of an identity crisis. Some people think of it as a fully-featured CMS, some think of it as a framework or platform, and then you have a ton of people just trying to use it as a more flexible blog.

This has been my take on it, too. (I've been developing Drupal sites for clients almost exclusively for the past 4 years.) The confusion persists because Dries Buytaert believes it's possible to be all things at once. In his keynote at DrupalCon SF last year, he brought up the question of whether to focus on becoming an easy-to-use CMS for people who don't want to code (competing with WordPress) or being an awesome platform for developers, and answered it by saying "both". (My apologies for paraphrasing.)

Without any clear direction on this, the core definitely becomes vulnerable to bloat as there's no clear model with which to approve and reject features.

That said, is Drupal truly in a crisis, in the sense that the project will die soon if the core maintainers don't act immediately? I'm not so sure. I don't feel that the slow uptake of Drupal 7 has much to do with bloat and half-baked features (of which there are a few, like Dashboard, but not an overwhelming amount). Rather, the problem is that a few important contributed modules are still not ready for prime time. In particular, modules for nodereferences and breadcrumb management are still in flux months after Drupal 7's release.

In time, those will catch up too and Drupal 7 will likely have a few if not several years of good life in it. That's a long time in the life of a Web site, and hopefully long enough for Dries to realize that he needs to step up and make some tough decisions about which direction he's going to steer the community toward.


>> Frankly, I think that the majority of the current problems would be solved by trimming all of the fat from the core.

The challenge, of course, is that everyone has a different definition of "Fat" versus "Muscle." The developers who actively maintain the codebase are a tiny, tiny percentage of the overall community that uses it. Many of the features that are frustrating and annoying to maintain are popular among the non-developers who use Drupal for site building.

Balancing the usage needs of those people with the maintenance woes of the developers is one of the big challenges.

The other is dealing with the identity crisis you describe; the lack of willingness to focus on a manageably small set of use cases or target audiences became a real problem once Drupal's growth curve started swinging up around 2007 or 2008. Now, simply declaring that things are "more focused" will orphan large numbers of users who came on board and started doing their own thing with Drupal while it was less focused.

It's a conundrum.


"Frankly, I think that the majority of the current problems would be solved by trimming all of the fat from the core."

There are hundreds of projects that claim to do so, and all of them remain obscure. By having a core that does, well, nothing (no blog, forum, commenting, news workflow, permissions, ... systems), as a developer you still have to go hunt for 'modules' of which you usually have no idea how well they work or whether they'll be maintained next year.

In theory the 'plugin' system is nice, in practice it turns into the situation where the 'core' rolls on like a freight train, leaving 'plugins' by the wayside, and users scrambling to patch together the known working versions and plugging the holes left by the incompatible versions. In the end, you still have to do a bunch of programming - but oh, before you can, you need to spend weeks to learn all the ins and outs of how to do so.

The answer is in slowing down development speed. Yeah, it sucks because you can't work on cool new stuff, but maintaining backward compatibility and building a large base of usable modules, vetted by a larger group than just the one or two people who work on each module (and who only use the bleeding-edge version themselves anyway), is the only way to build a platform that people can rely on being there next year, and the year after.


I agree, and I believe that if they simply picked a direction, be it CMS or framework, it would indeed trim a lot of the fat from the core, like you mentioned.

The fact they're trying to be all things to all people is what will likely cause the project to fail. Which is unfortunate, since I do like using it.


Since this is Hacker News, and there are a lot of really smart, numbers-oriented people here, I just thought I'd provide a link to a dataset that looks at income distributions within the US from 1913 to 2008:

http://www.econ.berkeley.edu/~saez/

Under "Income and Wealth Inequality" on that page, click the link titled "(Longer updated version published in A.B. Atkinson and T. Piketty eds., Oxford University Press, 2007)".

It is an Excel file that contains a metric ton of tables, charts, and figures. If you have a few moments and want a deeper understanding of the issue, that Excel file is the best place to go.

(Edit: I just wanted to note one thing. My favorite graph from the Excel file is figure 1B, where you can see that we are basically returning to where we were prior to World War II. It makes you wonder what was so different about the post-war period that resulted in this shift. Personally, I wonder if the sudden subsidization of education for returning veterans had anything to do with it, but I have no data on this right now, so...)

As far as my own thoughts on the issue go, I agree with pg. For a boring, academic take on it, there is a wonderfully dry 1980s paper by Rosen outlining the superstar theory:

http://www.ppge.ufrgs.br/giacomo/arquivos/ecop72/rosen-1981....

Basically, thanks to technology and globalization the rewards to individuals that have the highest levels of abilities have been magnified. Their "reach" has increased relative to those below them, and their rewards have increased in line with this.

While interest in this seems to be popping up all over the place due to the current economic catastrophe in progress, the idea that communications infrastructure improvements lead to income inequality is an old one.

Here is Alfred Marshall (a founder of neoclassical economics) in 1890 (revised in 1920):

http://www.econlib.org/library/Marshall/marP54.html#VI.XII.4...

As for myself, I have no idea about whether this is a good thing or a bad thing (or whether anything could/should be done about the growing inequality). However, it is happening now, on a grand scale, so I think that it is important for everyone to have some context on the issue.


Why is this being upvoted? This article begins with a questionable premise and then descends straight into misinformation.

The problems in the USPS did not begin with email. They began with a highly questionable requirement that the USPS fund a plan to fully cover the estimated future health care costs of all current employees. They are the only government institution required to do this, and it has crippled their ability to remain profitable:

* http://www.plansponsor.com/Post_Office_Says_PreFunding_Retir...

* Read First Few Pages Here -> http://www.uspsoig.gov/foia_files/RARC-WP-10-001.pdf

* http://www.nytimes.com/2011/06/23/us/23postal.html

They have overpaid this fund by billions of dollars, but they are not able to use this money to address their current financial shortcomings.

Somehow the author of this piece is able to extrapolate from the fiscal problems of the USPS to the overall job market. The extrapolation is misguided at best.

There are so many things wrong in this article that I will not take the time to address them all. (Most of Europe was thriving in the Middle Ages? Really? The author needs to define the word thrive.) I just want to say that I find it interesting that there is so much hand-wringing in these comments about jobs being displaced by technology.

In economics, we like to call this creative destruction. Old jobs go away and new jobs take their place. This is a natural process, and there is nothing so magically different about the technological revolution that it will somehow "make jobs obsolete".

Just as The Luddites protested against the loss of jobs brought on by the technological progress of the Industrial Revolution, now some individuals protest against the loss of jobs brought on by the technological progress of the "Technological Revolution". Then, as now, it was all hand-wringing and nail-biting with no serious economic analysis.

The critics say this time is different, this time there will be no new jobs, and we should urge people to find something other to do than working. The critics are wrong. Don't worry people. Employment is here to stay.

Side Note: The author engages in some navel gazing when he says America has all that it needs. I'm not sure if the author has noticed it or not, but there is such a thing as globalization, and global demand for goods and services will increase as developing countries close the gap with developed nations.

This global recession is just another business cycle; eventually the world will have another upswing, and then we will revert to the mean again. There is no magic here, just the march of time. I would not put too much stock in those who believe that a single recession merits the reevaluation of the entire modern economic system.

Edit: Trying to trim the size. Eventually, I will learn the art of not making posts into walls of text.


> The critics say this time is different, this time there will be no new jobs, and we should urge people to find something other to do than working. The critics are wrong. Don't worry people. Employment is here to stay.

I mean, this is hand-waving as well. You haven't actually provided any substantive counter-argument other than bringing up the Luddites.

I think the basic argument is that at some point, possibly already past, the productive activity required for basic human survival will be virtually entirely automated and require zero human labor input. How do we allocate productive output in a society that requires no labor input? The notion of the "job" as described in the article may very well be obsolete.


I personally think the much bigger question is how do we allocate the profits.

It could rapidly go very wrong, with a minuscule few controlling the vast majority of the income and a massive income gap developing.

Kinda like what's already happening...


That was a risk in the early 80s. It's pretty much an undisputed reality today.


There is this new economic system being proposed to address all these realities of the future with the advancement of automation: http://p2pfoundation.net/Panoply


> The problems in the USPS did not begin with email. They began with a highly questionable requirement that the USPS fund a plan to fully cover the estimated future health care costs of all current employees. They are the only government institution required to do this, and it has crippled their ability to remain profitable:

"fully cover estimated future costs" means that they're the only govt institution that isn't a timebomb of future obligations. When they can't keep up, we know that there's trouble down the line, which is far better than the status quo, which is to wait until all of the assets are gone, leaving nothing but obligations.

In other words

> They began with a highly questionable requirement

this isn't a questionable requirement, it's a sane one. The insanity is that it's unique instead of being universal.

> They have overpaid this fund by billions of dollars, but they are not able to use this money to address their current financial shortcomings.

Yes, current projections say that they're overpaid, but these projections have a way of going horribly wrong.


> "means that they're the only govt institution that isn't a timebomb of future obligations"

This was the spirit in which the requirement was enacted. I completely agree with the idea. My problem with this line of reasoning is this:

Why fund the estimated lifetime costs of health care that will not be provided for maybe 40 or so years into the future? Why not fund a moving window of ten or twenty years of obligations instead?

There is a huge opportunity cost associated with stashing such a large amount of money away for this purpose rather than using it for current operating expenses.

> "Yes, current projections say that they're overpaid, but these projections have a way of going horribly wrong."

Indeed, projecting costs for the next half-century is a dangerous thing to do. This is why I question the rationality of handicapping the USPS with the responsibility for funding far-future liabilities based on those projections.

Even if you ignore the opportunity costs of this fund, the inflexibility of the rule as well as the fact that it puts so much faith in projections makes it a highly suspect decision.


> This is why I question the rationality of handicapping the USPS with the responsibility for funding far-future liabilities based on those projections.

You're ignoring the fact that those liabilities are being incurred today. If you can't pay them with current revenues, how are you going to pay them with future revenues that also have to cover the liabilities you'll be incurring when you receive those future revenues?

I'm sympathetic to the idea that there's no way to properly estimate the NPV of this sort of open-ended liability, but that's an argument for not incurring such a liability. It's not an argument for paying less than your best guess of said NPV.

> Even if you ignore the opportunity costs of this fund,

There is no "opportunity cost" in not paying NPV now.

Suppose that your landlord offered to let you pay each month's rent starting 1 year from now and over the 10 succeeding years. How much should you set aside? The only answer that keeps you solvent is to set aside the NPV of the payment stream for each month's rent payments each and every month.

Do the arithmetic. After each month, you've got a new liability that you're going to owe after you stop receiving any corresponding benefit. At the end of year N+1, you're making payments on N years worth of rent (capped at 10) and those payments will continue for 10 years after you stop renting. (Yes, they'll decrease over time, but since you won't be getting the benefit of whatever you were renting, it's unclear why you're happy paying for it with "new money" that you probably need for replacement digs.)
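The set-aside arithmetic above can be sketched numerically. A minimal Python sketch with illustrative numbers of my own (the rent amount, discount rate, and payment schedule below are assumptions, not figures from the thread):

```python
# Hypothetical numbers: $1,000/month rent, each month's rent deferred
# for a year and then paid in 120 equal monthly installments, with a
# 0.3% monthly discount rate. All figures are illustrative.
RENT = 1_000.0
INSTALLMENTS = 120   # ten years of monthly payments
DEFERRAL = 12        # payments begin one year out
RATE = 0.003         # monthly discount rate

def set_aside_for_one_month(rent, installments, deferral, rate):
    """Net present value of one month's rent obligation: the amount to
    bank now so the deferred payment stream is fully covered."""
    per_payment = rent / installments
    return sum(per_payment / (1 + rate) ** (deferral + k)
               for k in range(installments))

npv = set_aside_for_one_month(RENT, INSTALLMENTS, DEFERRAL, RATE)
# Discounting makes the set-aside somewhat less than the sticker rent,
# but solvency still means banking it the month the obligation arises.
print(round(npv, 2))
```

Each month of occupancy adds another such set-aside, which is the "only answer that keeps you solvent" described above: pay-as-you-go instead leaves a decade of payments hanging after the benefit has stopped.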


> Why fund the estimated lifetime costs of health care that will not be provided for maybe 40 or so years into the future? Why not fund a moving window of ten or twenty years of obligations instead?

Because it has incurred those obligations and nothing that happens in the next 10-20 years will make those obligations go away. However, 10-20 years from now, it won't have today's revenue to cover those obligations. In addition, ten years from now it will be incurring additional obligations.

Those obligations have a net present value. If you don't fund them now, you have to fund the remainder later out of future revenues while you're also trying to fund at least part of the obligations that you're incurring then.

Let's assume steady state. If you always fund the net present value of 10-20 years of the obligations that you incurred in the past and present, you'll eventually end up funding the equivalent of the obligations that you're incurring, but you're behind by the amount that you didn't fund during the ramp. You have to make that up in addition to the equivalent of just funding the NPV of what you're incurring.

Do the arithmetic over time - funding the total net present value of future obligations is the only sustainable way to handle said obligations.
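The steady-state argument can be checked with a toy simulation. All numbers here are illustrative assumptions of mine, not from the thread: a $1 obligation incurred each year, due 40 years later, with no discounting, comparing funding at incurrence against a 20-year moving window:

```python
HORIZON = 40  # years from incurring an obligation to paying it
WINDOW = 20   # the moving-window funder ignores obligations due
              # further out than this

def fund_balances(years):
    """Toy bookkeeping: each year a $1 obligation is incurred, due
    HORIZON years later. The full funder banks it immediately; the
    windowed funder banks it only once it is within WINDOW years of
    coming due. No discounting, to keep the arithmetic visible."""
    full = windowed = 0.0
    for t in range(years):
        full += 1.0                # fund this year's obligation now
        if t >= HORIZON - WINDOW:  # an old obligation enters the window
            windowed += 1.0
        if t >= HORIZON:           # the year-(t-40) obligation comes due
            full -= 1.0
            windowed -= 1.0
    return full, windowed

full, windowed = fund_balances(100)
# In steady state the full funder holds assets matching all outstanding
# obligations; the windowed funder is permanently short by the ramp.
print(full, windowed, full - windowed)
```

The gap between the two balances never closes: it equals exactly the obligations that were outstanding but outside the window during the ramp, which is the shortfall described above.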

A pension promise is not like a mortgage - it's just a debt with no collateral; there's no equity to sell if you can't make the payments.


The Inspector General's report said that the projections would have been accurate but for the USPS earning a higher than expected amount of interest on its assets. To recalibrate based on this windfall seems foolish to me.

If you take the view as I do that an institution ought to make choices that will keep it around for hundreds of years, spending money conservatively enough to afford promises made to one's employees becomes a wise choice instead of a suspect decision.

That said, I do agree with the Inspector General that the USPS is unnecessarily entangled with the US government, and would be able to adequately fund its defined-benefit obligations without legislation (as private companies do), and also regulate overpayments and underpayments better without relying on the GAO or Congress.


Did you read past the introduction? It's being upvoted because it's provocative, it's by Rushkoff, and there is a certain unease about the future that this piece speaks to in a hopeful tone.

It's not a protest against technological change at all. It's someone trying to alert others to possibilities that technological change might have for the human condition, if some lateral movement could be accomplished.

Technological progress, creative destruction and the business cycle are all fine and well, but the transition states (Great Depression, WWI & II, Third World Debt Crisis--take your pick) are not particularly kind to all concerned. There's no rational reason to expect the future to be any kinder, breezy appeals to conventional wisdom notwithstanding.


I think that the title was just to grab your attention. I don't agree with everything in the article, especially the idealization of the Middle (or 'Dark') Ages, but I think the kind of work people do 30-50 years from now will represent as great a shift as the move from the farm to the factory, or from the factory to the office (though probably not as big a shift as the agricultural revolution!)

I think the author is making the case that more people will work for themselves than at any time since the Renaissance, and perhaps work from home more than at any time since the Industrial Revolution began.

In Rushkoff's defense, he's distinguishing between work and jobs.


Rushkoff writes: "What we lack is not employment, but a way of fairly distributing the bounty we have generated through our technologies, and a way of creating meaning in a world that has already produced far too much stuff."

To wit, the public has discovered it can vote itself largesse out of the public treasury. With the lofty assumption that the idle, lacking want, will pursue productive, meaningful occupation, he fails to observe that London-like riots are the more likely outcome.


I think that his argument is flawed, but there's some merit to the idea that what a job means will, should and has shifted.


>he fails to observe London-like riots are the more likely outcome

You're suggesting that the London riots, in a country where income distribution and control of wealth and the political process are nearly as skewed as in the US, with all the attendant social ills, and where civil rights are an even bigger joke than in America, are the result of social democracy run amok? Do you know what social democracy is?

I don't disagree that many people will not pursue productive work if they're free from want. In fact I suspect most won't. But I fail to see why that makes a difference.

I think the core problem is whether technology will make most people obsolete and, if that happens, what we should do with all the people. What do you do with people who really will not and can not ever contribute back to society what they take from it? We had less of a problem with letting them starve back when there really was work for everyone to do, but that might not be the case for much longer, if indeed it is now.


"I don't disagree that many people will not pursue productive work if they're free from want. In fact I suspect most won't. But I fail to see why that makes a difference."

The problem is that most of the idle people will have an unsustainable amount of children.

I've sometimes thought that only people demonstrably able to support their children should be helped to do so.

I'm not extremist enough to suggest outright prohibition, China-style, but some kind of discouragement and maybe social stigma, plus education (would it be "indoctrination"?).


"The problem is that most of the idle people will have an unsustainable amount of children."

Madness.


Could you elaborate? We have a welfare system in Uruguay, and it is an observable fact that people on welfare have more children than people not on welfare.

Whether that is caused by welfare, idleness or other factors should be subject to studies, but that's what I'm talking about.


Just saw this now. Poor people, I think, have more children with or without welfare; it's not because of welfare that they do. How much welfare is there in Africa?


>>I think that the title was just to grab your attention.

Unsupported, inaccurate, inflammatory titles are called link bait/spam.


Well, like The New York Post. What do you expect from CNN!?!


In defense of the article's author, it may sound like blasphemy but he brings up some very good points about job creation.

The problems with the USPS might not have begun with email, but email will probably be what kills it (and the fax machine is doing a pretty good job as well).

Automation is a reality we are going to have to face. At some point, it is going to kill almost every manual labor job in the world, probably within our lifetime (IMHO). What do we do with "unskilled" labor then? As I see it, I think living will probably be more or less "free" in the future. When automation is so great that it can rely only on the earth's basic resources and natural energies, food, utilities, and basic housing will probably be affordable enough for anyone, if not outright free.


> Does anyone here not remember the Industrial Revolution? The creation of the automobile? The switch to trains for long-distance travel? The usage of water or wind to turn mills to make bread? Pick something! This is the progress of civilization.

Civilisation is not axiomatically good. This is one of my favourite critiques of civilisation: http://ranprieur.com/essays/beyondciv.html


If you enjoyed reading that, then perhaps you would enjoy reading a book that has come to occupy a treasured space on my shelf:

http://www.amazon.com/Science-Technology-World-History-Intro...

The authors provide an overview of world history through the lens of technological advancement. The chapter on physics is a bit rough, but other than that I think that the book is quite solid. It should be required reading for anyone that wants to have a good overview of the march of human civilization.

Edit: By the way, I did not downvote you, but the likely reason for the downvote is that the individual that you link to does not seem to have the necessary qualifications to provide a strong critique of civilization.

Remember, just because someone writes about a topic does not mean that they are an authority on this topic.


I disagree.

Your belief that "new jobs take their place" will only remain true so long as increases in demand can keep pace with productivity. Sure, there's more demand out there in the developing world, but demand is finite in a way that productivity increases are not.

If we're not turning that particular corner today, we will someday; and from that point forth, there will be an inexorable decline in employment.

What happens then?


I think there is some inductive logic in here; assuming the future will look like the past simply because previous futures have looked like previous pasts is not entirely reliable.

It's safe to assume that things will generally regress to their mean, but they often don't - stock market movements are way more volatile than they should be, for example.

It's possible that technology will advance far enough to perform all of the mundane but necessary work, leaving us to do something else with our time.

I'm not sure when that will happen or how it will be received, but it would be cool to not have to do things you didn't love just to stay alive.


So far as I know, the Google Car has not displaced any taxi drivers yet, though the author of the post puts it in the same sentence as EZ Pass.


No mercy. Let em go out of business. It is a corrupt monopoly. Try to start a competing taxi business (or tow business) and you'll see you do NOT live in a free market. Again: no mercy. After decades they'll be getting theirs. (Yes, I tried once upon a time.)


What happened?


They are the only government institution required to do this, and it has crippled their ability to remain profitable

The post office may be the only government institution required to do this, but every private institution is required to do it.

http://thomas.loc.gov/cgi-bin/bdquery/z?d109:HR00004:@@@L...;

Since the post office is supposed to be more or less independent of the government (i.e., profitable all by itself, rather than govt. subsidized), it's quite reasonable to demand that it be subject to the same pension regulations as the private sector.


The problem the USPS is having is not about prefunding pensions and pension plans. Their problem is that they are required to fund future employee health care benefits. No other company in America - public or private - is required to do this.

This is a side note, but... By the way, I am actually a frequent reader of your blog, and I really miss your posts on economics. Link for the curious:

http://crazybear.posterous.com/

Good stuff!


After googling this, you appear to be correct. However, the problem is not that the USPS is required to do this - the problem is that everyone else is not.

There is absolutely no reason to treat pension and non-pension obligations differently, and the fact that we treat the USPS differently than everyone else is not an argument to let the USPS off the hook. Instead, we should force both the public and private sector to properly fund their post-employment obligations.

As for the blog, I'm building a startup, and haven't had time to post. I'm planning a post on color measurement soon, however.


Let em go out of business. Back when I was jobless they never would have given me a second look, even though I could have streamlined their business (and still can).

No mercy for stupidity, inefficiency and incompetence..


And this is why more web developers need to be aware of and use things like jQuery Mobile:

http://jquerymobile.com/

If you look at the platforms list on that page, you will see that you can actually use this framework to build a mobile site that looks and works decently on an extremely wide variety of devices.

One of the best things about jQuery Mobile is that it is extensively documented. Just go to the docs and demos and view the source for any page that impresses you. You will instantly be able to see the markup that you need to duplicate that page's functionality.

(Tip: Before viewing the source, make sure that you remove everything to the left of the # in the url, including the # itself. Just check out the docs, you'll know what I am referring to.)

Using a good mobile framework, it is trivial to provide a mobile version of sites that already exist on the Internet, so why not do it and gain access to an additional audience?


When mobile web developers see what's possible with native apps and how successful they are, what do they do? They try to clone them in typical "webapp-uber-alles" fashion, instead of building quality mobile websites. And that's why I have to look at dumbed down iOS-clone UIs while trying to browse a website from my phone.

By the way, I have just looked at jquerymobile.com from an iPhone 4 and the UX is unresponsive, showing the checkerboard pattern, flickering and so on. Not something that I'd want more web developers to be aware of. I would want more developers to be aware of web standards and good design, not JavaScript hacks.


The techniques of mobile web development are still young; a general understanding of how to build sites for mobile interfaces hasn't emerged yet.

For now, it is understandable that you will see some imitation of successful "app" design patterns, but things can change quite easily. If it turns out that the iOS look is not good for the mobile web experience then things will change.

"By the way, I have just looked at jquerymobile.com from an iPhone 4 and the UX is unresponsive, showing the checkerboard pattern, flickering and so on."

The project is still in beta, and there are still some rough edges. It's an open source project, so the developers and users would greatly appreciate it if you could file a bug report about this issue:

https://github.com/jquery/jquery-mobile/issues

I did a quick check and I didn't see that your issue has already been reported, so your contribution would be doubly helpful.

"I would want more developers to be aware of web standards and good design, not JavaScript hacks."

I understand your perspective (all developers should definitely know web standards and good design), but, surely, you would agree that there is value in trying to do something a little different? Today's javascript hack may well turn out to be tomorrow's revolution.


I've spent some time with jQuery mobile and the performance is just too poor for it to be usable. What's crazy is that regular jquerymobile.com is more responsive than the 'Docs and Demos' section that is built using jQuery mobile on my phone (Nexus S).

I've seen the same sluggishness in jQuery Mobile apps I've tried building on the iPhone4, Palm Pre and a couple of Android devices. It's too bad really.


In its current state, jQuery Mobile is too slow to be a universal solution for mobile.

That, and it makes everything look like an iPhone app, and there are enough people out there who dislike that particular sense of aesthetics.

I honestly don't see jQuery Mobile having the same "thing" that plain jQuery had which made it successful: it stayed out of your way and just worked, without making any assumptions about what you liked and what you wanted beyond that.

Any word on how well knockout.js or other options are working on mobile?


> I honestly don't see jQuery mobile having the same "thing" as plain jQuery had which made it successful: It stayed out of your way and just worked, without making any assumptions about what you liked and what you wanted beyond that.

jQuery Mobile is following the groundwork laid by jQuery UI, not plain jQuery.


I have just completed an R&D project for my company on using JQuery Mobile, with knockout.js, so perhaps I am qualified to answer this.

As another commenter pointed out JQuery Mobile (JQM) is following on from JQuery UI, not from straight JQuery. Knockout.js is a databinding and templating system. As such JQM and knockout.js are complementary instead of competing. Our system used knockout for databinding into the JQM UI.

My recommendation coming out of the R&D project was to use knockout for production apps and to continue to watch JQM, but not use it for production just yet. JQM is still evolving rapidly - there were a number of major changes made during our R&D project. We also ran across a number of issues making JQM work with knockout. For example, JQM didn't like knockout changing some of the HTML dynamically after JQM had been initialised. The JQM team are aware of the problems dealing with changing HTML via javascript and we saw a number of fixes committed during the course of our R&D project. One day soon JQM will be very good, but it's not quite there yet.

Furthermore not every mobile application will benefit from JQM. Unless you are building an essentially forms and data driven application, it may not be very useful. Depending on your needs you will be able to get very far by using knockout to produce clean semantic markup, then just applying CSS over the top.


Dr. Taleb's book, The Black Swan, was a truly interesting treatment of long-tail statistical events and how humans perceive them. It is unfortunate to see him indulging in so much hand waving in this article.

"For banks that have filings with the US Securities and Exchange Commission, the sum stands at an astounding $2.2 trillion...... is directly transferred from the American economy to the personal accounts of bank executives and employees"

I don't understand where this number is coming from. According to data from the 2007 Economic Census[1], the U.S. commercial banking sector employed (approx.) 1.6 million employees and compensated them with a total payroll of (approx.) 95.8 billion dollars. Total payroll for the finance and insurance industries is $502 billion for 6.2 million employees.

What is this $2.2 trillion that he is referring to? In his opening statement, he makes it sound as if this amount is direct compensation, but the actual numbers for the industry make that figure seem unlikely. (I am not an accountant, but I seriously doubt that some type of compensation that doesn't appear on the annual payroll is going to make up the bulk of that difference.)
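A rough sanity check on those census figures, assuming (as this reading does) that the $2.2 trillion is being treated as annual direct compensation; the per-employee averages below are simple arithmetic on the numbers quoted above:

```python
# 2007 Economic Census figures quoted above
banking_payroll = 95.8e9    # commercial banking, total annual payroll
banking_employees = 1.6e6
sector_payroll = 502e9      # finance and insurance combined
sector_employees = 6.2e6
taleb_figure = 2.2e12       # the article's claimed transfer

# Average annual pay implied by the census numbers
print(round(banking_payroll / banking_employees))  # roughly $60,000
print(round(sector_payroll / sector_employees))    # roughly $81,000

# Read as annual compensation, the $2.2 trillion would be more than
# four times the entire sector's reported payroll.
print(round(taleb_figure / sector_payroll, 1))
```

Whatever the $2.2 trillion is counting, the census payroll totals make it implausible as a yearly wage bill.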

"That $5 trillion dollars is not money invested in building roads, schools, and other long-term projects"

First of all, this $5 trillion number is a projection, and projections need to be analyzed critically[2]. Second, this is wrong (as pointed out by yummyfajitas elsewhere in this thread). So long as money is not being (quite literally) stuffed into a mattress, it is doing something in the economy. Now, whether that something offers benefits over its opportunity cost may be a product of market failings and questionable regulations, but that can hardly be blamed on the banking industry.

"It is (now) no secret that they have operated so far as large sophisticated compensation schemes, masking probabilities of low-risk, high-impact “Black Swan” events and benefiting from the free backstop of implicit public guarantees."

I think that it is disingenuous to paint the entire industry as being composed of individuals out to cheat the system. I think that it makes more sense to note the behavioral economics at work.

A disturbing number of financial institutions were making money from home mortgage loans. Any financial institution not doing so would be left out of the crowd. Regulations allowed massive leveraged positions to be taken in the market. Financial companies became more and more aggressive (after all, the models say everything is fine, and if we don't play along, shareholders will question our financial performance). The bottom fell out and everyone got clubbed (some worse than others).

There is no great, malevolent force working in finance. The industry was governed by (and possibly still is) some pretty lackluster regulations, but it is a questionable premise to paint the entire industry as willfully fraudulent.

"it also has provided secret loans of $1.2 trillion to banks."

Activity by the Fed is not "secret"; rather, it is a matter of public record: http://www.federalreserve.gov/

(Or, more specifically: http://www.federalreserve.gov/releases/h41/current/h41.htm )

"We don’t believe that regulation is a panacea for this state of affairs. The largest, most sophisticated banks have become expert at remaining one step ahead of regulators"

This is not really the case. It is more the case that the regulators are often underpaid (relative to what they could make in the private banking industry) and over-worked (due to staffing cuts). This creates an accelerated revolving door between the public and private spheres. This revolving door makes it very hard to get any good regulations out of our current institutions.

"A well-functioning market would produce outcomes that favor banks with the right exposures, the right compensation schemes, the right risk-sharing, and therefore the right corporate governance."

A well-functioning market would favor the banks that offer the best risk-return ratios. Now whether or not the measure of risk is correct will only be obvious in hindsight, but market outcomes are a product of aggregate information on a grand scale. Exposures, compensation schemes, risk-sharing... These are not the kind of factors that are rewarded in a free-market (unless his definition of well-functioning is some variation of central control).

If you want to institute a change in the incentives within the finance and banking industry, just institute a very, very small transactions tax. This would decrease the viability of trading for its own sake (make enough trades fast enough, have a slightly more than random number of successes, and profit).

It wouldn't fix everything, but if you are a critic of the current structure of the industry, then a small transaction tax is the thing that is most likely to move the industry further away from the temptation to function like a glorified roulette wheel.
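To make the incentive argument concrete, here is a toy calculation (all numbers are hypothetical, chosen only for illustration): a strategy that depends on a tiny per-trade edge at very high volume is exactly the kind of trading that even a small per-transaction tax would discourage, while leaving long-horizon investment essentially untouched.

```python
# Illustrative sketch with made-up numbers: expected daily profit of a
# high-volume strategy, before and after a small per-transaction tax.

def expected_profit(notional, edge_bps, tax_bps, trades_per_day):
    """Expected daily profit in dollars: per-trade edge minus per-trade tax,
    both expressed in basis points (1 bp = 0.01%) of the traded notional."""
    per_trade = notional * (edge_bps - tax_bps) / 10_000
    return per_trade * trades_per_day

# A hypothetical 2-basis-point edge on $1M per trade, 500 trades per day:
print(expected_profit(1_000_000, 2.0, 0.0, 500))  # no tax: 100000.0
print(expected_profit(1_000_000, 2.0, 1.0, 500))  # 1 bp tax: profit halves to 50000.0
print(expected_profit(1_000_000, 2.0, 5.0, 500))  # 5 bp tax: -150000.0, strategy dies
```

A buy-and-hold investor making a handful of trades a year barely notices the same tax, which is why it targets churn rather than investment.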

I'm not even going to address the problems with Dr. Taleb's proposed solution to problems in the banking industry (last three paragraphs of the article). It makes no sense.

[1] 2007 is the most recent data available. It is not the most up to date (the banking sector is likely a bit smaller now), but it is worth using some actual facts and figures. Direct access here (industry code 522110 for commercial banking): http://factfinder.census.gov/servlet/DatasetMainPageServlet?...

[2] http://xkcd.com/605/


Respectfully, I think you are overanalyzing something that is very simple.

1) Humans are very good at cheating the system. In fact, we are built to cheat, and we have come up with amazing ways of cheating and scamming people out of their money. To ignore this is to ignore hard facts that are right in our face.

2) Most people are honest. But it only takes a few dishonest people to do a lot of damage. And with the right amount of money, you can get away with anything in this country.

3) Corruption is the most seductive activity you can engage in with your clothes still on.

4) It goes beyond conspiracy at this point - we can literally see money transferring hands. Is it not funny that almost everybody was against the bailouts, but the govt. green flagged them regardless? Is it also not a bit strange that every past Secretary of Treasury for the past few decades has been affiliated with Goldman Sachs?

As for free markets and regulation:

Do you believe in evolution? I do. Evolution gets a LOT of things wrong in the beginning, but in the end, it usually produces pretty ideal designs. Evolution is what happens when you have an environment free of artificial constraints.

Likewise, I think that the best economy will be produced by many mistakes, and learning from them. One thing I am pretty sure we will learn is that no corruptible entity should ever control an economy, because corruption is always the end result. (All hail our new robot overlords :D)

And I will just leave this here: http://en.wikipedia.org/wiki/Friedrich_von_Hayek I'm sure you have probably read up on this guy.


I appreciate your thoughtful reply. While I am making the transition to a web developer at the moment, I have a graduate degree in Economics and I have spent some time working in the investment industry (Main St, not Wall St). I would just like to share an alternative viewpoint with you:

1) I wouldn't think of it as "cheating the system", but I would think of it as rational self-interest. Given a set of constraints, people will optimize for the best personal outcome.

2) People maximize their utility (utility is a measure of happiness). Whether someone is honest or dishonest is nothing more than a preference input into their utility maximization functions.

What this means is that if the punishment for breaking some law is exceeded by the benefits (to the individual) of doing so, then that individual will break the law. Think about how many people casually drive 5 to 10 miles over the speed limit (or 20 to 30 down here in Texas).

With that said, whose dishonesty is responsible for the housing collapse:

* Were the homeowners dishonest for taking on loans they could not afford to pay back?

* Were the lenders and real estate agents dishonest for making the loans to the home owners?

* Were the finance guys responsible for thinking that risk could be mitigated using the tools of finance?

* Were the investors being dishonest for cheering on returns that were above market-average without considering the risk?

Painting a market with the brushes of corruption and dishonesty does very little to advance an understanding of how the event happened and what can be done to prevent it in the future.

4) Professionals with an understanding of the financial industry were generally in favor of the bailouts. The government listened to the professionals.

Goldman Sachs hires a lot of economists and finance guys, and they hire the people with the best resumes. One should hardly scream conspiracy if it turns out that some of these "cream of the crop" individuals end up working in the treasury. Statistically, the probability of it happening randomly is actually quite high.

"Likewise, I think that the best economy will be produced by many mistakes, and learning from them."

I completely agree with this.

"One thing I am pretty sure we will learn is that no corruptible entity should ever control an economy, because corruption is always the end result. (All hail our new robot overlords :D)"

There is no need for us to resort to robots. We simply have to understand that people are self-interested and build a system that channels this self-interest into outcomes that are efficient for society as a whole.

"And I will just leave this here: http://en.wikipedia.org/wiki/Friedrich_von_Hayek I'm sure you have probably read up on this guy."

Hayek was an amazing logician, but the application of his theories to the modern economy is unproductive. The questions that he asked have been answered by modern economics in the 50 years that have passed since he asked them.

I consider myself an advocate for breaking down the wall that exists between modern economics and people that would like to increase their understanding of modern economics without being economists themselves. If you have anything that you would like to ask me about anything that I wrote above (or any of Hayek's specific points), then I would be happy to answer them (regardless of the beating that my karma takes).


Thank you for your thoughtful response to my response :)

As for who was responsible - I agree that finger-pointing and conspiracy does not really get us anywhere. In fact conspiracy is the least of my concerns...I am more concerned about the money that is being handed out in plain sight.

When it comes down to it, you cannot pin responsibility on any one party, but one thing did happen as a result, and some of this result was intentionally manufactured. That end result is the poor/middle-class getting poorer, and a few (transparently) dishonest people getting richer. But worst of all, a huge blow was taken on the economy that is affecting everybody, including the (honest) wealthy classes.

You can trace the beginning of the recession back 15 years or so. The funny thing is that people try to point fingers at Democrats or Republicans exclusively, while both parties have done their fair share of damage. And even US citizens have done a lot of damage, but there is a difference between being reckless and being naive. The former describes the people who leveraged debt, the latter describes your average US home buyer.

When it comes down to it, maybe no one was singularly responsible, but that does not really matter. What matters is that wealthy owners of private organizations were given tax-payer money. Executives were given golden parachutes even though their companies were bankrupt - guess who paid for these? Tax payer money was funneled into private companies with the purpose of "saving" them, only to have them trade as junk stock within a few months. Honest people were given mortgages that would soon turn into foreclosures - no one's fault, but I think they should be the people bailed out, not these corporations.


"There is no need for us to resort to robots. We simply have to understand that people are self-interested and build a system that channels this self-interest into outcomes that are efficient for society as a whole."

Here's the difficulty that Hayek was pointing out in his book Road to Serfdom. Who gets to decide what the right outcomes are and whether they are efficient?


"Who gets to decide what the right outcomes are and whether they are efficient?"

Whether or not an outcome is "right" is subjective, and is more in the field of law and philosophy than economics.

Whether or not an outcome is efficient for society does not need to be determined as it can be measured. An outcome is efficient for society if it makes at least one individual better off without leaving any other individuals worse off. (This is Pareto efficiency).

Of course, in the real world there will almost always be winners and losers with any policy change. The solution is thus to make sure that the total net benefits of any policy change are positive.

The net benefits are the benefits (both implied and explicit) of a policy minus both the costs of doing the policy and the costs of potential gains that could have been had by pursuing alternative policies. Both the costs and the benefits should be aggregates that include the costs and the benefits to all parties affected by a policy.
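As a minimal sketch of that rule (all figures are entirely hypothetical), the criterion reduces to a simple aggregate sum:

```python
# Sketch of the aggregate cost-benefit rule described above, with made-up
# numbers. A policy passes the test when net_benefit > 0.

def net_benefit(benefits, direct_costs, best_alternative_benefit):
    """Aggregate benefits to all affected parties, minus direct costs,
    minus the opportunity cost of the best forgone alternative."""
    return sum(benefits) - sum(direct_costs) - best_alternative_benefit

# Hypothetical road-repair policy: benefits to commuters (120) and freight (40),
# construction cost (90), and the benefit the same budget would have earned
# in the next-best project (50). Units are arbitrary.
print(net_benefit(benefits=[120, 40], direct_costs=[90], best_alternative_benefit=50))
# → 20 (positive, so the policy passes the aggregate test)
```

The "fast-food" version described below is what you get when `benefits` and `direct_costs` shrink to only those accruing to the organization doing the analysis, and the opportunity-cost term is dropped entirely.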

One thing that I think is worth thinking about when you look at recent developments in the markets is this. From the time period of roughly the 1950s to roughly the 1980s this form of detailed cost-benefit analysis was popularly employed by both governments and private companies. (I am not actually this old, but I have heard this story multiple times from economists considerably more seasoned than me).

Unfortunately, starting in the 1980s, a more expedient form of cost-benefit analysis that focuses mostly on the immediate costs and benefits to the organization conducting the analysis began to dominate. This type of analysis was championed by the finance and accounting oriented economists that began to spring up at around this time. (Disagreement over this and other issues led to university economics departments around the nation dividing from business departments and finding a new home - and less funding - with the liberal arts and social sciences.)

It is expensive and time-consuming to conduct a thorough impact analysis, so I can understand why the more academic approach would fade away in favor of something more expedient. However, I think that a lot of the recent problems we see in governance can be traced to this fast-food economic analysis:

* Lack of investment in infrastructure? - Well, this report on my desk says that it will be expensive to fix those roads and the ones we have now seem to be holding up just fine

* Internet providers want to throttle bandwidth based on its source? - Makes sense, according to this report on my desk providing bandwidth obviously costs money - why not charge for it?

* A tax on transactions? - Well, this report on my desk says that would make it more expensive to trade stocks - there's no way this could be beneficial.

While the right answers to many of these questions can be found either in economic journals or by speaking with an independent consultant, policymakers rarely have the enthusiasm for a topic to dive so deeply.

I hope that soul-searching due to the financial meltdown brings the old approach back in favor. It is sorely missed.


> 1) I wouldn't think of it as "cheating the system", but I would think of it as rational self-interest. Given a set of constraints, people will optimize for the best personal outcome.

In doing so they did break a number of laws, isn't that "cheating"?

But technicalities aside, the point is that any adult could look at the results of their actions, see that they would produce negative outcomes for society, and decide to stop doing them, regardless of the law.

If these were Montana hermits hiding from civilization I'd cut them a break but these are giant institutions demanding not only legal protection from people trying to reclaim their funds, but bailouts as they hold our economy hostage.

> With that said, whose dishonesty is responsible for the housing collapse:

> * Were the homeowners dishonest for taking on loans they could not afford to pay back?

> * Were the lenders and real estate agents dishonest for making the loans to the home owners?

> * Were the finance guys responsible for thinking that risk could be mitigated using the tools of finance?

> * Were the investors being dishonest for cheering on returns that were above market-average without considering the risk?

Yes.

But the bankers who saw the big picture were more dishonest and more responsible than the mortgage pushers and the investors, both of whom should have known it was too good to be true; and all of them were more responsible than the consumers, whom we don't actually expect to be actuarial experts, and who did what their bank and society were encouraging them to do.

But yes, to the degree that they couldn't read that fine-print and educate themselves appropriately, then lobby to change things, they are ultimately responsible as they are where the government derives its mandate - to the degree that it does.

> Painting a market with the brushes of corruption and dishonesty does very little to advance an understanding of how the event happened and what can be done to prevent it in the future.

The reality is corruption and dishonesty, painting that picture is the only reasonable thing. Yes, we do need to understand that humanity is rife with the willingness to lie for gain, but we don't have to embrace it and treat it as okay just because it's normal.

"Normal" in that context is a branch up-side the head and the other guy is "right". But we strive for more than that, and need to be held accountable when we hurt others by failing.

We need reality-based finance, which assumes every other player is Mallory or Eve. Like with security.

> Goldman Sachs hires a lot of economists and finance guys, and they hire the people with the best resumes. One should hardly scream conspiracy if it turns out that some of these "cream of the crop" individuals end up working in the treasury. Statistically, the probability of it happening randomly is actually quite high.

Not at all. You approach it like working for or against corruption is the flip of a coin. Most people who'd go into regulatory agencies for reasons the public would approve wouldn't be interested in an industry job after. The fact that there's such a crossover only goes to show the positions are being held by amoral defectors.


I don't understand where this number is coming from. According to the data from the 2007 Economic Census...

Then go check the SEC filings. If they refute Taleb's numbers, bring in the Census as corroborating data.

I think that it is disingenuous to paint the entire industry as being composed of individuals out to cheat the system. I think that it makes more sense to note the behavioral economics at work.

This is precisely what happened, though. The large investment banks went public and externalized their risks. 10 years of bonuses (privatized profit), then a big blowup (socialized loss). Was it planned that way? Does it matter?

By contrast, private partnerships never developed as much exposure or lost as much money, since they could not externalize risks to the same degree.

I bet you worked for an LLC, no?

There is no great, malevolent force working in finance. The industry was governed by (and possibly still is) some pretty lackluster regulations, but it is a questionable premise to paint the entire industry as willfully fraudulent.

Why not? Because you were once involved in investment management yourself, and you are a good & decent person, and therefore the industry as a whole must be comprised of similar people and follow similar patterns of behavior and judgement?

This is not really the case. It is more the case that the regulators are often underpaid (relative to what they could make in the private banking industry) and over-worked (due to staffing cuts). This creates an accelerated revolving door between the public and private spheres. This revolving door makes it very hard to get any good regulations out of our current institutions.

So you think paying regulators more would somehow auto-magically deal with regulatory capture? Seriously?


"Then go check the SEC filings."

Which ones? There are quite a few companies in the finance industry. If we are going to paint them as good guys and bad guys, then it is important to note that those are subjective criteria.

"This is precisely what happened, though. The large investment banks... Was it planned that way? Does it matter?"

Well, if your objective is to tar and feather people then I would suppose that it is important to find a villain.

However, if your objective is to understand what happened and how it could be prevented, then I think it is important to ask if something was planned or if it was a product of a flawed system. I can understand why you may disagree with me.

"Because you were once involved in investment management yourself, and you are a good & decent person, and therefore the industry as a whole must be comprised of similar people and follow similar patterns of behavior and judgement?"

No, actually quite the opposite. Please read my reply to another gentleman here:

http://news.ycombinator.com/item?id=2962088

If you have any questions, please ask me, I would be happy to discuss with you (regardless of the beating that my karma would likely take).

"So you think paying regulators more would somehow auto-magically deal with regulatory capture?"

There is no magic solution for regulatory capture, but the disparity in pay is a glaring problem. People respond to incentives.

Right now, many people view regulatory agencies as a career springboard to higher-paying positions within the large investment banks. If the pay at the regulatory agencies were more in line with what an individual with a similar background would earn elsewhere, that would dramatically decrease this effect.

While we are talking about fixes, one more possible fix would be to dramatically restrict the creation of derivatives, because their complexity exceeds their usefulness as financial instruments at this point in time.

These are the types of things that would be useful to talk about. Dr. Taleb has both the knowledge and the breadth of experience to talk about them. Unfortunately, he chooses to focus the majority of his influence on labeling people within the banking industry as ethical and moral evildoers. (South African apartheid? Really? I find the implied analogy to be disgusting.)


Taleb says the $2.2 trillion is for 5 years - which is ~5*$502B (total payroll for finance, as per your information). So it looks ok to me.


Taking Spyro7's estimate of 6.2 million employees in the financial industry that comes out to $2,200,000,000,000 / 5 years / 6,200,000 people = $70,968/year salary per person. Sounds like a lot less of a robbery when you divide it out.
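A quick sanity check of that division:

```python
# Checking the arithmetic above: $2.2 trillion over 5 years,
# spread across 6.2 million employees.
total = 2.2e12
per_year_per_person = total / 5 / 6.2e6
print(round(per_year_per_person))  # → 70968
```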


That is a bail-out salary from the tax payer! I'd say that's a pretty hefty robbery, I believe it is well above the average per-capita income, so for most people this is more than a year's wages handed to them by the government.


Guess that's what I get for doing math with no context. Thanks for the reminder to RTFA.


This is a fantastic post. It is well written, and I agree with many of the author's points. Far from being angry, I think that the majority of people reading here on HN actually appreciate this type of critical analysis, and I think it is a shame that it had to be posted from a throwaway account. However, I would like to present some alternative viewpoints on a few of the issues brought up in this post.

"As a doctor, however, someone like this - a top professional at the peak of their career - would probably make about $400,000. Partners at big law firms commonly net a million a year. Investment bankers are making several million (post-crash!). Top management consultants easily clear $500,000. Even a top accountant - probably a partner at a big 4 firm - would make two, three, or four times as much."

Hold on a second. What is the point that you are trying to make in this post? You say that you are comparing computer programmers to other highly skilled professionals, but then you narrow your focus to the highest percentile in each category. How many lawyers are partners in big law firms, out of the total number of lawyers? How many management consultants clear $500,000?

The top performers in every industry will always make a salary that is dramatically higher than the median. However, unless you know the exact distribution of salaries in each industry, you cannot meaningfully compare top performers. What good is it to know that a certain lawyer makes a million dollars a year without knowing how probable that outcome is relative to some more dreary alternatives?
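To illustrate the point with made-up parameters: two lognormal salary distributions can share a nearly identical 99th percentile while having wildly different medians, which is why quoting top earners alone tells you almost nothing about the typical outcome.

```python
# Illustrative only (hypothetical parameters): two professions with similar
# top-1% pay but very different medians.
from statistics import NormalDist
from math import log, exp

z99 = NormalDist().inv_cdf(0.99)  # standard-normal 99th-percentile z, ~2.326

def lognormal_quantile(median, sigma, z):
    # For a lognormal with log-median mu = ln(median) and log-sd sigma,
    # the quantile at z standard deviations is exp(mu + z * sigma).
    return exp(log(median) + z * sigma)

# Profession A: low median, long tail. Profession B: high median, short tail.
p99_a = lognormal_quantile(60_000, 1.20, z99)   # ~ $978k
p99_b = lognormal_quantile(150_000, 0.80, z99)  # ~ $965k
print(round(p99_a), round(p99_b))
```

Both fields can honestly advertise near-millionaire top earners, yet the median worker in A takes home less than half of what the median worker in B does.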

When I started reading your post, I started reading it with the expectation that you were talking about the general market for programmers. Then about halfway through, it seemed to me that you had switched to talking about the very highest performers in the highly skilled labor market. Well, if that is what we are going to be talking about, then we should focus on it.

Look, the highest performers in the computer programming field are no longer called computer programmers. They are called CEOs and there is a high likelihood that they are very, very well compensated relative to the best performers in many other industries. However, as I said earlier, it is pointless to throw around anecdotes about how this 99.999th percentile individual made millions or this one made billions.

So, let's get back down to earth and try to find some passably good numbers (not perfect, but better than nothing) to use as comparison points. Let's look at numbers closer to what someone around the 50th percentile would experience. All of the following links display salary ranges for each field. No, they are not the best samples available, but they are better than going without any data whatsoever.

Note: As stated on the site - all compensation data shown are gross, national from the 10th to 90th percentile ranges.

Physicians: http://www.payscale.com/research/US/People_with_Jobs_as_Phys...

Note: Physicians must complete several years of residency as well as an M.D., so a programmer would already have 5 to 9 years of experience by the time a physician is just beginning.

Lawyers: http://www.payscale.com/research/US/Job=Attorney_%2f_Lawyer/...

Lawyers in the States typically need a J.D. before they can practice, and law school is very expensive. I should also note that the gravy train is slowing down dramatically for lawyers - http://www.economist.com/node/18651114

Software Engineers: http://www.payscale.com/research/US/Job=Software_Engineer_%2...

Sr. Software Engineers: http://www.payscale.com/research/US/Job=Sr._Software_Enginee...

Sr. Business Analyst: http://www.payscale.com/research/US/Job=Sr._Business_Analyst...

System Admins: http://www.payscale.com/research/US/Job=System_Administrator...

Computer Programmers: http://www.payscale.com/research/US/Job=Computer_Programmer/...

Management Consultants: http://www.payscale.com/research/US/Job=Management_Consultan...

Investment Banking: http://www.payscale.com/research/US/Job=Associate_-_Investme...

Accountants: http://www.payscale.com/research/US/Job=Accountant/Salary/by...

Sr. Accountant (numbers look a little screwy here): http://www.payscale.com/research/US/Job=Senior_Accountant/Sa...

Looking at those numbers, it does not seem to me that there is anything particularly wrong with computer programming as compared to other highly skilled professions. As a matter of fact, given that one can become a programmer without needing additional certification, it seems to me, at least, that computer programming is a great field to be in.

"What 'programmers can get rich in startups' really means is 'entrepreneurs can get rich in startups', whether they're programmers or bricklayers."

What is the percentage of bricklayers that get rich creating a startup? I don't have the number offhand, but I do know that there is no "Silicon Valley equivalent" for bricklayers. If there was really that vibrant of a bricklayer startup industry, then, due to agglomeration, you would expect there to be at least a few geographical areas where there was a high concentration of bricklaying startup business being conducted (think something like Wall Street).

"I think it isn't. I think the country would be better off if MIT computer science students, like their neighbors at Harvard Law School, could dream of growing up to be President. And I think we'd all be better of if computer science wasn't just seen as a major for socially awkward nerds."

I agree completely, and, actually, I agree with many of your other points as well. That's the thing, though: when you are talking about programmer respect, it seemed as though you conflated several different "types" of respect - compensation, entrepreneurship, political pull. With regard to the first type (compensation), I disagree with you, because the compensation data suggests that an alternative hypothesis may be true. With regard to the second type (entrepreneurship), I cannot definitively say either way - but neither can you, because the data needed to compare the numbers of successful entrepreneurs across industries does not seem to be readily available.

With regard to the third type (political pull), I agree with you, but I think that perhaps there are deeper issues at work. I have some hypotheses:

1. Perhaps the skills that it takes to do well in politics in the U.S. are somewhat orthogonal to the skills that it takes to build a multi-million dollar software firm from nothing and run it? How could an engineer win an election where the campaigning generally consists of 5 second soundbites and smear campaigns?

2. Maybe the problem is the general youth of the industry. The software industry is in its infancy. Maybe, over time, as it grows deeper roots, it will acquire more political power and influence? This is a fairly likely hypothesis.

Finally, I would like to address one last point:

"When the government wants to bring in more workers from overseas - which obviously lowers salaries, and reduces job security - who do they bring in?"

The problem is actually not so obvious:

* Are the programmers entering the country working in the same exact fields and at the same levels of expertise as the programmers that are local? If this is not the case, then the impact on pre-existing salaries would be negligible.

* Are the programmers entering the country located in similar geographical areas to the programmers that are local? If this is not the case, then, again, you are not likely to see much of an impact.

* Do the programmers entering the country require additional training as compared to local programmers? If this is the case, then they would have lower compensation not because they are willing to work for less but because they are being compensated in the form of additional training.

* Is the industry rapidly growing? If this is the case, then it may be conceivable that existing programmers and programmers entering the country would both benefit as the growing industry has room for them both.

* Of course, one can always increase pay through artificial scarcity, but the problem with doing this is that it ends up costing society by resulting in a deadweight loss - consumer and producer benefits that are never obtained due to artificially high market prices.

* There are quite a few other things, but this post is now more than long enough, and I really need to get back to work.


"If there was really that vibrant of a bricklayer startup industry, then, due to agglomeration, you would expect there to be at least a few geographical areas where there was a high concentration of bricklaying startup business being conducted (think something like Wall Street)."

Wait, think about that for a second.

There is a concentration of tech startups in SV because the internet lets them sell to the rest of the country without issue. Bricklayers / construction are far less mobile because you need to be at the construction site to build a brick wall. Now, nationwide startups represent a fairly small percentage of successful small businesses; their advantage is how quickly they can go from multimillion-dollar companies to multibillion-dollar companies.

PS: Read The Millionaire Next Door and you will find a lot of people in the US who made a few million from those bricklaying startups. The main difference is that it often took them 20 years to get where software companies got in 5.


Schwarzenegger made a ton of money in a literal bricklaying startup, which was a big part of his early success; he built a multimillion dollar business out of it in a timeframe more like 5 years than 20.


I didn't know that. From his bio:

"Bricklaying Business

In 1968, Schwarzenegger and fellow bodybuilder Franco Columbu started a bricklaying business. The business flourished both because of the pair's marketing savvy and increased demand following a major Los Angeles earthquake in 1971"

"By the age of 22, Schwarzenegger was a millionaire, well before his career in Hollywood."

http://arnoldaloisschwarzenegger.com/biography.html


FYI, that "marketing savvy" involved going to door to door and giving homeowners a free appraisal of the state of their chimneys, which would invariably collapse. Because apparently it's a lot easier to push over a chimney than it looks, especially if you are a bodybuilder. Schwarzenegger admitted this on the Johnny Carson show back in the 1980s.

http://blogs.ocweekly.com/navelgazing/2007/04/your_multitask...


"There is a concentration of tech startups in SV because the internet lets them sell to the rest of the country without issue."

I don't think so. The Internet alone is not a sufficient explanation, because it prompts a new question: if geographical proximity stopped mattering thanks to the Internet, then why Silicon Valley rather than some other location?

I think that it is more likely that there is a high concentration of tech startups in SV because the existing concentration of tech companies there offers positive externalities to firms that locate themselves in SV. This is not a very good article (even by Wikipedia standards), but I think that it could help to paint a picture:

http://en.wikipedia.org/wiki/Economies_of_agglomeration

But then, you are right. If tech companies needed to be within a hundred miles of their customers in order to transact business with them, then they would not be able to benefit from agglomeration effects. But I think that there are probably other things that factor into it as well - margins, competition, regulations, etc.

"Brick layers / construction are far less mobile because you need to be at the construction site to build a brick wall."

I agree, this is true. Perhaps I should have been more selective with my examples. It is also true that a side effect of their lack of mobility is that it is more challenging for them to scale vis-a-vis the technology companies.

"PS: Read The Millionaire Next Door and you'll find a lot of people in the US who made a few million from those bricklaying startups. The main difference is it often took them 20 years to get where software companies got in 5."

The OP cited bricklayers as an example of million-dollar startup companies not being unique to the computer industry. I think that your point about how challenging it is to scale more conventional "mobility-challenged" businesses is actually a very convincing argument against the idea that starting a computer company and starting a bricklayer company offer comparable potential rewards.

Holding all other factors constant, if bricklayer startups are more geographically limited than tech startups, then it is reasonable to hypothesize that tech startups have a higher probability of becoming million-dollar companies.

Also, with regards to the Millionaire Next Door, I think that it is worth repeating the old axiom: the plural of anecdote is not data.


I agree with most of what you are saying. However, SV still generates a smaller fraction of millionaires than most people in the tech world might think. Around 200,000 Americans made over a million dollars last year, and most of them took fewer risks than the classic ramen-profitable startup does. A tiny fraction of startups generate billionaires, but when you start looking at expected payout and risk, it's harder to justify as anything other than the best option based on your current skills.


I think (and have no evidence to support this whatsoever) that the temperate climate may factor in to the concentration of bootstrapped ventures in SV as well.

My logic is that being homeless somewhere like Juneau, Alaska would be FAR worse than being homeless in the Valley, so in that sense, a bootstrapped Valley startup with a short runway is less risky than the same startup somewhere with more extreme temperatures.


I'd like to add another point to this: in the case of doctors, you've also got to include the various types of doctors. (As the son of a general practitioner, I'm particularly sensitive about this.)

General practitioners make wildly less than the various specialists which abound in the medical profession. Now, GPs can certainly make a good living; in rich areas >$200,000 a year isn't uncommon. At the same time, for, say, dermatologists, >$200,000 is the average salary [1].

(I, for one, think this is backwards: GPs, both in private practices and as hospitalists, are the primary diagnosticians and almost certainly save more lives than any other kind of doctor.)

And don't even get me started on residency; doctors deserve every ounce of geld they get for putting up with that.

Anyway, not directly relevant to the OP, but I think that saying, "hey, doctors and lawyers are rich, we should be too" is a bit off the mark. Many doctors are rich. Not all.


Why does medicine have the hazing ritual known as residency?


My wife is a physician, so let me chime in. It's not so much a hazing ritual but a very carefully constructed mentorship program. It's intense, and in some cases maybe overly so, but the goal is to immerse them in the environment, and to provide a lot of experience, while being carefully supervised by more experienced doctors. After three years of this, you come out really knowing your stuff.

In software, we don't really have structured learning like this, which is unfortunate. It would be great to have something similar to really make us into a true profession.


Perhaps we will, once software development has as much history and tradition as medicine does.


Don't we, to a small degree already? They are called implementations. Often over promised, under-staffed and under-scheduled time-wise. Long hours are spent, a truckload is learned, experience is gained, etc. Unfortunately, there doesn't seem to be a 3-year limit.


That's actually fairly common in fields where mistakes can be dangerous. Most places refer to it as apprenticeship, but it's fairly common in even less glamorous trades like electrical work, carpentry, etc.


The difference, as far as I know, is that apprentices have normal working hours, while residents are sleep deprived to counterproductive levels and could be barred from becoming a doctor at the end of it. If residents had normal hours like the rest of the doctors, I wouldn't be nearly as apprehensive about it.


I'll hazard a guess that specialists tend to see patients with specific insurance-covered issues, and those issues are the expensive ones. GPs deal with everything and not all of those ailments are worth much, billing-wise, yet take the same amount of time.

Or, 1 hour of a GP's time will usually result in a lower-return ailment than 1 hour of a specialist's.

Or the problem of the Craftsman vs. Assembly-Line Worker. It may be better product, but you can't make 'em as fast. In this case, the Assembly Line people only deal with expensive items.


Yeah, I think you're absolutely right. Insurance really drives this, too: HMOs, especially, are constantly pushing for GPs to receive less and less for a given procedure. Combined with the role insurance companies play in (effectively) deciding what a patient does or doesn't "need," insurance post-1990 has been trying very hard to convince doctors to do a worse job.

My father, for example, had a practice for over 20 years in a small, working and lower-middle class town. He was notorious for taking patients in late and for long waiting times. And everyone wanted to see him. He was also notorious, as it were, for consistently finding and correctly diagnosing issues that other doctors failed to find. He talked to patients about their health in detail, with physicals often taking an hour or more. He would "forget" to charge folks who he knew were struggling to make ends meet.

For over ten years, this worked fine. My family was, frankly, rich, with a yearly income approaching $200,000 at times. Starting in the 1990s, my father ran into HMOs, head-on, if you will. Over the next ten years, his monthly income consistently decreased until his overhead exceeded his revenue, and he was forced out of private practice.

Now, my Dad might not have been the model of an efficient capitalist, but he was a damn fine doctor. I, at least, think it's a damn shame that we live in a world where that kind of care is systematically eradicated.

So, yes, I think it is that GPs deal with "less expensive" issues, but that's also because they deal with every issue. The goal of a medical system should be to avoid expensive issues entirely. The fact that specialists are employed frequently enough (i.e., demand is high enough) that their labor is worth so much is a systemic failing of the medical system.


What happened to your father is just plain wrong.

Growing up, we had a family doctor that we went to for everything and he referred us to a specialist if need be, but otherwise, he handled everything (which was pretty rare, luckily).

I'm slowly trying to establish the same relationship with a new doctor here (well, technically a Nurse-Practitioner).


> How many lawyers are partners in a big law firm out of the total number of lawyers? How many of these top management consultants clear $500,000?

Even with doctors, to be solidly in the above $200,000 layer, you pretty much need your own practice established and smoothly running (with nurses, staff, and everything) -- not every M.D. out there has that, and the effort required to get there is equivalent to the effort required to start your own business (with 80-hour work weeks etc.)


Also guys, doctors are saving lives every day. I would say that as a programmer I do not have that direct an impact on someone's life. I agree a doctor has to set up their own practice before they make that much money. They also go through a lot of school for a very long time. Comparing a programmer to a doctor is not a fair comparison, in my opinion.


Someone wrote the software for the equipment they use. I was recently at the hospital for surgery, and there were scant few pieces of equipment that didn't have some kind of programming in it.


And what % of people writing software write that kind of software? It's a really small number.

What % of doctors have an impact on the health of their patients? Almost all.

We are dealing with statistics here.


In percentages, sure. In the raw number of people, I imagine the numbers are a lot closer than you imagine.

As for the remaining percentage of people, we may not deal with their physical health, but our work can impact their financial or family's well being fairly easily. Some of the code I have written has touched billions of financial transactions that have decided the future of whether people will be able to buy a house, or a car, or any other line of credit.

Not trivial.


Maybe a small percent of software is written for life-critical systems. That's because software is a huge market. But a growing percent of life-critical systems rely on software. How many adverse outcomes (even death) are you willing to accept caused by programming failures?

You know, a big part of the prosecution's evidence in the Casey Anthony trial turned out to be false; it was due to faulty software. She could have been imprisoned due to a software bug! http://en.wikipedia.org/wiki/Casey_Anthony#Evidence

Now consider that anybody can call themselves a software engineer without even picking up a book. Next time your life hangs on the proper functioning of some computer system, think about that.


> They also go through a lot of school for a very long time

And that school is very expensive. Also, many highly-paid positions carry the risk of getting sued by the patient's side for having made a silly mistake (can happen to anyone) or even for things that were unavoidable but which the patient's side believes otherwise.


People should be paid according to the value they provide, not their need. (Correcting for basic necessities.)


That would be ideal. But realistically, most people are paid based on market/political factors.


Maybe not the software you write, but software is used to run all sorts of things, such as medical equipment that could easily kill. Part of the article's claim is that it is the pervasiveness of software that warrants higher compensation, social status, etc.


Funnily enough, many political representatives in China are engineers.


However, look at the age of those politicians (at least in China). They became engineers in the 60's and 70's when China still had a planned economy. They tested well in the national exams, which meant they likely went to an engineering school. Likewise, the fact that they tested well meant they were put on the political path. I wouldn't necessarily draw the conclusion that they are politicians because of some advantage an engineering background gave them, it's more likely the card they drew.


India too. "If you are not an engineer, you are nothing" I was told that by a friend from India once.


I don't want to come off as arrogant but I should say this. People generally do not select engineering as a profession in India- it's one of the default professions for most of the middle class.

I have been involved with computers since I was 6, so for me it was a pure choice. But most of my friends, some of them working for big-name SV places, don't really love or care about software or technology. In fact, some of them actively hate their jobs. Unlike in America, where people actually go into computer science because they love it, in India it's just the default way to a better life.


+1. I would hate to classify any Indian politician as an engineer. More like chose the easiest career path to get a decent amount of respect + a degree and moved into politics.

Influence can buy you grades and admissions in India :/


IIRC many political figures in Latin America are doctors.


Indeed, here (Uruguay) it's, along with lawyers, the most respected profession.

Former (and probably future) president Dr. Tabaré Vázquez is an oncologist (and is an actual practitioner in between political campaigns).

So are several prominent politicians (my own political party's representative, Dr. Daniel Radío, is a medical doctor as well).


Could that be because they don't have to win a media driven popularity contest to 'win' office?


Or that they have thousands of years of history where the civil service is a scholastic meritocracy.


I am a recent CoffeeScript convert. When I first heard about CoffeeScript I dismissed it offhand as a pointless abstraction over JavaScript, which I judged to already be a high-level language. (In retrospect, I am ashamed to say it, but I think that I found the syntax to be intimidating.) I then went on to continue, happily, writing thousands of lines of JavaScript.

Somewhere along the way, I came across Paul Graham's essay "Succinctness is Power" ( http://www.paulgraham.com/power.html ), and I thought about how my productivity could be increased if I had access to a language that offered me the ability to do more with less. That was when I decided to revisit CoffeeScript.

For my first task, I decided to rewrite some of my simpler JavaScript code in CoffeeScript just to get a feel for it. At first, the things that had initially repulsed me were somewhat irritating:

* The lack of parentheses and brackets - I doubted that it would be possible to maintain code readability without them

* The reliance on indentation

* The overall "strangeness" of the appearance of CoffeeScript code (to someone with a predominantly Java background)

Here's the thing though. All of those objections are only surface deep. Once I actually started to code in CoffeeScript, I found that I became completely comfortable with the syntax. As a matter of fact, I don't think that I could ever go back to conventional JavaScript again. Especially because I would lose (among other things):

* An elegant syntax for writing classes in JavaScript

* List comprehensions

* Elegant string interpolation (it's the little things that count)

It is easy to be skeptical about CoffeeScript if you naturally lean away from things that are surrounded by hype, but, take it from me, CoffeeScript really is an extremely valuable tool that no web developer should be without. It is more than just a way to write pretty JavaScript. It is a powerful, flexible, and elegant language in its own right. Don't be turned off by its syntax; give it a try today!
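To give a flavor of those three features, here's a small sketch (the class and variable names are invented for illustration, not from any real codebase):

```coffeescript
# Class syntax: @name in the constructor args assigns this.name automatically.
class Greeter
  constructor: (@name) ->
  greet: (who) -> "#{@name} says hi to #{who}"   # string interpolation

g = new Greeter "Ada"

# List comprehension: squares of 1..5 in one expression.
squares = (n * n for n in [1..5])

# Postfix comprehension used as a loop.
console.log g.greet(w) for w in ["Grace", "Linus"]
```

The class alone compiles down to a dozen or so lines of prototype-wiring JavaScript, which is where the succinctness argument really bites.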


"Functional programming is just the general notion of using more functions, and less computer-architectury things, to express your ideas."

Umm, no, that is not what functional programming is. Functional programming is, basically, programming without side-effects. A purely functional language will have variables that are completely immutable. This is a bit of a shift away from the way things are done in C or C++.

Speaking just for myself, I started into the programming world with Java (not counting QBasic), and I am finding it incredibly hard to wrap my brain around the idea of functional programming.

My understanding is that most programmers that have a background primarily working with non-functional languages have a challenging time initially grasping the concepts of functional programming. However, I would guess that a programmer that learns a functional programming style early on would probably have very little difficulty understanding non-functional constructs.

For more information, I would like to point you towards some easy reading:

* http://en.wikipedia.org/wiki/Side_effect_(computer_science)

* http://learnyouahaskell.com/introduction#so-whats-haskell

Edit: To whoever down-voted me, I would appreciate some explanation of your views. The fact is that functional programming is not just using functions. The article plainly refers to functional programming languages and not simply using functions. There is a difference.


I don't ninja downvote, but if I had to guess I'd say it's because if you're finding functional programming incredibly difficult, you maybe shouldn't be lecturing on what it is.

As my cousin points out, it's true that it's easy to think of functions as just being macros in a language like C. If you call a function which doesn't have a return value, and which modifies some global state, that's not a "real" function, just a stored procedure.

However, if you write a C function which doesn't modify global state, but simply takes an input and returns an output, as even novice programmers do all the time, hey presto— that's the functional paradigm! There's nothing mystical about it, and what the GP is saying is that telling people who have done this that functional programming is something different is fundamentally confusing.
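To make that concrete, here's a small JavaScript sketch (function names invented for illustration) contrasting a "stored procedure" that mutates outside state with a function in the functional sense:

```javascript
// Not functional: reads and writes state outside itself (a side effect),
// so its result depends on every earlier call.
let total = 0;
function addToTotal(x) {
  total += x;
}

// Functional: the output depends only on the input; nothing else is touched.
function sum(xs) {
  return xs.reduce((acc, x) => acc + x, 0);
}

addToTotal(5);
addToTotal(5);
console.log(total);          // 10, but only because of the call history
console.log(sum([1, 2, 3])); // 6, every time, for this input
```

Calling `sum` twice with the same array always gives the same answer; calling `addToTotal` twice with the same argument doesn't leave the program in the same state.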


"I'd say it's because if you're finding functional programming incredibly difficult, you maybe shouldn't be lecturing on what it is."

Thank you for your thoughtful reply. I really do appreciate it.

It isn't the programming that is the hard thing. Writing some code to do something meaningful is not what is giving me a problem. Actually, it is trying to develop a deep understanding of functional programming that I am finding difficult. Maybe the problem is that I am confusing functional programming with purely functional programming?

My apologies to tumult if it came off that I was lecturing in my original post. I find it challenging to get tone perfectly right on the Internet. Face to face conversation or conversation over a phone is far better.

I think maybe this entire thread comes down to a question of semantics. When I program in non-functional languages I tend to lean towards using map/reduce type constructs a lot, but I never really thought of that as "functional programming". I guess I always thought that in order to really do functional programming, you really needed to have some kind of deep understanding of it.

Maybe if I had a formal computer science background, my perspective would be a little bit different because I would have more easily seen the point that you are making. As it stands, my background is in the social sciences and my knowledge of the science part of computer science is sometimes lacking.

Perhaps you are right, I should have been better informed before attempting to inform others. But if I had not typed my original post, then I would have never received your helpful reply and I would still be in the dark!


Maybe the problem is that I am confusing functional programming with purely functional programming?

Yeah, you nailed it. Writing a whole program with purely functional constructs is a little mind bending, since procedurally fundamental things like "printf" don't really exist. But that's just something that's much easier to do procedurally; it's hard for everyone, and there's no real secret to understanding it.

The important thing to remember is that languages don't have just one paradigm. An imperative language with first-class functions (as most high-level scripting languages have) is functional, just not "as" functional as, say, Haskell. You can use functional concepts in a language which isn't purely functional, just like you can use OO concepts in a language which isn't purely OO.

The good news is it sounds like you get functional programming much more than you thought, and you've definitely got the right attitude towards learning.

As long as you're open to being corrected, and honest about the limits of your knowledge, there's nothing wrong with trying to teach something you don't fully understand yourself. Teaching is one of the best ways to learn. Keep it up :)


+1. I fat thumbed you and downvoted, so sorry!


  My understanding is that most programmers that have a background primarily working with non-functional languages have a challenging time initially grasping the concepts of functional programming.
I'd guess that "most programmers that have a background primarily working with non-functional languages" could just be shortened to "most programmers". Take a look at any of the "language ranking" pages around and you'll find that they're all dominated by imperative/procedural languages.

  However, I would guess that a programmer that learns a functional programming style early on would probably have very little difficulty understanding non-functional constructs.
You're right; unfortunately, it seems that most CS curriculums these days just rush people through some basic programming classes in Java/Python (maybe C++) then through a few more "upper level" electives while just skimming over the algorithmic and data structures stuff that FP is most useful for.

I'd argue that the algorithmic / data structures material (maybe with an intro to some FP language mixed in) is much more important than those elective classes, because the elective stuff is much easier to pick up later on (as needed), while someone who doesn't know their algorithms could spend a whole career writing buggy, slow code because of it.

