Hacker News
There's more to Singularity studies than Kurzweil (sentientdevelopments.com)
62 points by fogus on Aug 24, 2010 | 29 comments



I think Kurzweil, not unlike Chomsky, seems like a generally fascinating figure to discuss (early success, strong opinions). His popularity (both positive and negative) can be attributed to a fascination with the character more so than his exact position on one topic or another.

The one thing that I wish he would talk about more is his work in "nutrition, health, and lifestyle" [1]. He seems to make a lot of money off recommendations that aren't commonly accepted in the medical community (he, himself, ingests "250 supplements, eight to 10 glasses of alkaline water and 10 cups of green tea" every day). Why the disconnect with health practitioners? I don't know the guy, and shouldn't make judgement, but find it hard to avoid the thought that he's profiteering off of people's fears and ignorance.

1. http://en.wikipedia.org/wiki/Raymond_Kurzweil#Work_on_nutrit...


His primary list of accomplishments should relieve some of your fears. That much serial entrepreneurship places him securely in the 'comfortable' category, and his continuing work seems to clear him of a profiteering motive. His transhumanist ambitions indicate a deep and sincere belief in his method, which is possibly more than most other dietitians can avow. I think the largest problem with the disconnect is the nature of his interest - very long-term results that are difficult to support with standard research methods or testimonials.


Many pseudoscientific cranks are honest, earnest, and not at all profiteering. But they are still pseudoscientific cranks. What defines a crank is the practice of developing and advocating strong opinions despite a lack of supporting evidence, or in the face of conflicting evidence.


I'm not sure what you have to gain by planting a 'pseudoscientific crank' flag in a response here, but I'll address it anyways.

I don't follow Kurzweil's diet, and I haven't read any of his books, but to the best of my knowledge, Kurzweil invokes no chakras and no crystals. He doesn't promise unrealistic weight gain, and he doesn't seem to be advertising any harmful methods. He may be guilty of overselling the possible benefits of synthesizing several positive regimens (e.g., light exercise, low glycemic index, antioxidants, nutritional supplements), but, with the exception of alkaline water, I don't see any cogent criticism or glaring 'lack of supporting evidence'.

Would you care to supply a context for your tar, or did you just have an agenda?


I only meant to suggest that whether or not someone is intentionally "profiteering off of people's fears and ignorance" in andreyf's words isn't necessarily the best criterion by which to judge their credibility--and conversely, that your defenses of Kurzweil's honesty and earnestness miss the underlying point.


Your underlying point is that he isn't credible; andreyf's point is about profiteering. It's possible he wanted information on credibility, rather than the stated profiteering, but your paternalism has yet to supply any actual answers.


My underlying point isn't about Kurzweil at all. It's that sincerity doesn't excuse being a professional crank. Amateur cranks are bad enough, but professional cranks do benefit from "people's fears and ignorance" at their expense, no matter how sincere and well-intentioned they may be.


But this isn't about cranks. It's not even about Kurzweil! It's about Singularitarians that explicitly aren't Kurzweil, even.

You haven't made any points germane to the conversation here. If you'd like to place some actual points regarding Kurzweil as a crank, or tar all things Singularity with the crank label, or find an actual topic about cranks, your words may be well placed, but until then, your Gricean maxims may need some work.


Here's the conversation as I have understood it:

andreyf (http://news.ycombinator.com/item?id=1629954): Among other things, deemed Kurzweil's nutritional theories (and his commercial promotion thereof) questionable.

You (http://news.ycombinator.com/item?id=1631254): Provided evidence of Kurzweil's sincerity and good motives to defend him from andreyf's criticism, as you perceived it.

Me (http://news.ycombinator.com/item?id=1631299): Illustrated (unclearly, I admit) a fallacy in your argument: Kurzweil's sincerity doesn't completely absolve him of being wrong, irrational, or even crackpottish about his nutrition.

I admit that I was unclear. What isn't unclear is how you responded--with repeated personal attacks, accusations of ulterior motives and "agendas", and a militant refusal to make any good faith attempt to understand anything I was trying to express. And now you pretend we weren't discussing an aside about Kurzweil's idiosyncratic nutritional views at all, as if you've forgotten how these comments were nested in the first place. You accuse me of wanting to "tar" Kurzweil, or "all things Singularity" (as if I'm blaspheming your god?) and then name-drop Grice to assure yourself of holding the intellectual high ground? Yeah, I know how these forum games are played as well as anyone. I just wanted you to know that I actually was trying to meaningfully communicate here.


You do make it quite clear you know how to play forum games...

I argued nowhere that Kurzweil is absolved of any putative wrongness, irrationality, or crackpottishness. You inserted a context that wasn't there and didn't do much to relate that context to anything else. In most books, fanatics are the ones that can't tell that not everyone is talking about their favorite topic. You could really use work on your maxim of relation.


My perception was that andreyf was concerned more with Kurzweil's possible crackpottishness than his sincerity. In that respect, you are the one who has failed to apply Grice's maxims (or, in less intellectually pretentious terms, "missed the point"). Continually insulting me doesn't change that.


Shit, sorry. I really tried to phrase it in a way to avoid trolling, but it looks like sometimes it's just in my blood.

I'm sure Kurzweil is a deeply honest and righteous man. I'm just conflicted at how the medical establishment (that is, all the doctors I've talked to) doesn't seem to advocate anywhere near the quantity of supplements he does; the difference is several orders of magnitude. Is Kurzweil really that far ahead of this character we imagine called "modern medical institutions", or is he just wrong (as Aristotle was about astrology)? It's hard to question whether or not you're wrong when you're making lots of money one way or another.


I wouldn't class your initial comment as a troll, and with a little bit of context, philwelch's commentary could have been pretty useful.

Kurzweil might not be so far ahead of "modern medical institutions", but it's easy to see why he differs from most doctors: he's doing math on a very different scale. Kurzweil's endgame is immortality via Singularity, and that greatly affects his expected utility. Spending two hours and ten dollars to gain one hour of expected lifespan is actually a net win for him, because he expects to eventually go effectively infinite on lifespan via non-nutritional means. Most doctors, and most patients, don't expect the same eventual Singularity that Kurzweil is betting on, and without that expected payout the supplements very likely seem ineffective in both time and cost. Most doctors seem hesitant to recommend courses of action that don't introduce significant qualitative (e.g., diabetes vs. no diabetes) or quantitative (orders of magnitude) returns.
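As a minimal back-of-the-envelope sketch of that expected-utility argument (all numbers are made-up illustrative assumptions, not anything Kurzweil has published, and the dollar cost is ignored for simplicity):

    def net_value(p_hour_is_decisive, payoff_if_decisive, hours_spent=2.0):
        # Net value, in "ordinary hours", of spending `hours_spent` on a regimen
        # that buys one extra hour of expected lifespan.
        # p_hour_is_decisive: chance that this marginal hour is what lets you
        #   reach a radical life-extension breakthrough you'd otherwise just miss.
        # payoff_if_decisive: value of the open-ended lifespan that follows,
        #   expressed in ordinary hours (Kurzweil effectively treats this as unbounded).
        ordinary_value = 1.0  # the extra hour is worth one hour if nothing special happens
        expected_gain = (p_hour_is_decisive * payoff_if_decisive
                         + (1 - p_hour_is_decisive) * ordinary_value)
        return expected_gain - hours_spent

    # Assigning essentially zero probability to a Singularity-style payoff:
    print(net_value(p_hour_is_decisive=0.0, payoff_if_decisive=0.0))    # -1.0: a losing trade
    # Treating the payoff as effectively unbounded, as Kurzweil does:
    print(net_value(p_hour_is_decisive=1e-6, payoff_if_decisive=1e9))   # ~999.0: clearly worth it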

(There's also the bit of cognitive dissonance where doctors are fine with heart transplants, but less fine with things that "artificially" extend lifespan or otherwise mess with our "natural" life.)


The disconnect is that a lot of people do not find the Singularity an appealing concept. Of his list Yudkowsky seems the only sane one to me.

"Primarily concerned with the Singularity as a potential human-extinction event, Yudkowsky has dedicated his work to advocacy and developing strategies towards creating survivable Singularities."

My pure speculation is that many people feel a very understandable sense of unease, but it gets expressed as doubting the possibility of the singularity future. As opposed to accepting it as possible but advocating that we try to avoid it, or at the very least approach it with extreme caution.

Kurzweil, on the other hand, comes across to me as very Pollyanna-ish. He seems to spend very little time considering what could go wrong with a Singularity, and much more time eagerly anticipating our impending god-like future.


The majority of people, who, when told of the singularity, automatically dismiss it offhand only because it has that Hollywood "scifi ridiculous" sound to it, seem a lot less sane to me than anyone on that list.


Most people have spent their lifetime watching people completely fail at predicting even the immediate future. Dismissing predictions of the singularity as scifi bullshit isn't a lack of sanity, it's basic extrapolation.


So these people think they are better predictors of the future than all the people who failed, even though they probably never made a serious prediction themselves? They believe something is never going to happen just because it hasn't happened yet? Sounds highly irrational to me.


There is a MASSIVE difference between "you're wrong" and "I'm right". Calling the singularity bullshit in a casual conversation is an example of the former.


That seems like a poor measure of 'sanity' to me. Dismissing the possibility of something because it sounds ridiculous doesn't make you crazy, just intellectually lazy at worst.

Actively campaigning for a future where humans lose their humanity without the least thought as to the potential dangers? Imprudent at best, crazy at worst.


It's definitely intellectually lazy. But dismissing something because it was also made into a bunch of films? Utterly failing to think for yourself and just reacting via knee-jerk because it's past an arbitrary mental line of what's 'acceptable' to consider? Certainly falls under 'crazy' (i.e. high irrationality), just not what most people think of when they hear 'crazy'.

And I certainly wouldn't characterize singularitarians as advocating a human-less future without considering potential dangers. Anyone associated with the Singularity Institute for one will at least claim to consider potential dangers.


I agree with your sentiments, except that I'd invert your sanity bitmap.

I'd place John von Neumann as having been extraordinarily sane, and one of the most productive minds in history. I'd put Eliezer as far crazier and more potentially destructive than Kurzweil — there's not much distance between "creating survivable Singularities" and Butlerian Jihad. At least Kurzweil did productive things before he decided to live forever and tell us all about his magnificent future. Yudkowsky's brother died, and then he decided that nobody should have to die again.


A decent portion of the names mentioned in that post are signed up for cryonics: Kurzweil, Hanson, Minsky, and Yudkowsky. It makes sense in hindsight. These people put a very high upper bound on what future technology can accomplish.

Sources:

Kurzweil and Minsky: http://en.wikipedia.org/wiki/Alcor_Life_Extension_Foundation...

Minsky: http://www.alcor.org/AboutAlcor/meetsciadvboard.html#minsky

Hanson: http://www.overcomingbias.com/2008/12/we-agree-get-froze.htm...

Yudkowsky: http://lesswrong.com/lw/wq/you_only_live_twice/


This is a good list. It would be interesting to see a similar list of Singularity detractors. I know Jaron Lanier is one. His claim is that software in general is crap and won't be up to the task, even if the hardware is capable.

Kurzweil does not say much about software. He feels we will get everything we need by reverse engineering real brains. I think this is possible, but it's certainly not obviously true, and it's not a simple extrapolation. We could be sitting there in 2040 with gobs of computer power, and detailed maps of the brain, and just not understand how it all works. Look at the human genome today.

I saw Kurzweil speak in 2008. I thought he was compelling but certainly leaned heavily on hype about the exponential which I don't think is entirely justified: http://www.kmeme.com/2010/07/singularity-is-always-steep.htm...


I agree with your post in general. However, the linked article misses the key assumption behind the singularity: computing power is 'only' doubling annually because of the limits of human intelligence. Once machines reach human intelligence, they will take less and less time to improve themselves.

If H1, the human-equivalent generation of machines, takes one year to create H2, which doubles H1's speed and computing power, then H2 would take only 0.5 years to create H3, which would take 0.25 years to create H4, and so forth. The series sums to two: the whole "big bang" would take only twice as long as a single doubling cycle (assuming that the delay imposed by physical limits is negligible). IF this actually holds, then progress once H1 is created will be far faster than the simple exponential illustrated in the article.
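A toy calculation of that "sums to two" claim, under the idealized assumption that each generation doubles in speed and therefore halves its successor's development time:

    # Toy model: H1 takes 1 year to build H2; every generation is twice as fast
    # as its predecessor, so each successive build takes half as long.
    # The total time is a geometric series that converges to 2 years.
    total_years = 0.0
    build_time = 1.0
    for generation in range(50):  # 50 generations is plenty to see convergence
        total_years += build_time
        build_time /= 2.0
    print(total_years)  # ~2.0 (the infinite series sums to exactly 2)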


I agree. I did not depict The Singularity itself in my graphs, because Kurzweil generally does not either. He instead shows projected exponentials as evidence for The Singularity happening: http://www.kurzweilai.net/the-law-of-accelerating-returns

My first point is really just that the exponential is not hyperbolic. It's tempting to see an exponential graph and think the value is rising against some invisible wall, as if it were going to infinity. But of course it is not.

My second point was about which is "more real": the linear-scale exponential shooting skyward, or the log-scale exponential rising incrementally? In the case of computer capacity I claim the incremental one is real; it reflects how we experience the increases.

None of this really speaks to whether the Singularity will happen or not. I'm just saying if it happens it won't be because of some upward surge in computing capacity just prior to The Singularity. Because no such surge is predicted.
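To make the exponential-versus-hyperbolic distinction concrete, here's a toy comparison with arbitrary illustrative parameters: an exponential grows without ever hitting a wall, while a genuinely hyperbolic curve blows up at a finite time.

    import math

    # Exponential growth vs. hyperbolic growth 1 / (T - t).
    # Only the hyperbolic curve actually diverges at a finite time T, which is
    # what a mathematical "singularity" would look like; the exponential just
    # keeps doubling on a fixed schedule.
    T = 10.0  # arbitrary blow-up time for the hyperbolic curve
    for t in (0.0, 5.0, 9.0, 9.9, 9.99):
        print(f"t={t:5.2f}  exp(t)={math.exp(t):12.2f}  1/(T-t)={1.0 / (T - t):10.2f}")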


I am partial to von Neumann's definition, not the rapture of the nerds. A reasonable definition of a singularity is a (technological) change the effects of which nearly no one can predict. Examples: agriculture, the industrial revolution, the automobile, the computer. The Luddites predicted massive unemployment, not the incredible abundance we have today. The telephone was seen as a broadcast medium like radio, not a device that removes distance from personal communication. The computer, we all know what that did.

We don't even know what intelligence really is, so we can't predict what a superintelligence is or will do. Will we become extinct? I seriously doubt it, just as most of us don't want our fellow creatures to become extinct (except maybe rats and mosquitoes). Will they become our curators, or will we have a symbiotic relationship with incredible consequences?

One consequence I do see is a huge augmentation of human intelligence, with appliances that grow connections directly into the brain. Imagine instant face and name recognition (my particular sub-optimal capability) or mathematical proof reasoning. Not that we'd think way faster, but that everything we want would be at our fingertips. And built-in virtual reality.

I think we are near the peak of our consumption of material goods. So the optimist in me sees a future free from meaningless work, where even the average Joe will have an IQ of 200 or even 1000.


I'd like to address some of your statements if you don't mind, since I wouldn't want them spreading and used in place of actual thought...

First, let's do away with this "Rapture of the Nerds" business. It'd be funny if the comparison were at all valid: http://www.acceleratingfuture.com/steven/?p=21

Your first paragraph hints that you're more partial to Hanson's idea of having already undergone multiple Singularities that tremendously boost the economy.

> We dont even know what intelligence really is, so we cant predict what super intelligence is or will do.

Paraphrasing Yudkowsky: an intelligent system is one that implements its goals. If a superintelligent entity has goals even remotely recognizable to us, we can predict what it will do quite easily. Even if you have no idea how chess works, just by knowing it's a game you can predict that both sides will try to win.

I'm with you on brain-machine interfaces that augment our intelligence, though I would definitely say I could think faster, if not better. Even just an internal calculator would help tremendously; somewhere on the order of trillions of synapses fire when you try to multiply two three-digit numbers in your head.

> So the optimist in me sees a future free from meaningless work, where even the average Joe will have an IO of 200 or even 1000.

What do these figures even mean? One of the goals of a Yudkowsky-defined Singularity is that the AI (hopefully quickly) gets around to helping us augment our own intelligence, moving us along a path where our future selves are to our present selves something like what we presently are to ants. Not just everyone becoming Einstein.


The singularity (sorry, Singularity) has always struck me as religion in science-fiction garb: an end time is coming, eternal life is possible if you name a cryonics firm as the beneficiary of your life insurance policy, and we will soon enter either an unimaginable paradise or an unlivable hell, where either humans will be totally extinct and replaced with robots, or we will be immortal cyborgs, or maybe we'll "download our consciousness into computers", or something like that. There's remarkably little content, and no real certainty, behind the hype.

It's perfectly reasonable to say that really interesting things are bound to happen when we fully understand the brain enough to build one ourselves. It's conjectural to say it'll be possible to build intelligences better than humans at the task of building intelligences (to significant degrees of recursion), and once you make that conjecture some sort of singularity is possible. (Not inevitable--there may indeed be unsurpassable barriers to any kind of intelligence. I'm talking about logical barriers like Gödel's incompleteness theorem, or the possibility that P != NP.) But there's no gain in "futurists" making empty predictions. I'd much rather hear from the people who actually try to build intelligences what they think is feasible, or the people who actually study neurology.


For a less explicitly "pro-Singularity" take that still studies it seriously, the main North American AI organization (AAAI) recently convened a panel to look into which potential major societal effects of advancing AI should be taken seriously by AI researchers, and what (if anything) should be done with regards to those possibilities: http://www.aaai.org/Organization/presidential-panel.php



