Peter Thiel’s CS183: Startup - Class 19 Notes Essay—Stagnation or Singularity? (blakemasters.tumblr.com)
61 points by r4vik on June 15, 2012 | 9 comments



Hi - this is Omri (founder of genomecompiler). Our Kickstarter project is still awaiting Amazon Payments authorization and Kickstarter authorization for the project. See http://www.youtube.com/watch?v=BLhU1RGTHN4 and http://www.youtube.com/watch?v=F8qcDQaY8Mw for details.


Cool, they talked about our genomecompiler (http://genomecompiler.com) and the Kickstarter project for glowing plants that we're in the process of posting.

omri@genomecompiler.com


I know that de Grey has a rough outline for reaching his "Methuselarity", but does anyone know of a reference to other milestones needed to reach the singularity's state of radical abundance?

The notes indicate that they discussed the idea of milestones to the singularity, but I was wondering if anyone has sat down and really thought through how to get there.

Thanks!


Does anyone know the link to the Kickstarter project "that involves taking an oak tree and splicing firefly genes into it"?


(1) From a dialectic point of view, I think Kurzweil makes an impressively compelling argument that the rate of progress has been exponential, historically (e.g. using others' milestones).

So it's striking that he doesn't argue for an underlying mechanism, nor for whether the trend can continue - or even mention the question explicitly.

The mechanism seems to be similar to standing on the shoulders of giants - once an improved technology is developed, it can be used to search for further technologies. The search can then be faster, more efficient, can take place in new domains, or with greater accuracy - whatever the nature of the improvement.

But there are further assumptions: that there will be more to discover, and at a constant density. Why should the frequency of potential discoveries be constant, such that if you seek faster, you'll find faster? To be clear, it does seem to be that way... I'm just wondering if there's an argument as to why it's that way, and why it will continue to be that way as we keep searching. There's the assumption of mediocrity - that we are not at a privileged center of the universe - but is there a better argument? For example, why shouldn't it be that discoveries become exponentially rarer, so that we have to keep searching faster and faster just to maintain a linear rate of progress?

EDIT: e.g. consider primes, infinite but become less frequent as you go.
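A rough sketch of what I mean (my own illustration, plain trial division in Python; the exact counts don't matter, only the trend):

  # Count primes in successive windows of 1,000 integers. Primes never
  # run out, but each later window contains fewer of them (~ 1000/ln(n)).
  def is_prime(n):
      if n < 2:
          return False
      d = 2
      while d * d <= n:
          if n % d == 0:
              return False
          d += 1
      return True

  for start in (2, 10**3, 10**6):
      count = sum(is_prime(n) for n in range(start, start + 1000))
      print(f"primes in [{start}, {start + 999}]: {count}")
  # The count keeps falling as the windows move out: infinitely many
  # discoveries remain, but each new one takes more searching to find.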

(2) I also like Hofstadter's argument that it's not necessarily true that an intelligence is sufficient to understand itself - e.g. a giraffe can't understand itself. Of course, we can divide-and-conquer and create hierarchical understandings, such that we can understand one level on its own, by assuming the concepts below and ignoring the concepts above. But not all things can be neatly decomposed into modular hierarchies such that each level is easily understood - though we are biased towards seeing those that can, because that is all that we can see. In other words, perhaps we can one day duplicate a human mind... yet not understand it.

(3) There's a fascinating thought in these class notes: people assume stagnation, and don't like to generalize beyond extrapolating single variables. This is very pragmatic, because it's almost impossible to predict with any fidelity. But it results in a very interesting effect: people are very confident as they stride the same well-trod paths, only ever taking one step away from them, so they only find big wins transitively (by hill climbing). This means that just two or three steps off the beaten track there can be miraculous improvements... all you have to do is find them, though that might take an enormous number of attempts.
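To make that concrete, here's a toy sketch (my own, not from the notes) of single-step hill climbing stopping at a nearby small peak while a much bigger one sits only a few steps further on:

  # A tiny 1-D landscape: index 2 is a small local peak, index 6 a big one.
  heights = [0, 1, 2, 1, 0, 0, 9, 0]

  def climb(i):
      # keep taking single steps while an adjacent position is higher
      while True:
          neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(heights)]
          best = max(neighbours, key=lambda j: heights[j])
          if heights[best] <= heights[i]:
              return i
          i = best

  i = climb(0)
  print("hill climbing from 0 stops at index", i, "with height", heights[i])
  print("index 6, a few steps past the dip, has height", heights[6])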

EDIT (4) Re: exponential progress in software (cf. Moore's Law for hardware), there are dimensions of progress that seem to be exponential: the release rate of new software; the productivity gained from using others' modules (SOTSOG, esp. open source); "software is eating the world" as more problems are solved with software; software being used on more devices, in more places (e.g. mobile devices). One could argue this is cherry-picking, and that none of these are equivalent to Moore's Law - but exponential improvement only requires some kind of improvement that can itself be built upon. NOTE: I don't have figures, so I don't know for sure whether the above are actually exponential, though "eating the world" seems to be. Also found this: http://multiverseaccordingtoben.blogspot.com.au/2011/06/is-s...


6ren: Perhaps the reason that discoveries don't seem to become exponentially rarer is that, in an infinite universe, all possible probabilistic outcomes must occur. Thus, if there is a non-zero, finite chance of something being discovered, it can and will be discovered in an infinite universe.

Each new discovery in such a universe will produce non-zero speculative probabilities of something new to discover -- resulting in an infinite stream of problems and solutions that present new problems.

It doesn't matter if the probabilities are speculative or eventually discovered to be real: when speculations are "proven" false, the proof of falsehood is itself a new discovery that also provokes further speculation.

I guess this would mean that technological progress is inevitable given an infinite universe where some technological progress has already occurred and where speculation or "imagination" or unguided thought exist.


>> e.g. consider primes, infinite but become less frequent as you go.

Here is an alternative model:

Consider you have a container of numbers, starting with a single number inside: 1. Each turn, you can add together any number of pairs of numbers inside it (with replacement, so you can add two 1's) to discover more numbers.

On the first turn you can add two 1's to discover 2. The next turn (with 1, 2) you can discover 3 and 4; the turn after that (with 1, 2, 3, 4) you can discover 5, 6, 7, 8. The number of numbers you can discover in any single turn keeps increasing as your number of discoveries grows.
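A quick simulation of this (my own sketch of the model above):

  # Start with {1}; each turn, add every pairwise sum (with replacement)
  # that isn't already known, and count the new discoveries.
  from itertools import combinations_with_replacement

  known = {1}
  for turn in range(1, 7):
      sums = {a + b for a, b in combinations_with_replacement(known, 2)}
      new = sums - known
      known |= new
      print("turn", turn, ":", len(new), "new numbers discovered")
  # 1, 2, 4, 8, 16, 32 ... the discovery rate grows with the stock of
  # existing discoveries, rather than thinning out.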

I am thinking the density increases because with more discoveries you can form more combinations of existing knowledge.


Thinking further, primes become less frequent because they are defined that way - once a number is identified as prime, it wipes out all multiples of it.

Like the integers, the number of possible combinations of things is infinite. The issue is what proportion of them are "improvements". Perhaps an approach is to try to define what makes something an "improvement": if you brute search many combinations, how do you determine whether a combination is an "improvement" (and in which way)? Is it independent of how many we've looked at before (like primes)?


Can anyone link me to the Kickstarter project they mentioned?



