
I've read all of the Stanford CS183 notes that Blake Masters took, but not the book that just came out. Have any of you read both? Did you think the book was worth it if you already have the notes?


I read both. The class notes were more in-depth and had entertaining anecdotes, but I found Zero To One to be a clearer and more concise explanation of Peter Thiel's thesis.

The diagrams are MUCH better in Zero To One - at times the class notes had slightly unwieldy prose to convey the same concepts without diagrams.

All that said, I preferred the class notes. Reading Zero To One, I personally felt I didn't get much that wasn't in the original notes. I find it a little surprising that it took a whole 18 months to release what is essentially a rehash of the notes.


> Have any of you read both? Did you think the book was worth it if you already have the notes?

Yes and yes: http://jseliger.wordpress.com/2014/09/24/zero-to-one-peter-t...


The class notes, in my opinion, are better. You certainly don't need to read both.


Why did you find them better?


I liked the class notes better. Zero to One seems to target people who want the more important ideas in a denser format. Reading the book was a nice refresher two years after the notes, but not much in there is new.


I read both. If you enjoyed the notes a lot, reading the book might make sense, but don't expect tons of new stuff.

It is mostly written with a somewhat better structure and targets a more general audience than the class notes.


Link to the notes?


[deleted]


Here's how hyperlinks are actually intended to work:

http://blakemasters.com/peter-thiels-cs183-startup


All this advice looks excellent for running a startup, succeeding at it, and pitching to investors. However, Y Combinator's partners in particular have a habit of advertising "we don't care as much about your idea as about your team because the idea usually gets overhauled drastically anyway". This advice seems a bit contrary to that; at best, it's proof that you can go through the thought exercise of imagining what all of this looks like. But being able to plan out a killer business with one idea doesn't necessarily translate to being able to do it with another.

Has YC changed its standards about people vs. ideas now that they can afford to be so selective? Or is planning a business a skill that's generic enough that proof of doing it in one instance is good enough to prove that a person can do it generally?


I think it's more about having an idea that makes sense and how you've been executing on it. Basically, when they say "the team matters the most", it's less about "Tell me how good you are" but more about "Tell me what you've been working on and how you tackled various problems, and then we'll decide how good you are".

I.e. If the idea makes no sense at all and you aren't talking to users or building a prototype to test it, it doesn't really matter how smart or amazing you say you are.


> However, Y Combinator's partners in particular have a habit of advertising "we don't care as much about your idea as about your team because the idea usually gets overhauled drastically anyway".

This is not true for any Y Combinator startup I have worked at. The idea has always been key, but the team is also very important.


That certainly used to be an attitude Y Combinator had (http://old.ycombinator.com/noidea.html), although it sounds like that didn't work out. Paul Graham's essays also perennially mention that the people are far more important than the ideas ("Another sign of how little the initial idea is worth is the number of startups that change their plan en route.", "What matters is not ideas, but the people who have them. Good people can fix bad ideas, but good ideas can't save bad people"), but it might be that I've been projecting too much of PG's views onto YC as a whole.

It makes a lot of sense to me that the idea would absolutely be key, which is why I've always been a bit suspect of the "people >> ideas" attitude. Not to say that the team isn't the more important piece, but I'm suspicious of the idea that a great team will usually gravitate towards a good business, left to their own devices.


I've got a Muse, and it works wonderfully (at least, compared to NeuroSky sets). The most frustrating bit is that there's no API -- meditation is good and all, but I'd really like a sensor of that quality that one could hack with.


I have NeuroSky headphones... very hard to use, and the apps crash all the time, if they pair at all. The thing is, while I understand the benefit, it isn't likely I'll spend $300 on something like this.


I also have a NeuroSky headset. It's a nice toy but not really useful.



I hope more research like this catches on in the mainstream to help dismantle the idea of a fixed "IQ" altogether. While it's convenient to score people on intelligence, and nice to believe there's a simple scalar ranking, there's a growing body of research showing that it's just not the case.

Even without the research, quantifying what counts as "general aptitude" and what doesn't is hard to do, and measuring whatever we decide to quantify is even harder. Here's to hoping that a more nuanced view of intelligences takes off :)


>While it's convenient to score people on intelligence, and nice to believe there's a simple scalar ranking, there's a growing body of research showing that it's just not the case.

Did anybody ever really think it was? Or is this just a strawman put up by people who are opposed to IQ rankings on either personal (waah, I didn't score that high) or political (waah, group X doesn't score very high on average) grounds?

The idea of general intelligence makes about as much sense as the idea of general physical fitness. That is, quite a bit. It's useless to ask whether [famous basketballer] is more or less fit than [famous footballer], but we can usefully answer the question of whether [famous footballer] is more or less fit than [random dude].

Similarly, if we're an organisation that wants to select people for physical fitness (like, say, the US marines) we could make up a test and a score (http://en.wikipedia.org/wiki/United_States_Marine_Corps_Phys...) which in some way quantify an individual's physical fitness with a numerical score. The tests used are somewhat arbitrary and the precise ordering you get on this scale is different to the ordering you'd get if you used a different though equally sensible set of tests, but this doesn't change the fact that "physical fitness" is a real thing and that this test is a reasonable way of quantifying it. All reasonable measures of general physical fitness will give highly correlated results.


1. I thought IQ/g was always known to be an oversimplification (even the guy that invented it noted this)?

2. As this article points out, almost half of the performance of the various subsystems does seem to have a common source. So getting rid of IQ/g completely wouldn't make sense.

Now, pointing out which other factors are most relevant for specific tasks would be useful (especially for things that require reading emotions, since that's least correlated). Finding which factors are least correlated is useful because it tells you which abilities you need to check tasks for separately.


This is a bit of a tangent, but does "bringing in 'real' programmers" to clean up the mess really work? In my 1-2 years of experience (which isn't much), it seems like time onboarding into a big system (like Twitter) can take months, even for the all-stars. Is "we'll bring in the cavalry when shit hits the fan" a workable mindset for a startup?

(And this isn't trying to be glib, I'm genuinely curious -- I only have experience at a huge place that has tons of process for transfers and such).


There seems to be a connotation that asking a question "just to see if the candidate has seen the problem before" is bad, for some reason (apologies if this connotation wasn't there).

This is actually a very important thing to look for in an interview. While it might not be "fair" in the sense that not all smart people can effectively answer it, it's absolutely effective to sort out who has a good grasp of algorithms (and memoization in particular). There are too many smart people out there to interview just for that; you've got to look for existing knowledge too.
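(The specific question isn't quoted in this thread, so just as a rough sketch of the kind of memoization I mean -- using the classic Fibonacci example, in Python:)

    from functools import lru_cache

    # Naive recursion: recomputes the same subproblems exponentially many times.
    def fib_naive(n):
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    # Memoized version: lru_cache remembers each result, so every subproblem
    # is solved exactly once and the recursion only does a linear amount of work.
    @lru_cache(maxsize=None)
    def fib_memo(n):
        if n < 2:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)

    print(fib_naive(35))  # 9227465, but takes seconds because of the redundant calls
    print(fib_memo(200))  # effectively instant, each value computed once

Whether a candidate reaches for the cache immediately, or only after noticing the repeated subproblems, tells you a lot about how familiar the idea already is to them.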


There are definitely times in your hiring process when you want to bias towards people with existing knowledge. The problem with this particular question is that it doesn't imply any in-depth knowledge of dynamic programming. It is literally something you can see in a survey of algorithms course.

So by asking this question, I suspect you are biasing towards people who have recently taken a survey of algorithms course. I'm having a hard time envisioning the hiring situation where I'd want that bias.


While the link focuses on Ruby, I think there's a more general reason (which also explains why more languages don't support a lisp-like macro): cooperative development.

When you're writing code in a vacuum, you can fill it with all the metaprogramming you want. You probably have some preferred style, and your metaprogramming will reflect that. In something like lisp, this gets taken to the extreme -- code can look very different from what traditional lisp looks like when there are macros involved.

While this is great for developing solo, or with a small group, it becomes too much to handle as more people come on board. Every new contributor needs to learn your particular style, how your macros work, and then how to apply them effectively. It's much easier when new functionality and expressiveness is added through a common format (like adding a new method).

As a lisp fan, I find those methods clumsy. But if you want to do something that's so big you need more contributors, it's worth the tradeoff.


Man, I hope it's still written in CL...

