Hacker News new | past | comments | ask | show | jobs | submit login

> how do you do this, fundamentally? How would

There are still smaller pieces you can MVP to a smaller audience before launching it to the world.

> Google have manually mocked up their early product? How would

Crawl an intentional community (remember webrings?) or other small directed subset of the web and see if you're able to get better search results using terms you know exist in the corpus, rather than all of the Internet.
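The "small corpus first" idea above can be sketched in a few lines: index a handful of pages from one community and answer term queries against just that set, checking whether terms you know exist come back ranked sensibly. This is a minimal illustrative sketch, not how Google worked; the URLs and page texts are hypothetical placeholders.

```python
from collections import defaultdict

def build_index(pages):
    """Map each term to {url: term_count} across a tiny crawled corpus."""
    index = defaultdict(dict)
    for url, text in pages.items():
        for term in text.lower().split():
            index[term][url] = index[term].get(url, 0) + 1
    return index

def search(index, query):
    """Rank pages by the total count of matched query terms."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for url, count in index.get(term, {}).items():
            scores[url] += count
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical pages from a single webring-style community.
pages = {
    "ring.example/knitting": "knitting patterns wool patterns",
    "ring.example/garden": "garden tools compost",
}
index = build_index(pages)
print(search(index, "patterns"))  # → ['ring.example/knitting']
```

Because you chose the corpus, you already know which page each query *should* return, so you can evaluate ranking quality without crawling the whole web.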

> Facebook?

They had Myspace as an example, so the idea wasn't exactly unproven.

> Github?

Kernel developers were already using the software successfully; all (for large values of "all") GitHub did was bring it to a wider market with a better UI.

> Tesla for that matter?

People don't get this, but Tesla's biggest achievement to date isn't the cars themselves but the factory they're built in. There's no way to MVP an entire factory, but building a car in a bespoke, pre-assembly fashion is totally possible and totally doesn't scale.

If you're asking if electric cars were known to work, the first one came out in 1832. If you're asking about product-market fit, they keep selling cars they haven't made yet, just to gauge demand. Aka where's my Cybertruck!?




> just to gauge demand

The hundreds of millions of USD in interest-free loans seemed more important than anything else.


> > Google have manually mocked up their early product? How would

> Crawl an intentional community (remember webrings?) or other small directed subset of the web and see if you're able to get better search results using terms you know exist in the corpus, rather than all of the Internet.

But that isn't a mock up, it's the real thing but on a smaller dataset. If you're going to do the real thing anyway, why not run it on all the data you can?

After all, the limiting factor for release is the engine, not the dataset. If you're going to write the full engine anyway, there's nothing to be saved by limiting it to a subset of the data.


> why not run it on all the data you can

Because more data requires more cleaning and standardization (with more edge cases), and obtaining and processing it requires operating at a much larger scale.


Most startups have a red-ocean indicator in the space to point at when telling people about their problem. Most startups fail.


Are those remotely correlated, though?



