
I believe he was quoting Trump, not Obama.


Seems tough to bet your life on that bit of logic.


At this point I've read two stories that had little to nothing to contribute beyond the two tweets Elon Musk wrote. They don't even link to the actual tweets, so I can't read whatever other discussion happened or click the link in his second tweet.

These types of stories are extremely frustrating; they're basically TMZ for tech. I feel like very few publications are willing to publish real stories any more, rather than the minimum work that will get clicks. I used to think the BBC was one of them.


You're saying that Asian countries have fewer accidents because people run red lights? Do you have a source on any of this?


I believe eric is referring to this kind of mentality: http://knowledge.allianz.com/mobility/transportation_safety/...


I'm familiar with the idea, but if people run a red light 100% of the time, that becomes the new norm and therefore what is safe and expected. Don't forget that we're driving around in metal boxes powered by explosions. That didn't always seem safe (and certainly wouldn't if we suddenly introduced it to a society hundreds of years ago), but now it does.


One study with a grand total of 50 subjects and we're already saying that readers absorb less on Kindles? While interesting, the headline is clickbait, and it's not a claim we can make without further study.


(Note: They don't link to the study in the article, so naturally I cannot comment on the soundness of the study in question.)

(EDIT: After writing this comment, the link was changed from the Guardian to the NYT, which provides more information, though the study has still not been published.)

Studies (especially psychological ones) with human subjects are very difficult and expensive to conduct, which is why sample sizes are often small. 50 is by no means an unusual size for an experiment of this sort.

An alternative is to use observational data, but it's very hard to differentiate between useful observational data and garbage observational data. Not only does it introduce a whole host of problems, but those problems are harder to parametrize. So you could easily create a study that boasts a large sample size, with a respectable p-value[0], but have no way of knowing which confounding variables were introduced during the data collection process. A toy simulation of that failure mode follows below.
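
To make the confounding point concrete, here's a toy sketch (entirely made-up data, nothing to do with any real study): wealth drives both e-reader ownership and test scores, so a naive comparison on a big sample produces a tiny p-value even though the device has zero real effect.

    # Hypothetical sketch: a confounder (wealth) makes n=10,000 look damning
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 10_000
    wealth = rng.normal(0, 1, n)
    uses_ereader = wealth + rng.normal(0, 1, n) > 0  # wealthier people buy e-readers
    score = 0.3 * wealth + rng.normal(0, 1, n)       # wealth drives scores, not the device

    t, p = stats.ttest_ind(score[uses_ereader], score[~uses_ereader])
    print(f"p = {p:.2e}")  # "significant", yet the device does nothing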

A third option is self-reported data, which comes with an even bigger asterisk after it. For something like this, I'd much rather trust a controlled study of 50 than a self-reported survey of 300 (at that point you might as well post it as an 'Ask HN' and judge based on the comments!).

By contrast, while controlled experiments on human subjects are by no means unbiased or immune to confounding of variables during data collection, it's almost always easier either to limit these in a controlled setting or at least to parametrize them after they happen.

So in the end, a small controlled study usually ends up being the best feasible option (if not the best theoretically possible one), short of massively increasing funding to such studies.

[0] Which is usually the wrong way to look at studies anyway, but that's a separate topic of discussion


Do you take issue with the statistics of the study? Or do you just feel 50 seems like a small number in your gut?


Something did ping my radar, although it's hard to say because it's not published yet. What the news article says is:

But instead, the performance was largely similar, except when it came to the timing of events in the story. "The Kindle readers performed significantly worse on the plot reconstruction measure, ie, when they were asked to place 14 events in the correct order."

What I would like to know is: how many other performance measures did they test? How "significant" is "significantly worse"? If, say, they tested for 100 performance measures (unlikely, but I'm using a large number on purpose), then random chance means that there are likely to be some measures that are "significantly worse." If, on the other hand, they only tested 3 performance measures, then it's less likely to be random chance.

Basically, if you run an experiment and you test for a large number of things, you can't say much about the outliers. With large enough numbers, there are bound to be outliers. However, after you run such experiments, and you see those outliers, you can run more experiments to test if that was random chance, or if there really is some correlation there.
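
Here's a toy simulation of that effect (made-up numbers, nothing to do with the actual study): 100 measures, no real difference anywhere, and you still expect a handful of "significant" results.

    # 100 performance measures with no true effect; at alpha = 0.05,
    # roughly 5 will look "significantly worse" by chance alone
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    false_positives = 0
    for _ in range(100):
        kindle = rng.normal(0, 1, 25)  # 25 subjects per group, identical distributions
        paper = rng.normal(0, 1, 25)
        _, p = stats.ttest_ind(kindle, paper)
        false_positives += p < 0.05
    print(false_positives, "of 100 null measures came out 'significant'")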


xkcd has a comic explaining the same thing.

http://xkcd.com/882/


While the xkcd comic has a lot of truth to it, it's mainly about many different individual experiments (as well as some poorly done ones). When running large sets of correlations, standard operating procedure is to use one of several correction techniques (Bonferroni, false-discovery-rate control, etc.) to counteract this effect.
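
For instance, the Bonferroni adjustment (the simplest of those techniques) just divides the significance threshold by the number of tests. A minimal sketch:

    # With m tests, require p < alpha/m instead of p < alpha, which keeps
    # the chance of *any* false positive across the whole family near alpha
    def bonferroni_significant(p_values, alpha=0.05):
        m = len(p_values)
        return [p < alpha / m for p in p_values]

    # Three of these pass at alpha = 0.05 naively; only one survives correction
    print(bonferroni_significant([0.001, 0.02, 0.04, 0.30]))
    # -> [True, False, False, False]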


Each performance measure is a different individual experiment.


The Guardian article linked actually doesn't present the statistics of the study, which hasn't been published yet. Absent further information, critiquing the sample size sounds pretty reasonable to me.


You can't criticize the sample size without knowing the effect size.


Withholding evidence isn't a defense against criticism. If you won't TELL ME your effect size but you do tell me the sample size, I can certainly say, "I am skeptical of your conclusion because of your sample size."


Criticizing the stats of a study that hasn't been published, based only on science news reporting about it, is an exercise in madness.

Criticize the science news instead.


You should instead say, "I am skeptical of your conclusion, because I don't know your effect size."
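
To make that concrete, here's a rough power calculation (using statsmodels, with placeholder effect sizes): whether 50 subjects is "too small" depends entirely on the size of the effect you're trying to detect.

    # Subjects needed per group for 80% power at alpha = 0.05,
    # across conventionally "small", "medium", and "large" effects
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()
    for d in (0.2, 0.5, 0.8):  # Cohen's d
        n = power.solve_power(effect_size=d, alpha=0.05, power=0.8)
        print(f"d = {d}: ~{n:.0f} per group")
    # 25 per group (50 total) is roughly enough for d = 0.8, hopeless for d = 0.2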


FWIW, I've personally had the same experience. I realize that's not really proof of anything.


50 subjects and 1 short story


I upvoted you, but personally I prefer this link (they worked hard on the article; I believe they deserve the ad revenue):

http://www.newyorker.com/reporting/2014/05/19/140519fa_fact_...


I don't know why people are so opposed to newspapers showing advertisements.

How are they supposed to operate their organizations without funds?

Or do people seriously think government funding of papers is the way to go?

I happen to think that state-funded newspapers would be worse than papers funded by advertisers. At least with advertisers you have many different sources of funding. With the state you have just one boss, and you can't print stories that would piss him off.


I wish more places would adopt Ars Technica's model where you can pay to not see ads. I think ad antipathy is a long-term trend the industry should adjust to, but it would break the revenue model at some places; e.g., it costs more for an online-only New York Times subscription than for online+paper, because the business has been dominated by print advertisements for so long.


I wish this model would work, but I don't think it does in practice.

For one thing, it drives the value of the ads you do show way down to commodity prices. Think about it: no advertiser is going to spend big bucks on ads that are literally only shown to the cheapest and least engaged members of your audience.

It also misaligns the incentives. Now the ads function like the "nag screen" on old shareware apps. The site's revenue is tied to the ads being annoying enough that you'll want to get rid of them.


Good point: I think there could be a stable equilibrium on the other side, but the transition might be impossible.


I too don't mind them collecting ad revenue, but I really hate clicking "next" on multi-page stories. There is no page limit on a web page, and clicking through and reloading the entire page to get the next part is annoying...

http://www.newyorker.com/reporting/2014/05/19/140519fa_fact_...


To be fair, the parent comment linked to the printer version, not the "single page" version you have, which is probably the best link.

Some people actually do prefer pagination on long articles, by the way. I've seen A/B data.


That's not the case for the one in NYC: http://www.yelp.com/biz/general-assembly-manhattan


My sister took a back-end web dev class in NYC this past fall and had a great experience. She learned a ton that she has been able to apply to her position doing QA at a startup. Just another anecdote to throw in the mix.


Probably they want to appeal more to people who are slightly outside the bitcoin realm. Those people would be less likely to have Coinbase accounts already, and Coinbase would get some bonus user acquisition while they're at it.


Yeah, that's a good point. Although we don't make it very clear, you can connect your Google account and then opt to only get changes from your Basecamp account. But yes, you would always need to sign in using your Google account. Thanks a lot for the feedback.


Perhaps create some kind of call to action more like "Connect your Basecamp account": they end up creating a normal account, and then the first step after that is to connect their Basecamp account. This gives you a way to learn which web service people would connect first. But hey, enough ideas. It's all about execution. Good luck!


We're super excited to launch this and itching to get some feedback. Please let us know here or through email (founders@overseer.io) and we can set you up with a free account in exchange for helping us test everything out.

