A First Course in Design and Analysis of Experiments (2010) [pdf] (umn.edu)
81 points by mindcrime on Sept 28, 2018 | 8 comments



When I find an interesting PDF online and want to make two hours disappear, I just add "filetype:pdf site:" plus the domain and throw it in the Google search bar.

filetype:pdf site:users.stat.umn.edu/~gary/


>"One question of interest is whether the times are the same on average for the two workplaces. Formally, we test the null hypothesis that the average runstitching time for the standard workplace is the same as the average runstitching time for the ergonomic workplace."

Who would this be of interest to? I would never expect two workplaces to have exactly the same "runstitching time" (or anything else).

Also, this is misstated: you would be testing whether the two datasets are samples from distributions with the same average. I.e., the actual measured averages are not expected to be the same.
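To make that concrete, here's a minimal sketch in Python with scipy (the data are made up, not from the book). The two samples are drawn from the same distribution, so their sample means still differ while the population means are equal by construction:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical runstitching times in minutes; both groups are drawn
    # from the same distribution, so the null hypothesis is true here
    # by construction.
    standard  = rng.normal(loc=30.0, scale=3.0, size=25)
    ergonomic = rng.normal(loc=30.0, scale=3.0, size=25)

    # The measured sample means will almost never be exactly equal...
    print(standard.mean(), ergonomic.mean())

    # ...so the test asks whether the underlying population means
    # differ, given the spread in the data (Welch's two-sample t-test).
    t, p = stats.ttest_ind(standard, ergonomic, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")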


The "workplaces" could be just a large group from the same workplace was split into two groups, one for control and one to see the effects of ergonomics on their productivity. If that's the case, the variance in initial productivity would be less than if the two groups were taken from different factories.

Whether or not that's true, it reminds me of the experiments done near Hawthorne, Illinois around the time of the Great Depression to improve worker productivity by changing their environment. The workers' output improved almost regardless of the environmental changes. The conclusion was that the workers' output improved because someone was paying attention to them. Henry Landsberger analyzed the experiments in the 1950s and coined the term "the Hawthorne Effect." (Edit for grammar, spelling, and coffee)


>"workers output improved because someone was paying attention to them"

How much did it improve? All this test tells you is that there was some difference; it could be minuscule.


One of the desired outputs of an experiment analysis is an estimate of the size of the treatment effect, typically in the form of a confidence interval. Studies which only report statistical significance (and there are many of them) are of limited utility since you can have a tiny effect that is statistically significant, just as you can have a large effect that is not statistically significant!
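For example, here's a sketch of reporting an effect size with a confidence interval rather than a bare p-value (the data and the helper name are invented for illustration; the interval uses the Welch approximation):

    import numpy as np
    from scipy import stats

    def mean_diff_ci(a, b, alpha=0.05):
        # Welch confidence interval for the difference in means (a - b).
        a, b = np.asarray(a, float), np.asarray(b, float)
        diff = a.mean() - b.mean()
        va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
        se = np.sqrt(va + vb)
        # Welch-Satterthwaite degrees of freedom
        df = se**4 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
        half = stats.t.ppf(1 - alpha / 2, df) * se
        return diff - half, diff + half

    ergonomic = [28.1, 29.4, 27.8, 30.2, 28.9, 29.7]   # made-up times
    standard  = [30.5, 31.2, 29.8, 32.0, 30.9, 31.4]

    lo, hi = mean_diff_ci(ergonomic, standard)
    print(f"estimated effect: {lo:.2f} to {hi:.2f} minutes")

A tiny but significant effect and a large but nonsignificant one look completely different once you report the interval.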


In the context of experiment design, you do this because you want to know whether any apparent difference between the groups is plausibly an artifact of the random assignment of units to groups. You wouldn’t expect two groups to have identical behavior, but the goal of an experiment is to determine whether the difference is because of the treatment.
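This is exactly what a randomization (permutation) test formalizes. A rough sketch with invented productivity numbers: shuffle the group labels many times and see how often random assignment alone produces a gap as large as the one observed.

    import numpy as np

    rng = np.random.default_rng(1)

    control   = np.array([52.0, 48, 50, 55, 47, 51, 49, 53])  # made up
    treatment = np.array([54.0, 51, 56, 50, 57, 53, 55, 52])

    observed = treatment.mean() - control.mean()
    pooled = np.concatenate([control, treatment])
    n_t = len(treatment)

    # Re-randomize the assignment 10,000 times under the null that the
    # treatment does nothing, and count how often chance alone matches
    # or beats the observed difference.
    diffs = np.empty(10_000)
    for i in range(diffs.size):
        rng.shuffle(pooled)
        diffs[i] = pooled[:n_t].mean() - pooled[n_t:].mean()

    p = np.mean(np.abs(diffs) >= abs(observed))
    print(f"observed diff = {observed:.2f}, permutation p = {p:.3f}")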


>"the goal of an experiment is to determine whether the difference is because of the treatment."

I doubt this is really the goal of an experiment like this. The goal would actually be to figure out whether changing their procedures (or whatever else) would make them more money in the future than the change would cost.


In this particular example, the "treatment" is implementation of ergonomic practices, and the "response" is worker productivity.

If we divide a workplace into two segments, there will almost certainly be some difference in worker productivity between the segments, having nothing to do with the ergonomic practices and everything to do with the inherent variance in individual productivity within each group.

If the difference in productivity is large enough, you start thinking the ergonomic practices might have something to do with it. Statistics makes these principles more precise through concepts like statistical power, significance, and confidence intervals. Proper analysis of such an experiment would allow a company to compare the cost of the ergonomic processes with the benefit of the increased productivity attributable to those processes. This in turn would indeed allow them to make more money!
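As a rough illustration of the power side of that (the 0.5 effect size and 80% power are arbitrary choices for the sketch, not from the book), statsmodels can tell you how many workers per group you'd need to reliably detect a given effect:

    from statsmodels.stats.power import TTestIndPower

    # Smallest effect worth paying for: say, 0.5 pooled standard
    # deviations of productivity (a made-up business threshold).
    n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"need ~{n:.0f} workers per group")

With a number like that in hand, the cost of running the experiment itself can be weighed against the value of the decision it informs.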



