Peeking at your data and calculating the sample size you need for a test are separate statistical issues. I agree that peeking messes up significance levels :).
The point I was trying to make was that you can decide to run a test with a very small sample (e.g. n = 5), and it will still have the Type I error rate you set when you chose a significance level of .05.
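To illustrate (just a quick sketch of my own, using a one-sample t-test as an example): if you simulate data under the null and test at alpha = .05, the false rejection rate comes out around 5% even with n = 5.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 5, 100_000

false_rejections = 0
for _ in range(trials):
    # Null hypothesis is true here: the true mean really is 0
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < alpha:
        false_rejections += 1

print(false_rejections / trials)  # ~0.05, despite the tiny n
```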
> You need to make sure you have enough samples in order to know if you rejected the null hypothesis by chance.
You do this when you decide on the significance level (e.g. .05). The critical value needed to reject the null, given a significance level, is a function of sample size.
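For example (again just a sketch, assuming a two-sided one-sample t-test): the critical value shrinks as n grows, while the significance level itself stays fixed at .05.

```python
from scipy import stats

alpha = 0.05
for n in (5, 10, 30, 100):
    # Two-sided critical t value for this sample size
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    print(f"n = {n:3d}  ->  reject if |t| > {t_crit:.3f}")
```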
The Wikipedia article on Type I and Type II errors has a good explanation of this:
https://en.wikipedia.org/wiki/Type_I_and_type_II_errors