
Tuesday, September 14, 2010

Sample Size

The size of a sample influences the cost of a study, as well as the usefulness of the results. A sample that is too small can miss important information. One that is too large is costly and cumbersome.



Often, researchers need to know the smallest sample that can be taken and yet still have estimates that are accurate.



Decision-makers first agree on the amount of error they will tolerate in the results. This is called the margin of error (E).



Along with margin of error, researchers also assign a critical value (C.V.) that is based upon the probability for extreme values in the population.



These two factors are combined with knowledge about the population's standard deviation (sigma) to reach a recommended sample size.



n = [(C.V. * sigma) / E]^2



In order to apply the Central Limit Theorem, the common rule of thumb is a minimum sample size of 30. However, if the population is bell-shaped, it can be smaller.
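As a sketch, the sample-size formula above can be computed in a few lines of Python. The critical value, sigma, and margin of error below are made-up values for illustration only:

```python
import math

def sample_size(crit_val, sigma, margin_of_error):
    """Smallest n giving estimates within the tolerated margin of error."""
    n = (crit_val * sigma / margin_of_error) ** 2
    # Round up: a fractional observation is not possible, and rounding
    # down would give a margin of error larger than the one agreed on.
    return math.ceil(n)

# Hypothetical inputs: z critical value of 1.96 (95% confidence),
# population standard deviation of 30, tolerated error of 5 units.
n = sample_size(1.96, 30, 5)
print(n)  # 139
```

Note that the result is rounded up, not to the nearest whole number, so the margin of error is never exceeded.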

Friday, July 23, 2010

Margin of Error

Margin of Error (E) is the error that can be tolerated when estimating a value.

For confidence intervals, it is calculated as the critical value multiplied by the standard error -

E = Crit Val * Std Err

First, you look up the critical value from the probability table (t or z), then you calculate the standard error. Multiply these together.
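The steps above can be sketched in Python. For a mean, the standard error is sigma divided by the square root of n; the critical value and the other inputs below are illustrative:

```python
import math

def margin_of_error(crit_val, sigma, n):
    """E = critical value * standard error, where the standard
    error of a mean is sigma / sqrt(n)."""
    std_err = sigma / math.sqrt(n)
    return crit_val * std_err

# Hypothetical inputs: 95% z critical value, sigma of 12, sample of 36.
E = margin_of_error(1.96, 12, 36)
print(round(E, 2))  # 3.92
```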

Margin of Error tells you how much 'cushion' to place on your estimated value.

This cushion will be larger or smaller depending on the critical value that the researcher has chosen.

However, to determine sample size (n), the margin of error is chosen, not calculated.

For example, a buyer wants to know the sample size needed to estimate the average cost of shoes. He needs the estimate to be within ten dollars of the true population mean.

In this case, you will use E=10 in the formula for solving sample size.
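Carrying the shoe example through the sample-size formula might look like this. The critical value and the population standard deviation are not given in the example, so both are assumptions chosen for illustration:

```python
import math

E = 10          # tolerated error: within ten dollars of the true mean
crit_val = 1.96 # assumed: z critical value for 95% confidence
sigma = 40      # assumed: population standard deviation of shoe prices

# n = [(C.V. * sigma) / E]^2, rounded up to a whole observation
n = math.ceil((crit_val * sigma / E) ** 2)
print(n)  # 62
```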

Alpha

Alpha is chosen and represents the level of error the researcher can tolerate. Alpha is the probability of rejecting a true null hypothesis. The corresponding area under the curve is referred to as the rejection region.

Alpha corresponds with a critical value. Graphically, it is defined as a 'tail' region - that is, the diminishing area under a bell-shaped curve that extends either left of a negative critical value or right of a positive critical value.

Assuming that a hypothesis is true, sample measurements are not expected to fall in this tail region, since its area is small. When such a sample measurement does occur, it is unlikely under the hypothesis and therefore suggests that the hypothesis could be wrong. Researchers will reject a hypothesis if its test statistic falls into this alpha region.

However, unlikely values do still occur. When the hypothesis is rejected because of an unlikely sample measurement, even though the hypothesis is in fact true, this is called "Type I error."

Popular alpha values are .01, .05, and .10.

If an alpha value of .10 is used, then a Type I error will occur 10% of the time when the null hypothesis is true.

The terms type I error and alpha are sometimes used synonymously, depending on context.
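The link between alpha and its critical value can be sketched with the standard library's normal distribution. For a two-tailed test, alpha is split between the two tails, so the critical value cuts off alpha/2 in each tail:

```python
from statistics import NormalDist

# For each popular alpha, find the z critical value that leaves
# alpha/2 in the upper tail of the standard normal curve.
for alpha in (0.01, 0.05, 0.10):
    crit_val = NormalDist().inv_cdf(1 - alpha / 2)
    print(alpha, round(crit_val, 2))
```

This reproduces the familiar values 2.58, 1.96, and 1.64 for alphas of .01, .05, and .10.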

Critical Value

A critical value (C.V.) is a number that is used to make estimates and test hypotheses. Critical values always correspond to a probability.

This number marks a distance from the center of a bell-shaped graph, either the z or t distribution. The area between the center and the C.V. represents the probability associated with the C.V.

For example, using the z distribution, the area between the center and 1.96 is 47.5% of the total. When you also include the area between -1.96 and the center, the probability doubles to 95%.
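The 1.96 example can be checked directly with the standard normal distribution from Python's standard library:

```python
from statistics import NormalDist

# Area under the standard normal curve between the center (0) and z = 1.96.
one_side = NormalDist().cdf(1.96) - NormalDist().cdf(0)

# Including the mirror-image area between -1.96 and 0 doubles it,
# giving the familiar 95% confidence level.
both_sides = 2 * one_side

print(round(one_side, 3), round(both_sides, 3))  # 0.475 0.95
```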

Alpha and Confidence Level are probabilities that correspond to critical values.