How the Central Limit Theorem Is Used in Statistics
The normal distribution is used to help measure the accuracy of many statistics, including the sample mean, through an important result called the Central Limit Theorem. This theorem gives you the ability to measure how much the means of various samples will vary, without having to take any other sample means to compare them with. By taking this variability into account, you can use your data to answer questions about a population, such as “What’s the mean household income for the whole U.S.?” or “This report said 75% of all gift cards go unused; is that really true?” (These two particular analyses are made possible by applications of the Central Limit Theorem called confidence intervals and hypothesis tests, respectively.)
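To make the confidence-interval idea concrete, here is a minimal sketch in Python. The income figures are simulated, not real U.S. data, and the lognormal parameters are arbitrary assumptions chosen only to give a skewed, income-like shape; the 1.96 multiplier comes from the normal distribution that the Central Limit Theorem justifies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample of 500 household incomes (simulated, right-skewed).
incomes = rng.lognormal(mean=11.0, sigma=0.5, size=500)

# Sample mean and its standard error (standard deviation / sqrt(n)).
xbar = incomes.mean()
se = incomes.std(ddof=1) / np.sqrt(len(incomes))

# 95% confidence interval for the population mean, using the CLT's
# normal approximation: sample mean plus or minus 1.96 standard errors.
lower, upper = xbar - 1.96 * se, xbar + 1.96 * se
print(f"mean = {xbar:,.0f}, 95% CI = ({lower:,.0f}, {upper:,.0f})")
```

Even though individual incomes are far from normal, the interval relies only on the sample mean being approximately normal, which the CLT guarantees for a sample this large.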
The Central Limit Theorem (CLT for short) basically says that the distribution of the sample means has an approximate normal distribution, no matter what the distribution of the original data looks like, as long as the sample size is large enough (usually at least 30) and all samples are the same size. And it doesn’t just apply to the sample mean; the CLT is also true for other sample statistics, such as the sample proportion. Because statisticians know so much about the normal distribution, these analyses are much easier.
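You can watch the CLT at work with a short simulation. The sketch below (illustrative, not from the original text) draws many samples of size 30 from an exponential distribution, which is strongly skewed and nothing like a bell curve, and checks that the resulting sample means cluster around the population mean with a spread close to the CLT's prediction of the population standard deviation divided by the square root of the sample size.

```python
import numpy as np

rng = np.random.default_rng(42)

# Skewed "population": exponential with mean 1 and standard deviation 1.
n = 30            # size of each sample (the usual CLT rule of thumb)
num_samples = 10_000

# Draw 10,000 samples of size 30 and compute each sample's mean.
samples = rng.exponential(scale=1.0, size=(num_samples, n))
sample_means = samples.mean(axis=1)

# The CLT predicts the sample means are approximately normal with
# mean 1 and standard deviation 1 / sqrt(30) ≈ 0.183.
print("mean of sample means:", sample_means.mean())
print("std of sample means: ", sample_means.std())
print("CLT prediction:      ", 1 / np.sqrt(n))
```

Plotting a histogram of `sample_means` would show a familiar bell shape, even though a histogram of any single sample would be sharply skewed to the right.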