Statistical Decision Theory - dummies

Statistical Decision Theory

By John Pezzullo

Statistical decision theory is perhaps the largest branch of statistics. It encompasses all the famous (and many not-so-famous) significance tests — Student t tests, chi-square tests, analysis of variance (ANOVA), Pearson correlation tests, Wilcoxon and Mann-Whitney tests, and on and on.

In its most basic form, statistical decision theory deals with determining whether or not some real effect is present in your data. The word effect can refer to different things in different circumstances. Examples of effects include the following:

  • The average value of something may be different in one group compared to another. For example, males may have higher hemoglobin values, on average, than females; the effect of gender on hemoglobin can be quantified by the difference in mean hemoglobin between males and females.

    Or subjects treated with a drug may have a higher recovery rate than subjects given a placebo; the effect size could be expressed as the difference in recovery rate (drug minus placebo) or as the ratio of the odds of recovery for the drug relative to the placebo (the odds ratio).

  • The average value of something may be different from zero (or from some other specified value). For example, the average change in body weight over 12 weeks in a group of subjects undergoing physical therapy may be different from zero.

  • Two numerical variables may be associated (also called correlated). For example, if obesity is associated with hypertension, then body mass index may be correlated with systolic blood pressure. This effect is often quantified by the Pearson correlation coefficient.