## Articles From Deborah J. Rumsey


Article / Updated 06-12-2023

Hypothesis tests are used to test the validity of a claim that is made about a population. The claim that's on trial, in essence, is called the null hypothesis (H0). The alternative hypothesis (Ha) is the one you would believe if the null hypothesis is concluded to be untrue. The evidence in the trial is your data and the statistics that go along with them.

All hypothesis tests ultimately use a p-value to weigh the strength of the evidence (what the data are telling you about the population). The p-value is a number between 0 and 1, interpreted in the following way:

- A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject it.
- A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject it.
- P-values very close to the cutoff (0.05) are considered marginal (the decision could go either way).

Always report the p-value so your readers can draw their own conclusions.

How to find a p-value from a test statistic

When you test a hypothesis about a population, you compute a test statistic and use it to find a p-value, which determines whether you reject the null hypothesis. To find a p-value from a test statistic, you look up the statistic on the appropriate distribution's table (for a Z test statistic, a Z-table) and determine its corresponding probability.
The following figure shows the locations of a test statistic and the corresponding conclusions. Note that if the alternative hypothesis is the less-than alternative, you reject H0 only if the test statistic falls in the left tail of the distribution (below –2). Similarly, if Ha is the greater-than alternative, you reject H0 only if the test statistic falls in the right tail (above 2).

To find a p-value from a test statistic:

1. Look up your test statistic on the appropriate distribution; in this case, the standard normal (Z-) distribution, using the Z-tables below.
2. Find the probability that Z is beyond (more extreme than) your test statistic:
   - If Ha contains a less-than alternative, find the probability that Z is less than your test statistic (that is, look up your test statistic on the Z-table and find its corresponding probability). This is the p-value. (Note: In this case, your test statistic is usually negative.)
   - If Ha contains a greater-than alternative, find the probability that Z is greater than your test statistic (look up your test statistic on the Z-table, find its corresponding probability, and subtract it from one). The result is your p-value. (Note: In this case, your test statistic is usually positive.)
   - If Ha contains a not-equal-to alternative, find the probability that Z is beyond your test statistic and double it. There are two cases:
     - If your test statistic is negative, find the probability that Z is less than your test statistic (look it up on the Z-table and find its corresponding probability), then double this probability to get the p-value.
     - If your test statistic is positive, find the probability that Z is greater than your test statistic (look it up on the Z-table, find its corresponding probability, and subtract it from one), then double the result to get the p-value.
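As a quick illustration of these lookup rules, the three cases can be computed in a few lines of Python. This is a sketch, not part of the original article; the function names are mine, and the standard normal CDF is built from `math.erf`, so no statistics library is needed.

```python
import math

def norm_cdf(z):
    # Standard normal CDF, Phi(z), computed via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_value(z, alternative):
    # p-value for a Z test statistic.
    # alternative is "less", "greater", or "two-sided".
    if alternative == "less":
        return norm_cdf(z)              # area to the left of z
    if alternative == "greater":
        return 1 - norm_cdf(z)          # area to the right of z
    # not-equal-to alternative: double the tail area beyond |z|
    return 2 * (1 - norm_cdf(abs(z)))

# A test statistic of -1.25 with a less-than alternative:
print(round(p_value(-1.25, "less"), 4))       # 0.1056
print(round(p_value(-1.25, "two-sided"), 4))  # 0.2113 (about 2 x 0.1056)
```

The doubling in the two-sided case works because the Z-distribution is symmetric, so the area beyond |z| in one tail equals the area beyond −|z| in the other.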
Why double the probabilities if your Ha contains a not-equal-to alternative? Think of the not-equal-to alternative as the combination of the greater-than alternative and the less-than alternative. If you've got a positive test statistic, its p-value accounts only for the greater-than portion of the not-equal-to alternative; double it to account for the less-than portion. (Doubling one p-value is possible because the Z-distribution is symmetric.) Similarly, if you've got a negative test statistic, its p-value accounts only for the less-than portion; double it to also account for the greater-than portion.

For example, when testing Ho: p = 0.25 versus Ha: p < 0.25, the p-value turns out to be 0.1056. This is because the test statistic was –1.25, and when you look this number up on the Z-table (in the appendix) you find a probability of 0.1056 of being less than this value. If you had been testing the two-sided alternative, Ha: p ≠ 0.25, the p-value would be 2 × 0.1056 = 0.2112.

If the results are likely to have occurred under the claim, you fail to reject Ho (like a jury decides not guilty). If the results are unlikely to have occurred under the claim, you reject Ho (like a jury decides guilty). The cutoff point between rejecting Ho and failing to reject Ho is another whole can of worms that I dissect in the next section (no pun intended).

Making Conclusions

After finding the p-value from a test statistic, you move to a critical stage: making conclusions.
To draw conclusions about Ho (reject or fail to reject) based on a p-value, you need to set a predetermined cutoff point where only those p-values less than or equal to the cutoff will result in rejecting Ho. This cutoff point is called the alpha level (α), or significance level for the test. While 0.05 is a very popular cutoff value for rejecting Ho, cutoff points and resulting decisions can vary: some people use stricter cutoffs, such as 0.01, requiring more evidence before rejecting Ho, and others use less strict cutoffs, such as 0.10, requiring less evidence.

If Ho is rejected (that is, the p-value is less than or equal to the predetermined significance level), the researcher can say they've found a statistically significant result. A result is statistically significant if it's too rare to have occurred by chance, assuming Ho is true. If you get a statistically significant result, you have enough evidence to reject the claim, Ho, and conclude that something different or new is in effect (that is, Ha). The significance level can be thought of as the highest possible p-value that would reject Ho and declare the results statistically significant.

Following are the general rules for making a decision about Ho based on a p-value:

- If the p-value is less than or equal to your significance level, it meets your requirement for having enough evidence against Ho; you reject Ho.
- If the p-value is greater than your significance level, your data failed to show evidence beyond a reasonable doubt; you fail to reject Ho.

However, if you plan to make decisions about Ho by comparing the p-value to your significance level, you must decide on your significance level ahead of time. It wouldn't be fair to change your cutoff point after you've gotten a sneak peek at what's happening in the data.
You may wonder whether it's okay to say "accept Ho" instead of "fail to reject Ho." The answer is a big no. In a hypothesis test, you are not trying to show whether Ho is true (which "accept" implies); indeed, if you knew whether Ho was true, you wouldn't be doing the hypothesis test in the first place. You're trying to show whether you have enough evidence to say Ho is false, based on your data. Either you have enough evidence to say it's false (in which case you reject Ho), or you don't have enough evidence to say it's false (in which case you fail to reject Ho).

Setting boundaries for rejecting Ho

These guidelines help you make a decision (reject or fail to reject Ho) based on a p-value when your significance level is 0.05:

- If the p-value is less than 0.01 (very small), the results are considered highly statistically significant; reject Ho.
- If the p-value is between 0.01 and 0.05 (but not super close to 0.05), the results are considered statistically significant; reject Ho.
- If the p-value is really close to 0.05 (like 0.051 or 0.049), the results should be considered marginally significant; the decision could go either way.
- If the p-value is greater than (but not super close to) 0.05, the results are considered non-significant; you fail to reject Ho.

When you hear researchers say their results are statistically significant, look for the p-value and make your own decision; their predetermined significance level may be different from yours. If the p-value isn't stated, ask for it.
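These guidelines can be turned into a tiny decision helper. The sketch below is illustrative only: the function name and the width of the "marginal" band around the cutoff are my own choices, not rules from the article.

```python
def decide(p, alpha=0.05, marginal_band=0.005):
    # Decision about H0 from a p-value and a preset significance level.
    # marginal_band is an illustrative zone around alpha (e.g., 0.049-0.051
    # when alpha = 0.05) where the result is best reported as marginal.
    if abs(p - alpha) <= marginal_band:
        return "marginal: report the p-value and let readers decide"
    if p <= alpha:
        return "reject H0 (statistically significant)"
    return "fail to reject H0"

print(decide(0.026))   # reject H0 (statistically significant)
print(decide(0.1056))  # fail to reject H0
print(decide(0.049))   # marginal: report the p-value and let readers decide
```

Note that alpha must be fixed before looking at the data; changing it afterward defeats the purpose of the test.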
Testing varicose veins

As an example of deciding whether to reject Ho, suppose a claim states that 25 percent of all women in the U.S. have varicose veins, and the p-value for the test was found to be 0.1056. This p-value is fairly large and indicates very weak evidence against Ho by almost anyone's standards, because it's greater than 0.05 and even slightly greater than 0.10 (considered a very large significance level). In this case you fail to reject Ho: you didn't have enough evidence to say the proportion of women with varicose veins is less than 0.25 (your alternative hypothesis). This isn't declared to be a statistically significant result.

But say your p-value had been something like 0.026. A reader with a personal cutoff point of 0.05 would reject Ho in this case, because the p-value (0.026) is less than 0.05. The reader's conclusion would be that the proportion of women with varicose veins isn't 0.25; according to Ha in this case, you conclude it's less than 0.25, and the results are statistically significant. However, a reader whose significance level is 0.01 wouldn't have enough evidence (based on your sample) to reject Ho, because the p-value of 0.026 is greater than 0.01. Those results wouldn't be statistically significant.

Finally, if the p-value turned out to be 0.049 and your significance level is 0.05, you can go by the book and say that because it's less than 0.05 you reject Ho, but you really should say your results are marginal and let the reader decide.
This example shows how varying significance levels can lead different readers to different interpretations of the same result, which is why context matters when assessing significance.

Article / Updated 06-02-2023

In statistics, the r value, or correlation coefficient, is a statistical measure of the strength of a linear relationship between two variables. If that sounds complicated, don't worry; it really isn't, and I explain it further down in this article. But before we get into r values, there's some background information you should understand first.

Today's media provide a steady stream of information, including reports on all the latest links that have been found by researchers. For example, recently I heard that increased video game use can negatively affect a child's attention span, the amount of a certain hormone in a woman's body can predict when she will enter menopause, and the more depressed you get, the more chocolate you eat, and the more chocolate you eat, the more depressed you get (how depressing!).

Some studies are truly legitimate and help improve the quality and longevity of our lives. Other studies are not so clear. For example, one study says that exercising 20 minutes three times a week is better than exercising 60 minutes one time a week, another study says the opposite, and yet another study says there is no difference. If you are a confused consumer when it comes to links and correlations, take heart; this article can help. You'll gain the skills to dissect and evaluate research claims and make your own decisions about those headlines and sound bites you hear each day alerting you to the latest correlation. You'll discover what it truly means for two variables to be correlated, when a cause-and-effect relationship can be concluded, and when and how to predict one variable based on another.

Picturing a relationship with a scatterplot

An article in Garden Gate magazine caught my eye: "Count Cricket Chirps to Gauge Temperature." According to the article, all you have to do is find a cricket, count the number of times it chirps in 15 seconds, add 40, and voilà! You've just estimated the temperature in Fahrenheit.
The National Weather Service Forecast Office even puts out its own "Cricket Chirp Converter." You enter the number of cricket chirps recorded in 15 seconds, and the converter gives you the estimated temperature in four different units, including Fahrenheit and Celsius. A fair amount of research does support the claim that the frequency of cricket chirps is related to temperature. For the purpose of illustration, I've taken only a subset of the data (see the table below).

Cricket Chirps and Temperature Data (Excerpt)

| Number of Chirps (in 15 Seconds) | Temperature (Fahrenheit) |
|---|---|
| 18 | 57 |
| 20 | 60 |
| 21 | 64 |
| 23 | 65 |
| 27 | 68 |
| 30 | 71 |
| 34 | 74 |
| 39 | 77 |

Notice that each observation is composed of two variables that are tied together: the number of times the cricket chirped in 15 seconds (the X variable) and the temperature at the time the data was collected (the Y variable). Statisticians call this type of two-dimensional data bivariate data. Each observation contains one pair of data collected simultaneously; for example, row one of the table depicts the pair (18, 57).

Bivariate data is typically organized in a graph that statisticians call a scatterplot. A scatterplot has two dimensions: a horizontal dimension (the X-axis) and a vertical dimension (the Y-axis). Both axes are numerical; each one contains a number line. In the following sections, I explain how to make and interpret a scatterplot.

Making a scatterplot

Placing observations (or points) on a scatterplot is similar to playing the game Battleship. Each observation has two coordinates; the first corresponds to the first piece of data in the pair (that's the X coordinate, the amount that you go left or right). The second corresponds to the second piece of data in the pair (that's the Y coordinate, the amount that you go up or down). You place the point representing that observation at the intersection of the two coordinates.
The figure below shows a scatterplot for the cricket chirps and temperature data listed in the table above. Because I ordered the data according to their X-values, the points on the scatterplot correspond from left to right to the observations given in the table, in the order listed.

Interpreting a scatterplot

You interpret a scatterplot by looking for trends in the data as you go from left to right:

- If the data show an uphill pattern as you move from left to right, this indicates a positive relationship between X and Y. As the X-values increase (move right), the Y-values increase (move up) by a certain amount.
- If the data show a downhill pattern as you move from left to right, this indicates a negative relationship between X and Y. As the X-values increase (move right), the Y-values decrease (move down) by a certain amount.
- If the data don't seem to resemble any kind of pattern (even a vague one), then no relationship exists between X and Y.

One pattern of special interest is a linear pattern, where the data have the general look of a line going uphill or downhill. Looking at the figure above, you can see a positive linear relationship between the number of cricket chirps and the temperature: as the cricket chirps increase, the temperature increases as well. In this article I explore linear relationships only. A linear relationship between X and Y exists when the pattern of X- and Y-values resembles a line, either uphill (with a positive slope) or downhill (with a negative slope). Other types of trends may exist in addition to the uphill/downhill linear trends (for example, curves or exponential functions); however, those trends are beyond the scope of this article. The good news is that many relationships do fall under the uphill/downhill linear scenario.

It's important to keep in mind that scatterplots show possible associations or relationships between two variables.
However, just because your graph or chart shows something is going on, it doesn't mean that a cause-and-effect relationship exists. For example, a doctor observes that people who take vitamin C each day seem to have fewer colds. Does this mean vitamin C prevents colds? Not necessarily. It could be that people who are more health conscious take vitamin C each day, but they also eat healthier, are not overweight, exercise every day, and wash their hands more often. If this doctor really wants to know whether it's the vitamin C that's doing it, she needs a well-designed experiment that rules out these other factors.

Quantifying linear relationships using the correlation

After the bivariate data have been organized graphically with a scatterplot (see above), and you see some type of linear pattern, the next step is to compute statistics that quantify or measure the extent and nature of the relationship. In the following sections, I discuss correlation, a statistic measuring the strength and direction of a linear relationship between two variables; in particular, how to calculate and interpret correlation and understand its most important properties.

Calculating the correlation coefficient (r)

Above, in the section "Interpreting a scatterplot," I say that data resembling an uphill line have a positive linear relationship and data resembling a downhill line have a negative linear relationship. However, I didn't address whether the linear relationship was strong or weak. The strength of a linear relationship depends on how closely the data resemble a line, and of course varying levels of "closeness to a line" exist. Can one statistic measure both the strength and direction of a linear relationship between two variables? Sure! Statisticians use the correlation coefficient to measure the strength and direction of the linear relationship between two numerical variables X and Y.
The correlation coefficient for a sample of data is denoted by r. Although the street definition of correlation applies to any two items that are related (such as gender and political affiliation), statisticians use this term only in the context of two numerical variables. The formal term for correlation is the correlation coefficient. Many different correlation measures have been created; the one used in this case is called the Pearson correlation coefficient (but from now on I'll just call it the correlation).

The formula for the correlation is

r = Σ(x − x̄)(y − ȳ) / [(n − 1) · Sx · Sy]

where n is the number of pairs of data; x̄ and ȳ are the sample means of all the x-values and all the y-values, respectively; and Sx and Sy are the sample standard deviations of all the x-values and all the y-values, respectively.

Use the following steps to calculate the correlation, r, from a data set:

1. Find the mean of all the x-values (x̄) and the mean of all the y-values (ȳ).
2. Find the standard deviation of all the x-values (call it Sx) and the standard deviation of all the y-values (call it Sy).
3. For each (x, y) pair in the data set, take x minus x̄ and y minus ȳ, and multiply them together to get (x − x̄)(y − ȳ).
4. Add up all the results from Step 3.
5. Divide the sum by Sx · Sy.
6. Divide the result by n − 1, where n is the number of (x, y) pairs. (It's the same as multiplying by 1 over n − 1.) This gives you the correlation, r.

For example, suppose you have the data set (3, 2), (3, 3), and (6, 4). You calculate the correlation coefficient r via the following steps. (Note that for these data, the x-values are 3, 3, 6, and the y-values are 2, 3, 4.)

1. x̄ is 12 ÷ 3 = 4, and ȳ is 9 ÷ 3 = 3.
2. The standard deviations are Sx = 1.73 and Sy = 1.00.
3. The differences multiplied together are: (3 − 4)(2 − 3) = (−1)(−1) = +1; (3 − 4)(3 − 3) = (−1)(0) = 0; (6 − 4)(4 − 3) = (2)(1) = +2.
4. Adding the Step 3 results, you get 1 + 0 + 2 = 3.
5. Dividing by Sx · Sy gives you 3 ÷ (1.73 × 1.00) = 3 ÷ 1.73 = 1.73.
6. Now divide the Step 5 result by 3 − 1 (which is 2), and you get the correlation r = 0.87.

How to interpret r

The value of r is always between +1 and −1. To interpret its meaning, see which of the following values your correlation r is closest to:

- Exactly −1: a perfect downhill (negative) linear relationship
- −0.70: a strong downhill (negative) linear relationship
- −0.50: a moderate downhill (negative) linear relationship
- −0.30: a weak downhill (negative) linear relationship
- 0: no linear relationship
- +0.30: a weak uphill (positive) linear relationship
- +0.50: a moderate uphill (positive) linear relationship
- +0.70: a strong uphill (positive) linear relationship
- Exactly +1: a perfect uphill (positive) linear relationship

If the scatterplot doesn't show at least somewhat of a linear relationship, the correlation doesn't mean much. Why measure the amount of linear relationship if there isn't much of one? You can think of this idea of no linear relationship in two ways: (1) if no relationship at all exists, calculating the correlation doesn't make sense, because correlation applies only to linear relationships; and (2) if a strong relationship exists but it's not linear, the correlation may be misleading, because in some cases a strong curved relationship exists. That's why it's critical to examine the scatterplot first.

The above figure shows examples of what various correlations look like, in terms of the strength and direction of the relationship. Figure (a) shows a correlation of nearly +1, Figure (b) shows a correlation of −0.50, Figure (c) shows a correlation of +0.85, and Figure (d) shows a correlation of +0.15.
Comparing Figures (a) and (c), you see that Figure (a) is nearly a perfect uphill straight line, and Figure (c) shows a very strong uphill linear pattern (but not as strong as Figure (a)). Figure (b) is going downhill, but the points are scattered in a wider band, showing a linear relationship is present, but not as strong as in Figures (a) and (c). Figure (d) doesn't show much of anything happening (and it shouldn't, since its correlation is very close to 0).

Many folks make the mistake of thinking that a correlation of −1 is a bad thing, indicating no relationship. Just the opposite is true! A correlation of −1 means the data are lined up in a perfect straight line, the strongest negative linear relationship you can get. The minus sign simply indicates a negative relationship, a downhill line.

How close is close enough to −1 or +1 to indicate a strong enough linear relationship? Most statisticians like to see correlations beyond at least +0.5 or −0.5 before getting too excited about them. However, don't expect a correlation to always be 0.99; remember, these are real data, and real data aren't perfect.
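The six-step recipe above translates directly into Python. This is a sketch for checking the arithmetic; the function name `pearson_r` is mine, not from the article.

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation, following the article's step-by-step recipe.
    n = len(xs)
    mean_x = sum(xs) / n                                          # Step 1
    mean_y = sum(ys) / n
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs) / (n - 1))  # Step 2:
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys) / (n - 1))  # sample SDs
    # Steps 3-4: sum the cross-products of the deviations
    cross = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Steps 5-6: divide by Sx * Sy, then by n - 1
    return cross / (sx * sy) / (n - 1)

print(round(pearson_r([3, 3, 6], [2, 3, 4]), 2))  # 0.87
```

Running it on the worked example (3, 2), (3, 3), (6, 4) reproduces the r = 0.87 found by hand.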

Cheat Sheet / Updated 01-19-2023

Statistics involves a lot of data analysis, and analysis is built with math notation and formulas — but never fear, your cheat sheet is here to help you organize, understand, and remember the notation and formulas so that when it comes to putting them into practice or to the test, you’re ahead of the game!

View Cheat SheetArticle / Updated 10-06-2022

If you know the standard deviation for a population, you can calculate a confidence interval (CI) for the mean, or average, of that population. When a statistical characteristic that's being measured (such as income, IQ, price, height, quantity, or weight) is numerical, most people want to estimate the mean (average) value for the population. You estimate the population mean, μ, by using a sample mean, x̄, plus or minus a margin of error. The result is called a confidence interval for the population mean, μ.

When the population standard deviation is known, the formula for a confidence interval for a population mean is

x̄ ± z* · σ/√n

where x̄ is the sample mean, σ is the population standard deviation, n is the sample size, and z* is the appropriate value from the standard normal distribution for your desired confidence level.

z*-values for Various Confidence Levels

| Confidence Level | z*-value |
|---|---|
| 80% | 1.28 |
| 90% | 1.645 (by convention) |
| 95% | 1.96 |
| 98% | 2.33 |
| 99% | 2.58 |

The table above shows values of z* for the given confidence levels. Note that these values are taken from the standard normal (Z-) distribution. The area between each z*-value and the negative of that z*-value is (approximately) the confidence percentage. For example, the area between z* = 1.28 and z* = −1.28 is approximately 0.80. The table can be expanded to other confidence percentages as well; it shows only the ones most commonly used.

For this formula to apply, the data either have to come from a normal distribution or, if not, n has to be large enough (at least 30 or so) for the Central Limit Theorem to apply, allowing you to use z*-values in the formula.

To calculate a CI for the population mean (average) under these conditions, do the following:

1. Determine the confidence level and find the appropriate z*-value. Refer to the table above.
2. Find the sample mean (x̄) for the sample size (n). Note: The population standard deviation is assumed to be a known value, σ.
3. Multiply z* by σ, and divide by the square root of n. This calculation gives you the margin of error.
4. Take x̄ plus or minus the margin of error to obtain the CI. The lower end of the CI is x̄ minus the margin of error; the upper end is x̄ plus the margin of error.

For example, suppose you work for the Department of Natural Resources, and you want to estimate, with 95 percent confidence, the mean (average) length of all walleye fingerlings in a fish hatchery pond. Because you want a 95 percent confidence interval, your z*-value is 1.96. Suppose you take a random sample of 100 fingerlings and determine that the average length is 7.5 inches; assume the population standard deviation is 2.3 inches. This means x̄ = 7.5, σ = 2.3, and n = 100. Multiply 1.96 by 2.3 divided by the square root of 100 (which is 10). The margin of error is therefore ±1.96 × (2.3/10) = ±1.96 × 0.23 = ±0.45 inches. Your 95 percent confidence interval for the mean length of walleye fingerlings in this fish hatchery pond is 7.5 ± 0.45 inches. (The lower end of the interval is 7.5 − 0.45 = 7.05 inches; the upper end is 7.5 + 0.45 = 7.95 inches.)

After you calculate a confidence interval, make sure you always interpret it in words a non-statistician would understand. That is, talk about the results in terms of what the person in the problem is trying to find out; statisticians call this interpreting the results "in the context of the problem." In this example you can say: "With 95 percent confidence, the average length of walleye fingerlings in this entire fish hatchery pond is between 7.05 and 7.95 inches, based on my sample data." (Always be sure to include appropriate units.)
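The margin-of-error arithmetic above is easy to script. Here's a minimal sketch (the function name is mine), assuming σ is known and either normally distributed data or n of at least 30:

```python
import math

def ci_mean_known_sigma(xbar, sigma, n, z_star=1.96):
    # CI for a population mean with known sigma: xbar +/- z* sigma / sqrt(n).
    # z_star defaults to the 95 percent confidence value from the table.
    margin = z_star * sigma / math.sqrt(n)
    return xbar - margin, xbar + margin

# Walleye example: xbar = 7.5, sigma = 2.3, n = 100, 95 percent confidence.
low, high = ci_mean_known_sigma(7.5, 2.3, 100)
print(round(low, 2), round(high, 2))  # 7.05 7.95
```

Swapping in a different z* from the table (say, 2.58 for 99 percent confidence) widens the interval, reflecting the higher confidence demanded.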

Article / Updated 09-22-2022

You can calculate a confidence interval (CI) for the mean, or average, of a population even if the standard deviation is unknown or the sample size is small. When a statistical characteristic that's being measured (such as income, IQ, price, height, quantity, or weight) is numerical, most people want to estimate the mean (average) value for the population. You estimate the population mean, μ, by using a sample mean, x̄, plus or minus a margin of error. The result is called a confidence interval for the population mean, μ.

In many situations, you don't know the population standard deviation, σ, so you estimate it with the sample standard deviation, s. Also, the sample size may be small (less than 30), in which case you can't be sure your data came from a normal distribution (and the Central Limit Theorem can't be used). In either situation, you can't use a z*-value from the standard normal (Z-) distribution as your critical value anymore; you have to use a larger critical value, because of not knowing what σ is and/or having less data. The formula for a confidence interval for one population mean in this case is

x̄ ± t* · s/√n

where t* is the critical value from the t-distribution with n − 1 degrees of freedom (where n is the sample size).

The t-table

The t*-values for common confidence levels are found using the last row of the t-table above. The t-distribution has a shape similar to the Z-distribution, except it's flatter and more spread out. For small values of n and a specific confidence level, the critical values on the t-distribution are larger than on the Z-distribution, so when you use the critical values from the t-distribution, the margin of error for your confidence interval will be wider. As the values of n get larger, the t*-values get closer to the z*-values.

To calculate a CI for the population mean (average) under these conditions, do the following:

1. Determine the confidence level and the degrees of freedom, and then find the appropriate t*-value. Refer to the t-table above.
2. Find the sample mean (x̄) and the sample standard deviation (s) for the sample.
3. Multiply t* by s, and divide by the square root of n. This calculation gives you the margin of error.
4. Take x̄ plus or minus the margin of error to obtain the CI. The lower end of the CI is x̄ minus the margin of error; the upper end is x̄ plus the margin of error.

Here's an example of how this works

Suppose you work for the Department of Natural Resources, and you want to estimate, with 95 percent confidence, the mean (average) length of all walleye fingerlings in a fish hatchery pond. You take a random sample of 10 fingerlings and determine that the average length is 7.5 inches and the sample standard deviation is 2.3 inches.

Because you want a 95 percent confidence interval, you determine your t*-value as follows: The t*-value comes from a t-distribution with 10 − 1 = 9 degrees of freedom. Look in the last row of the t-table, where the confidence levels are located, and find the confidence level of 95 percent; this marks the column you need. Then find the row corresponding to df = 9. Intersect the row and column, and you find t* = 2.262. This is the t*-value for a 95 percent confidence interval for the mean with a sample size of 10. (Notice this is larger than the z*-value of 1.96 for the same confidence level.)

You know that the average length is 7.5 inches, the sample standard deviation is 2.3 inches, and the sample size is 10. This means x̄ = 7.5, s = 2.3, and n = 10. Multiply 2.262 by 2.3, and divide by the square root of 10. The margin of error is therefore ±2.262 × (2.3/√10) = ±1.645 inches. Your 95 percent confidence interval for the mean length of all walleye fingerlings in this fish hatchery pond is 7.5 ± 1.645 inches. (The lower end of the interval is 7.5 − 1.645 = 5.86 inches; the upper end is 7.5 + 1.645 = 9.15 inches.) Notice this confidence interval is wider than it would be for a large sample size.
In addition to requiring a larger critical value (t* versus z*), a smaller sample size increases the margin of error, because n is in its denominator. With a smaller sample size, you don't have as much information to "guess" at the population mean. Hence, to keep 95 percent confidence, you need a wider interval than you would have needed with a larger sample size in order to be 95 percent confident that the population mean falls in your interval.

Now, say it in a way others can understand

After you calculate a confidence interval, make sure you always interpret it in words a non-statistician would understand. That is, talk about the results in terms of what the person in the problem is trying to find out; statisticians call this interpreting the results "in the context of the problem." In this example you can say: "With 95 percent confidence, the average length of walleye fingerlings in this entire fish hatchery pond is between 5.86 and 9.15 inches, based on my sample data." (Always be sure to include appropriate units.)
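The walleye calculation above can be checked with a short script. This is a minimal sketch using only the Python standard library; because the standard library has no t-distribution, the t*-value 2.262 is taken straight from the t-table (df = 9, 95 percent confidence) rather than computed.

```python
import math

def t_confidence_interval(xbar, s, n, t_star):
    """CI for a population mean: xbar ± t* * s / sqrt(n)."""
    margin = t_star * s / math.sqrt(n)
    return xbar - margin, xbar + margin, margin

# Walleye fingerling example: xbar = 7.5 in, s = 2.3 in, n = 10.
# t* = 2.262 comes from the t-table row df = 9, 95% confidence column.
low, high, margin = t_confidence_interval(7.5, 2.3, 10, 2.262)
print(round(margin, 3))  # ≈ 1.645
print(low, high)
```

Rounding the margin to 1.645 before adding and subtracting reproduces the article's interval of 5.86 to 9.15 inches.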

Article / Updated 09-22-2022

You can find a confidence interval (CI) for the difference between the means, or averages, of two population samples, even if the population standard deviations are unknown and/or the sample sizes are small. The goal of many statistical surveys and studies is to compare two populations, such as men versus women, low- versus high-income families, and Republicans versus Democrats. When the characteristic being compared is numerical (for example, height, weight, or income), the object of interest is the amount of difference in the means (averages) for the two populations. For example, you may want to compare the difference in average age of Republicans versus Democrats, or the difference in average incomes of men versus women.

You estimate the difference between two population means, μ1 – μ2, by taking a sample from each population (say, sample 1 and sample 2) and using the difference of the two sample means, x̄1 – x̄2, plus or minus a margin of error. The result is a confidence interval for the difference of two population means, μ1 – μ2.

There are two situations where you cannot use z* when computing the confidence interval. The first is when you do not know the population standard deviations, σ1 and σ2; in this case you estimate them with the sample standard deviations, s1 and s2. The second is when the sample sizes are small (less than 30); in this case you can't be sure whether your data came from a normal distribution. In either of these situations, a confidence interval for the difference in the two population means is

(x̄1 – x̄2) ± t* × √(s1²/n1 + s2²/n2)

where t* is the critical value from the t-distribution with n1 + n2 – 2 degrees of freedom; n1 and n2 are the two sample sizes, respectively; and s1 and s2 are the two sample standard deviations. This t*-value is found on the following t-table by intersecting the row for df = n1 + n2 – 2 with the column for the confidence level you need, as indicated by looking at the last row of the table.
To calculate a CI for the difference between two population means, do the following:

1. Determine the confidence level and degrees of freedom (n1 + n2 – 2) and find the appropriate t*-value. Refer to the above t-table.

2. Identify x̄1, s1, and n1 for sample 1.

3. Identify x̄2, s2, and n2 for sample 2.

4. Find the difference, x̄1 – x̄2, between the sample means.

5. Calculate the confidence interval using the formula (x̄1 – x̄2) ± t* × √(s1²/n1 + s2²/n2).

Suppose you want to estimate, with 95% confidence, the difference between the mean (average) lengths of the cobs of two varieties of sweet corn (allowing them to grow the same number of days under the same conditions). Call the two varieties Corn-e-stats (group 1) and Stats-o-sweet (group 2). Assume that you don't know the population standard deviations, so you use the sample standard deviations instead; suppose they turn out to be s1 = 0.40 and s2 = 0.50 inches, respectively. Suppose the sample sizes, n1 and n2, are each only 15.

To calculate the CI, you first need to find the t*-value on the t-distribution with (15 + 15 – 2) = 28 degrees of freedom. Using the above t-table, you look at the row for 28 degrees of freedom and the column representing a confidence level of 95% (see the labels on the last row of the table); intersect them and you see t* = 2.048.

For both groups, you took a random sample of 15 cobs, with the Corn-e-stats variety averaging x̄1 = 8.5 inches and Stats-o-sweet averaging x̄2 = 7.5 inches. The difference between the sample means is 8.5 – 7.5 = +1 inch. This means the average for Corn-e-stats minus the average for Stats-o-sweet is positive, making Corn-e-stats the larger of the two varieties, in terms of this sample. Is that difference enough to generalize to the entire population, though? That's what this confidence interval is going to help you decide.
Using the rest of the information you are given, find the confidence interval for the difference in mean cob length for the two varieties. The margin of error is t* × √(s1²/n1 + s2²/n2) = 2.048 × √(0.40²/15 + 0.50²/15) ≈ 0.34 inches. Your 95% confidence interval for the difference between the average lengths for these two varieties of sweet corn is therefore 1 inch, plus or minus about 0.34 inches. (The lower end of the interval is 1 – 0.34 = 0.66 inches; the upper end is 1 + 0.34 = 1.34 inches.)

Notice all the values in this interval are positive. That means Corn-e-stats is estimated to be longer than Stats-o-sweet, based on your data. The temptation is to say, "Well, I knew Corn-e-stats corn was longer because its sample mean was 8.5 inches and Stats-o-sweet was only 7.5 inches on average. Why do I even need a confidence interval?" All those two numbers tell you is something about those 30 ears of corn sampled. You also need to factor in variation, using the margin of error, to be able to say something about the entire populations of corn.
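The same arithmetic generalizes to any pair of samples. Below is a minimal sketch (standard library only) of the two-sample formula (x̄1 – x̄2) ± t* × √(s1²/n1 + s2²/n2). The sample statistics and the table value t* = 2.048 (df = 28, 95 percent confidence) are illustrative assumptions, not data from the article.

```python
import math

def two_mean_ci(xbar1, xbar2, s1, s2, n1, n2, t_star):
    """CI for a difference of means: (xbar1 - xbar2) ± t* * sqrt(s1²/n1 + s2²/n2)."""
    diff = xbar1 - xbar2
    margin = t_star * math.sqrt(s1**2 / n1 + s2**2 / n2)
    return diff - margin, diff + margin

# Hypothetical samples: n1 = n2 = 15, so df = 28 and t* = 2.048 from the t-table.
# Larger standard deviations than the corn example, same 1-inch difference in means.
low, high = two_mean_ci(8.2, 7.2, 1.2, 1.5, 15, 15, 2.048)
print(round(low, 2), round(high, 2))
```

Note that with these noisier (hypothetical) samples the interval dips just below zero, so you could not conclude that one population mean is larger than the other, even though the sample difference is still 1 inch. That is exactly why the confidence interval, not the sample means alone, drives the conclusion.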

Article / Updated 08-10-2022

When you perform a hypothesis test in statistics, a p-value helps you determine the significance of your results. Hypothesis tests are used to test the validity of a claim that is made about a population. This claim that's on trial, in essence, is called the null hypothesis. The alternative hypothesis is the one you would believe if the null hypothesis is concluded to be untrue. The evidence in the trial is your data and the statistics that go along with it. All hypothesis tests ultimately use a p-value to weigh the strength of the evidence (what the data are telling you about the population). The p-value is a number between 0 and 1 and is interpreted in the following way:

- A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
- A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
- P-values very close to the cutoff (0.05) are considered marginal (could go either way). Always report the p-value so your readers can draw their own conclusions.

Hypothesis test example

For example, suppose a pizza place claims their delivery times are 30 minutes or less on average, but you think it's more than that. You conduct a hypothesis test because you believe the null hypothesis, H0, that the mean delivery time is 30 minutes max, is incorrect. Your alternative hypothesis, Ha, is that the mean time is greater than 30 minutes. You randomly sample some delivery times and run the data through the hypothesis test, and your p-value turns out to be 0.001, which is much less than 0.05. In real terms, using the 0.05 cutoff means there is only a 0.05 probability that you would mistakenly reject the pizza place's claim that their delivery time is less than or equal to 30 minutes when that claim is actually true.
Since typically we are willing to reject the null hypothesis when this probability is less than 0.05, you conclude that the pizza place is wrong; their delivery times are in fact more than 30 minutes on average, and you want to know what they’re gonna do about it! (Of course, you could be wrong by having sampled an unusually high number of late pizza deliveries just by chance.)
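To see how a p-value like this might be produced, here is a rough sketch. The delivery times below are invented for illustration, and since the Python standard library has no t-distribution, the sketch uses the normal approximation for the upper-tail p-value (reasonable for larger samples; a real analysis of a small sample would use a t-test).

```python
import math
from statistics import NormalDist, mean, stdev

# Hypothetical sample of delivery times in minutes. H0: mu <= 30, Ha: mu > 30.
times = [34, 38, 29, 41, 35, 33, 36, 40, 31, 37, 39, 32, 36, 35, 38]
n = len(times)
xbar, s = mean(times), stdev(times)

# Test statistic: how many standard errors the sample mean sits above 30.
test_stat = (xbar - 30) / (s / math.sqrt(n))

# One-sided p-value: probability of a result at least this extreme under H0,
# using the standard normal as an approximation.
p_value = 1 - NormalDist().cdf(test_stat)
print(round(p_value, 4))  # a very small p-value, so reject H0
```

With a p-value far below 0.05, you would reject the null hypothesis and conclude the mean delivery time exceeds 30 minutes, just as in the pizza example above.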

Article / Updated 08-08-2022

You can use the z-table to find a full set of "less-than" probabilities for a wide range of z-values. To use the z-table to find probabilities for a statistical sample with a standard normal (Z-) distribution, follow the steps below.

Using the Z-table

1. Go to the row that represents the ones digit and the first digit after the decimal point (the tenths digit) of your z-value.

2. Go to the column that represents the second digit after the decimal point (the hundredths digit) of your z-value.

3. Intersect the row and column from Steps 1 and 2. This result represents p(Z < z), the probability that the random variable Z is less than the value z (also known as the percentage of z-values that are less than the given z-value).

For example, suppose you want to find p(Z < 2.13). Using the z-table below, find the row for 2.1 and the column for 0.03. Intersect that row and column to find the probability: 0.9834. Therefore p(Z < 2.13) = 0.9834. Noting that the total area under any normal curve (including the standardized normal curve) is 1, it follows that p(Z < 2.13) + p(Z > 2.13) = 1. Therefore, p(Z > 2.13) = 1 – p(Z < 2.13) = 1 – 0.9834 = 0.0166.

Symmetry in the distribution

Suppose you want to look for p(Z < –2.13). You find the row for –2.1 and the column for 0.03. Intersect the row and column and you find 0.0166; that means p(Z < –2.13) = 0.0166. Observe that this happens to equal p(Z > +2.13). The reason is that the normal distribution is symmetric, so the tail of the curve below –2.13, representing p(Z < –2.13), looks exactly like the tail above 2.13, representing p(Z > +2.13).
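If you don't have a z-table handy, the same "less-than" probabilities can be computed directly. Here is a minimal sketch using Python's standard library, where `statistics.NormalDist` plays the role of the standard normal distribution:

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal: mean 0, standard deviation 1

# p(Z < 2.13): the same value as the table lookup for row 2.1, column 0.03.
p_less = Z.cdf(2.13)
print(round(p_less, 4))  # 0.9834, matching the z-table

# p(Z > 2.13) = 1 - p(Z < 2.13), since the total area under the curve is 1.
print(round(1 - p_less, 4))  # 0.0166

# Symmetry: p(Z < -2.13) equals p(Z > +2.13).
print(round(Z.cdf(-2.13), 4))  # 0.0166
```

This reproduces the worked example above without any table lookup, and it avoids the rounding built into the four-decimal table entries.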

Cheat Sheet / Updated 02-25-2022

This cheat sheet is for you to use as a quick resource for finding important basic statistical formulas (such as the mean, standard deviation, and z-values); important and always-useful probability definitions and rules (such as independence, the multiplication rule, and the addition rule); and ten quick ways to spot statistical mistakes, either in your own work or out in the media as a consumer of statistical information.

Cheat Sheet / Updated 02-23-2022

Statistics II elaborates on Statistics I and moves into new territories, including multiple regression, analysis of variance (ANOVA), Chi-square tests, nonparametric procedures, and other key topics. Knowing which data analysis to use and why is important, as is familiarity with computer output if you want your numbers to give you dependable results.
