# Calculation & Analysis Articles

Whether you're trying to figure out the difference between covariance and correlation or breaking out a regression equation, our articles have the info you need.

## Articles From Calculation & Analysis


Article / Updated 10-06-2022

After you estimate the population regression line, you can check whether the regression equation makes sense by using the coefficient of determination, also known as $R^2$ (R squared). This is used as a measure of how well the regression equation actually describes the relationship between the dependent variable (Y) and the independent variable (X). There may be no real relationship between the dependent and independent variables; simple regression generates results even if this is the case. It is therefore important to subject the regression results to some key tests that enable you to determine whether the results are reliable.

The coefficient of determination, $R^2$, is a statistical measure that shows the proportion of variation explained by the estimated regression line. Variation refers to the sum of the squared differences between the values of Y and the mean value of Y, expressed mathematically as

$$\sum_{i=1}^{n}(Y_i - \bar{Y})^2$$

$R^2$ always takes on a value between 0 and 1. The closer $R^2$ is to 1, the better the estimated regression equation fits or explains the relationship between X and Y.

The expression above is also known as the total sum of squares (TSS). This sum can be divided into the following two categories:

- **Explained sum of squares (ESS):** Also known as the explained variation, the ESS is the portion of total variation that measures how well the regression equation explains the relationship between X and Y. You compute the ESS with the formula $\sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2$.
- **Residual sum of squares (RSS):** Also known as the unexplained variation, the RSS is the portion of total variation that measures the discrepancies (errors) between the actual values of Y and those estimated by the regression equation. You compute the RSS with the formula $\sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2$. The smaller the value of RSS relative to ESS, the better the regression line fits or explains the relationship between the dependent and independent variables.
- **Total sum of squares (TSS):** The sum of RSS and ESS equals TSS.
$R^2$ is the ratio of the explained sum of squares (ESS) to the total sum of squares (TSS):

$$R^2 = \frac{ESS}{TSS}$$

You can also use this formula:

$$R^2 = 1 - \frac{RSS}{TSS}$$

Based on the definition of $R^2$, its value can never be negative. Also, $R^2$ can't be greater than 1, so

$$0 \le R^2 \le 1$$

With simple regression analysis, $R^2$ equals the square of the correlation between X and Y.

The coefficient of determination is used as a measure of how well a regression line explains the relationship between a dependent variable (Y) and an independent variable (X). The closer the coefficient of determination is to 1, the more closely the regression line fits the sample data. The coefficient of determination is computed from the sums of squares. These calculations are summarized in the following table.

- To compute ESS, you subtract the mean value of Y from each of the estimated values of Y; each term is squared and then added together: $ESS = \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2 = 0.54$.
- To compute RSS, you subtract the estimated value of Y from each of the actual values of Y; each term is squared and then added together: $RSS = \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2 = 0.14$.
- To compute TSS, you subtract the mean value of Y from each of the actual values of Y; each term is squared and then added together. Alternatively, you can simply add ESS and RSS: $TSS = ESS + RSS = 0.54 + 0.14 = 0.68$.

The coefficient of determination is the ratio of ESS to TSS:

$$R^2 = \frac{ESS}{TSS} = \frac{0.54}{0.68} \approx 0.7941$$

This shows that 79.41 percent of the variation in Y is explained by variation in X. Because the coefficient of determination can't exceed 100 percent, a value of 79.41 percent indicates that the regression line closely matches the actual sample data.
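The sums-of-squares bookkeeping above can be sketched in a few lines of Python. The book's original data table isn't reproduced here, so the X and Y values below are hypothetical, and `np.polyfit` stands in for whatever method you use to estimate the regression line; the point is only to show that TSS = ESS + RSS and that $R^2$ equals the squared correlation in simple regression.

```python
import numpy as np

# Hypothetical sample data (not the book's original table).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Estimate the simple regression line Y-hat = b0 + b1*X.
b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x

ess = np.sum((y_hat - y.mean()) ** 2)   # explained sum of squares
rss = np.sum((y - y_hat) ** 2)          # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)       # total sum of squares

r_squared = ess / tss                   # coefficient of determination
print(np.isclose(tss, ess + rss))       # TSS = ESS + RSS
print(np.isclose(r_squared, np.corrcoef(x, y)[0, 1] ** 2))
```

With any data set, the two printed checks come out true: the total variation splits exactly into explained and unexplained pieces, and the ratio matches the squared correlation between X and Y.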

Article / Updated 09-22-2022

You can use the Central Limit Theorem to convert a sampling distribution to a standard normal random variable. Based on the Central Limit Theorem, if you draw samples of size greater than or equal to 30, then the sample mean is (approximately) a normally distributed random variable. To determine probabilities for the sample mean with the standard normal tables, you must convert the sample mean to a standard normal random variable.

The standard normal distribution is the special case where the mean equals 0 and the standard deviation equals 1. For any normally distributed random variable X with mean $\mu$ and standard deviation $\sigma$, you find the corresponding standard normal random variable (Z) with the following equation:

$$Z = \frac{X - \mu}{\sigma}$$

For the sampling distribution of $\bar{X}$, the corresponding equation is

$$Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$$

As an example, say that there are 10,000 stocks trading each day on a regional stock exchange. It's known from historical experience that the returns to these stocks have a mean value of 10 percent per year and a standard deviation of 20 percent per year. An investor chooses to buy a random selection of 100 of these stocks for his portfolio. What's the probability that the mean rate of return among these 100 stocks is greater than 8 percent?

The investor's portfolio can be thought of as a sample of stocks chosen from the population of stocks trading on the regional exchange. The first step to finding this probability is to compute the moments of the sampling distribution.

- **Compute the mean:** The mean of the sampling distribution equals the population mean, so $\mu_{\bar{X}} = 10$ percent.
- **Determine the standard error:** This calculation is a little trickier because the standard error depends on the size of the sample relative to the size of the population. In this case, the sample size (n) is 100, while the population size (N) is 10,000.
So you first have to compute the sample size relative to the population size, like so:

$$\frac{n}{N} = \frac{100}{10{,}000} = 0.01 = 1\ \text{percent}$$

Because 1 percent is less than 5 percent, you don't use the finite population correction factor to compute the standard error. Note that in this case, the value of the finite population correction factor is

$$\sqrt{\frac{N - n}{N - 1}} = \sqrt{\frac{10{,}000 - 100}{10{,}000 - 1}} \approx 0.995$$

Because this value is so close to 1, using the finite population correction factor would have little or no impact on the resulting probabilities. And because the finite population correction factor isn't needed in this case, the standard error is computed as follows:

$$\sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}} = \frac{20}{\sqrt{100}} = 2$$

To determine the probability that the sample mean is greater than 8 percent, you must now convert the sample mean into a standard normal random variable using the following equation:

$$Z = \frac{\bar{X} - \mu_{\bar{X}}}{\sigma_{\bar{X}}}$$

Substituting these values into the previous expression gives

$$Z = \frac{8 - 10}{2} = -1$$

You can calculate this probability by using the properties of the standard normal distribution along with a standard normal table such as this one.

**Standard Normal Table — Negative Values**

| Z | 0.00 | 0.01 | 0.02 | 0.03 |
|------|--------|--------|--------|--------|
| –1.3 | 0.0968 | 0.0951 | 0.0934 | 0.0918 |
| –1.2 | 0.1151 | 0.1131 | 0.1112 | 0.1093 |
| –1.1 | 0.1357 | 0.1335 | 0.1314 | 0.1292 |
| –1.0 | 0.1587 | 0.1562 | 0.1539 | 0.1515 |

The table shows the probability that a standard normal random variable (designated Z) is less than or equal to a specific value. For example, you can write the probability that Z is less than or equal to –1 (one standard deviation below the mean) as $P(Z \le -1)$. You find the probability from the table with these steps:

1. Locate the first digit before and after the decimal point (–1.0) in the first (Z) column.
2. Find the second digit after the decimal point (0.00) in the second (0.00) column.
3. See where the row and column intersect to find the probability: $P(Z \le -1) = 0.1587$.

Because you're actually looking for the probability that Z is greater than or equal to –1, one more step is required.
Due to the symmetry of the standard normal distribution, the probability that Z is greater than or equal to a negative value equals one minus the probability that Z is less than or equal to the same negative value. For example,

$$P(Z \ge -2) = 1 - P(Z \le -2)$$

This is because $(Z \ge -2)$ and $(Z \le -2)$ are complementary events: Z must either be greater than or equal to –2 or less than or equal to –2. Therefore,

$$P(Z \ge -2) + P(Z \le -2) = 1$$

This is true because the occurrence of one of these events is certain, and the probability of a certain event is 1. After algebraically rewriting this equation, you end up with the result above. For the portfolio example,

$$P(Z \ge -1) = 1 - P(Z \le -1) = 1 - 0.1587 = 0.8413$$

The result shows that there's an 84.13 percent chance that the investor's portfolio will have a mean return greater than 8 percent.
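The whole portfolio calculation can be checked with Python's standard library, where `statistics.NormalDist` plays the role of the standard normal table:

```python
from math import sqrt
from statistics import NormalDist

mu, sigma = 10.0, 20.0      # population mean and std. dev. (percent per year)
n, N = 100, 10_000          # sample size and population size

# n/N = 1%, which is under the 5% rule, so the finite population
# correction factor is skipped.
std_error = sigma / sqrt(n)             # 20 / 10 = 2

z = (8.0 - mu) / std_error              # (8 - 10) / 2 = -1.0
prob = 1 - NormalDist().cdf(z)          # P(Z >= -1)
print(z, round(prob, 4))                # -1.0 0.8413
```

The printed probability, 0.8413, matches the table lookup: an 84.13 percent chance that the mean return exceeds 8 percent.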

Cheat Sheet / Updated 03-08-2022

When performing the many types of computations found in Finite Math topics, it’s helpful to have some numbers, notations, distributions, and listings right at hand.

Cheat Sheet / Updated 02-16-2022

If you're looking at a business with an interest in investing in it, you need to read its financial reports. Of course, when it comes to the annual report, you don't need to read everything, just the key parts. Combining the annual report with some of the financial reports a corporation files with the Securities and Exchange Commission (SEC) can help you figure profitability and liquidity ratios and get a better sense of cash flow. Keep this handy Cheat Sheet nearby for a quick reference to reading financial reports, including SEC reports, profitability ratios, liquidity ratios, and cash flow formulas.

Cheat Sheet / Updated 01-31-2022

Statistics make it possible to analyze real-world business problems with actual data so that you can determine if a marketing strategy is really working, how much a company should charge for its products, or any of a million other practical questions. The science of statistics uses regression analysis, hypothesis testing, sampling distributions, and more to ensure accurate data analysis.

Article / Updated 06-20-2019

As with many areas and topics in finite mathematics, a very special and specific vocabulary goes along with game theory. Here are some important and useful terms that you should know:

- **Payoff matrix:** A matrix whose elements represent all the amounts won or lost by the row player.
- **Payoff:** An amount appearing as an element in the payoff matrix, which indicates the amount gained or lost by the row player.
- **Saddle point:** The element in a payoff matrix that is the smallest in its row and, at the same time, the largest in its column. Not all matrices have saddle points.
- **Strictly determined game:** A game that has a saddle point.
- **Strategy:** A move or moves chosen by a player.
- **Optimal strategy:** The strategy that most benefits a player.
- **Value (expected value) of a game:** The amount representing the result when the best possible strategy is played by each player.
- **Zero-sum game:** A game in which what one player wins, the other loses; no money comes in from the outside or leaves.
- **Fair game:** A game with a value of 0.
- **Pure strategy:** A strategy in which a player always chooses the same row or column.
- **Mixed strategy:** A strategy in which a player changes the choice of row or column on different plays or turns.
- **Dominated strategy:** A strategy that is never considered because another play is always better. For the row player, a row is dominated by another row whose corresponding elements are all larger. For the column player, a column is dominated by another column whose corresponding elements are all smaller.
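The saddle-point definition above translates directly into code: scan each element and check whether it is the minimum of its row and the maximum of its column. The payoff matrix below is a made-up example, not one from the book.

```python
# Hypothetical payoff matrix: rows are the row player's strategies,
# columns are the column player's strategies.
payoff = [
    [3, 1, 4],
    [2, 0, 1],
]

def find_saddle_points(m):
    """Return (row, col) positions that are the smallest element in
    their row and, at the same time, the largest in their column."""
    points = []
    for i, row in enumerate(m):
        for j, v in enumerate(row):
            col = [m[r][j] for r in range(len(m))]
            if v == min(row) and v == max(col):
                points.append((i, j))
    return points

print(find_saddle_points(payoff))   # [(0, 1)]
```

Here the element 1 in row 0, column 1 is the row minimum and the column maximum, so this game is strictly determined with value 1.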

Article / Updated 07-30-2018

When you encounter a matrix problem in finite math, a nice way to illustrate the transition from one state to another is to use a transition diagram. The different states are represented by circles, and the probability of going from one state to another is shown by using curves with arrows.

The transition diagram in the following figure shows how an insurance company classifies its drivers: no accidents, one accident, or two or more accidents. This information could help the company determine the insurance premium rates. You see that 80% of the drivers who haven't had an accident aren't expected to have an accident the next year. Fifteen percent of those drivers have one accident, and 5% have two or more accidents. Seventy percent of those who have had one accident aren't expected to have an accident the next year but have to stay in the one-accident classification. And those in the two-or-more-accident class have to stay there.

To create a transition matrix representing the drivers, use the percentages to show going from one state to another. What is the long-term expectation for these drivers? First, let the transition matrix be D, and compute some of the powers of D. At the end of ten years, using the drivers in the initial study, what this tells the insurance company is that, in ten years, about 11% of the original no-accident drivers will still not have had an accident. Only 3% of the one-accident drivers will still have had only that one accident.

This situation doesn't allow for the drivers to move back or earn forgiveness; a one-accident driver can't become a no-accident driver under this model. Of course, different insurance agencies have different policies, putting drivers in better standing after a set number of accident-free years. And new policyholders are added to make this picture rosier. This just shows the pattern for a particular set of drivers after a certain number of years.
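The ten-year projection can be reproduced with NumPy. One assumption is needed that the article doesn't state outright: the 30% of one-accident drivers who do have another accident move to the two-or-more class, so each row sums to 1.

```python
import numpy as np

# Transition matrix from the article; states are
# [no accidents, one accident, two or more accidents].
D = np.array([
    [0.80, 0.15, 0.05],   # no-accident drivers
    [0.00, 0.70, 0.30],   # one-accident drivers (assumed: rest move to 2+)
    [0.00, 0.00, 1.00],   # two-or-more is an absorbing state
])

D10 = np.linalg.matrix_power(D, 10)   # ten one-year transitions
print(round(D10[0, 0], 2))            # 0.11: still accident-free
print(round(D10[1, 1], 2))            # 0.03: still only one accident
```

Because the matrix is upper triangular, the diagonal entries of the tenth power are just $0.8^{10} \approx 0.11$ and $0.7^{10} \approx 0.03$, matching the figures in the article.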

Article / Updated 07-30-2018

If your finite math instructor asks you to predict the likelihood of an action repeating over time, you may need to use a transition matrix. A transition matrix is a square matrix that gives the probabilities of moving from one state to another. With a transition matrix, you can perform matrix multiplication, determine trends (if there are any), and make predictions.

Consider the table showing the purchasing patterns involving different cereals. You see all the percentages showing the probability of going from one state to another, but which of the cereals does the consumer actually end up buying most frequently in the long run?

One way to look at continued purchasing is to create a tree diagram. In the following figure, you see two consecutive "rounds" of purchases. If you want the probability that the consumer purchases Kicks first, tries it again or something else, and then purchases Kicks the next time, add up the three corresponding branches of the tree: about 38% of the time. If you want the probability that the consumer purchases Cheery A's first, tries something else or repeats Cheery A's, and then tries Corn Flecks, add up those three branches: almost 26% of the time.

The tree is helpful in that it shows you what the choices are and how the percentages work in determining patterns, but there's a much easier and neater way to compute these values. To perform computations and study this further, create a transition matrix, referring back to the chart showing purchases and using the decimal values of the percentages. Name it matrix C. Next, use matrix multiplication to find C².

As a quick hint, when multiplying matrices, you find the element in the first row, first column of the product, labeled c₁₁, by multiplying the elements in the first row of the first matrix by the corresponding elements in the first column of the second matrix and then adding up the products.
In a matrix A, the element in the nth row, kth column is labeled aₙₖ. The element in the first row and second column of the product, c₁₂, uses the elements in the first row of the first matrix and the second column of the second matrix, and so on for the rest of the elements. So, you take the first row of the left matrix times the first column of the second matrix. This is the same computation as was done using the tree to find the probability that a consumer starting with Kicks would return to it in two more purchases.

Continuing this multiplication process, by the time C⁶ appears (the chances of buying a particular cereal at the fifth purchase after the initial purchase), a pattern emerges: the numbers in each column round to the same three decimal places. This becomes even clearer with higher powers of C, until some nth power settles down completely. The matrix shows you the pattern or trend. No matter which cereal the consumer bought first, in the long run there's a 35.3% chance that she'll purchase Kicks, a 38.4% chance that she'll purchase Cheery A's, and a 26.3% chance that she'll purchase Corn Flecks. This transition matrix has reached an equilibrium, where it won't change with more repeated multiplication. You can write this situation with a single-line matrix: [0.353 0.384 0.263].
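The convergence-to-equilibrium behavior is easy to demonstrate in code. The article's cereal chart isn't reproduced here, so the transition matrix below is hypothetical (each row sums to 1); the point is only that a high power of any such well-behaved matrix has every row approach the same long-run distribution.

```python
import numpy as np

# Hypothetical 3x3 transition matrix; rows are the current purchase,
# columns the next purchase, and each row sums to 1.
C = np.array([
    [0.5, 0.3, 0.2],
    [0.3, 0.5, 0.2],
    [0.2, 0.4, 0.4],
])

C_high = np.linalg.matrix_power(C, 50)   # many repeated purchases

# At equilibrium, every row is (approximately) the same distribution:
# the starting cereal no longer matters.
print(np.round(C_high, 3))
print(np.allclose(C_high[0], C_high[1]))   # True
```

Raising the matrix to a high enough power reproduces the pattern described above: identical rows that give the long-run share for each cereal.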

Article / Updated 07-30-2018

On a finite math exam, you may be asked to analyze an argument with a visual approach using an Euler diagram. This pictorial technique is used to check whether an argument is valid. An argument can be classified as either valid or invalid. A valid argument is one in which, if the premises are true, then the conclusion must also be true. And an argument can be valid even if its conclusion is false.

The following argument has two premises: (1) "All dogs have fleas." (2) "Hank is a dog." The conclusion is that, therefore, Hank has fleas. These arguments usually have the following format, with the premises listed first and the conclusion under a horizontal line:

> All dogs have fleas.
> Hank is a dog.
> __________
> Therefore, Hank has fleas.

Using an Euler diagram to analyze this argument, draw a circle to contain all objects that have fleas. Inside that circle, put another circle to contain all dogs. And inside the circle of dogs, put Hank. The figure illustrates this approach. The conclusion isn't necessarily true, because you know that not all dogs have fleas. All this shows is that the argument is valid: if the two premises are true, then the conclusion must be true.

Now consider an argument involving rectangles and triangles. A polygon is a figure made up of line segments connected at their endpoints. When analyzing the validity of this argument, the Euler diagram starts with a circle containing all polygons, as shown here. Two circles are drawn inside the larger circle, one containing rectangles and the other triangles. The two circles don't overlap, because rectangles have four sides and triangles have three sides. The argument is invalid. Rectangles are not triangles, not even sometimes.

Arguments can have more than two premises. One Euler diagram that can represent this situation has three intersecting circles, as shown here. As you can see from the diagram, there can be presidents born in Kentucky who were not lawyers in Illinois, and there can be presidents who were lawyers in Illinois but not born in Kentucky.
The argument is invalid. To be valid, the conclusion would have to be true in every case in which the premises are true.
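The nested-circles reasoning for the dog argument can be modeled with Python sets, where "inside the circle" becomes "is a subset of" or "is a member of." The sets below are illustrative stand-ins for the circles in the diagram.

```python
# Illustrative model of the Euler diagram: the dogs circle sits
# entirely inside the things-with-fleas circle, and Hank sits
# inside the dogs circle.
dogs = {"Hank", "Rex"}
things_with_fleas = {"Hank", "Rex", "a stray cat"}

premise1 = dogs <= things_with_fleas     # all dogs have fleas
premise2 = "Hank" in dogs                # Hank is a dog
conclusion = "Hank" in things_with_fleas # Hank has fleas

# In any model where both premises hold, the conclusion must hold:
print(premise1 and premise2)   # True
print(conclusion)              # True
```

If instead the two circles don't overlap, as with rectangles and triangles, no element of one set can belong to the other, which is why that argument fails.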

Article / Updated 07-30-2018

If your finite math instructor asks you to analyze a compound statement, you can try using a truth table. Not every topic in a discussion can be turned into a compound statement and analyzed for its truth that way, but using logic and truth values is a good technique when possible.

Consider the compound statement (p ∨ ~q) ∧ ~p. When constructing a truth table, you start with the basic p and q columns. Then you add a ~q column, followed by a (p ∨ ~q) column. Before you can perform the conjunction, ∧, you need a ~p column. Here's a step-by-step procedure:

1. Start with basic p and q columns, and then add ~q.
2. Add the (p ∨ ~q) column, performing the disjunction on the first and third columns. Remember, with disjunctions, the statement is false only when both component statements are false.
3. Add the ~p column.
4. Add the (p ∨ ~q) ∧ ~p column, which shows the conjunction of the fourth and fifth columns. The conjunction is true only when the two component statements are true.

This compound statement is true only when both original statements are false.
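The column-by-column procedure above can be sketched as a short loop that builds each intermediate column for every combination of truth values:

```python
from itertools import product

# Truth table for (p OR NOT q) AND NOT p, built column by column.
rows = []
for p, q in product([True, False], repeat=2):
    not_q = not q
    disj = p or not_q        # p ∨ ~q (false only when both parts are false)
    not_p = not p
    conj = disj and not_p    # (p ∨ ~q) ∧ ~p (true only when both parts are true)
    rows.append((p, q, conj))

for p, q, result in rows:
    print(p, q, result)
```

Running this confirms the conclusion: the compound statement comes out true only in the row where p and q are both false.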
