Alan Anderson

Alan Anderson, PhD, teaches finance, economics, statistics, and math at Fordham and Fairfield universities, as well as at Manhattanville and Purchase colleges. Outside the academic environment, he has many years of experience working as an economist, risk manager, and fixed income analyst. Alan received his PhD in economics from Fordham University and an M.S. in financial engineering from Polytechnic University.

Articles & Books From Alan Anderson

Cheat Sheet / Updated 03-10-2022
Summary statistical measures represent the key properties of a sample or population as a single numerical value. This has the advantage of providing important information in a very compact form; it also simplifies comparing multiple samples or populations. Summary statistical measures can be divided into three types: measures of central tendency, measures of dispersion, and measures of association.
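As a rough illustration of the three types, the short Python sketch below (NumPy assumed, with made-up numbers) computes one measure of each kind: the mean for central tendency, the standard deviation for dispersion, and the correlation coefficient for association.

import numpy as np

# Hypothetical sample data: advertising spend and sales (made-up numbers)
ad_spend = np.array([10, 12, 9, 15, 11], dtype=float)
sales = np.array([100, 115, 92, 140, 108], dtype=float)

# Measure of central tendency: the sample mean
print("Mean sales:", np.mean(sales))

# Measure of dispersion: the sample standard deviation (ddof=1 for a sample)
print("Standard deviation of sales:", np.std(sales, ddof=1))

# Measure of association: the correlation between ad spend and sales
print("Correlation:", np.corrcoef(ad_spend, sales)[0, 1])
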
Cheat Sheet / Updated 12-21-2023
Statistics makes it possible to analyze real-world business problems with actual data, so that you can determine whether a marketing strategy is really working, how much a company should charge for its products, or the answer to any of a million other practical questions. The science of statistics uses regression analysis, hypothesis testing, sampling distributions, and more to ensure accurate data analysis.
Article / Updated 03-26-2016
Healthcare is one area where big data has the potential to make dramatic improvements in the quality of life. The increasing availability of massive amounts of data and rapidly increasing computer power could enable researchers to make breakthroughs, such as the following:
- Predicting outbreaks of diseases
- Gaining a better understanding of the effectiveness and side effects of drugs
- Developing customized treatments based on patient histories
- Reducing the cost of developing new treatments
One of the biggest challenges facing the use of big data in healthcare is that much of the data is stored in independent "silos."
Article / Updated 03-26-2016
A statistic is said to be robust if it isn't strongly influenced by the presence of outliers. For example, the mean is not robust because it can be strongly affected by outliers, while the median is robust: it isn't affected by them. Suppose the following data represent a sample of household incomes in a small town (measured in thousands of dollars per year): 32, 47, 20, 25, 56. You compute the sample mean as the sum of the five observations divided by five: (32 + 47 + 20 + 25 + 56) / 5 = 180 / 5 = 36, so the sample mean is $36,000 per year.
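To see the difference concretely in code, here is a minimal Python sketch (NumPy assumed) that recomputes both statistics for the sample above after adding one made-up extreme outlier:

import numpy as np

incomes = np.array([32, 47, 20, 25, 56], dtype=float)  # thousands of dollars
print(np.mean(incomes), np.median(incomes))   # 36.0 and 32.0

# Add one extreme (made-up) outlier and recompute
with_outlier = np.append(incomes, 500.0)
print(np.mean(with_outlier))    # jumps to about 113.3: the mean is not robust
print(np.median(with_outlier))  # 39.5: the median barely moves
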
Article / Updated 03-26-2016
Random variables and probability distributions are two of the most important concepts in statistics. A random variable assigns unique numerical values to the outcomes of a random experiment, which is a process that generates uncertain outcomes. A probability distribution assigns a probability to each possible value of a random variable.
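As a minimal sketch of these two ideas, the Python snippet below uses a hypothetical experiment (tossing two fair coins), defines the random variable X as the number of heads, and builds the probability distribution that assigns a probability to each of its possible values:

from itertools import product

# Random experiment: toss two fair coins; each outcome is equally likely
outcomes = list(product(["H", "T"], repeat=2))

# Random variable X: the number of heads in an outcome
def X(outcome):
    return outcome.count("H")

# Probability distribution of X: P(X = x) for each possible value x
distribution = {}
for outcome in outcomes:
    x = X(outcome)
    distribution[x] = distribution.get(x, 0) + 1 / len(outcomes)

print(distribution)  # {2: 0.25, 1: 0.5, 0: 0.25}
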
Article / Updated 03-26-2016
A probability distribution is a formula or a table used to assign probabilities to each possible value of a random variable X. A probability distribution may be either discrete or continuous. A discrete distribution means that X can assume only a countable (usually finite) number of values, while a continuous distribution means that X can assume any of an uncountably infinite number of values.
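One way to see the distinction, sketched here with SciPy (assuming scipy.stats is available), is that a discrete distribution such as the binomial assigns probability to individual values through its probability mass function, while for a continuous distribution such as the normal, individual values have probability zero and you work with ranges instead:

from scipy.stats import binom, norm

# Discrete: P(X = 3) for a binomial with n = 10 trials, success probability 0.5
print(binom.pmf(3, n=10, p=0.5))    # about 0.117

# Continuous: individual points have probability zero, so use a range,
# e.g. P(-1 <= X <= 1) for a standard normal, via its CDF
print(norm.cdf(1) - norm.cdf(-1))   # about 0.683
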
Article / Updated 03-26-2016
Two of the most widely used measures of association are covariance and correlation. These measures are closely related to each other; in fact, you can think of correlation as a modified version of covariance. Correlation is easier to interpret because its value is always between –1 and 1: values close to –1 or 1 indicate a strong linear relationship, while a correlation of 0 indicates no linear relationship at all.
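The sketch below (NumPy assumed, with made-up data) computes both measures and confirms that the correlation equals the covariance divided by the product of the two standard deviations:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

cov = np.cov(x, y)[0, 1]            # sample covariance of x and y
corr = np.corrcoef(x, y)[0, 1]      # correlation, always between -1 and 1

# Correlation is covariance rescaled by the two standard deviations
print(corr)
print(cov / (np.std(x, ddof=1) * np.std(y, ddof=1)))
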
Article / Updated 03-26-2016
The probability distribution is one of many statistical techniques that can be used to analyze data to find useful patterns. You use a probability distribution to compute the probabilities associated with the elements of a dataset. For example, you would use the binomial distribution to analyze variables that can assume only one of two values.
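As a hypothetical example in Python (SciPy assumed; the numbers are made up), suppose each of 10 customers responds to an offer with probability 0.3. The binomial distribution gives the probability of any particular number of responses:

from scipy.stats import binom

n, p = 10, 0.3   # hypothetical values: 10 customers, 30% response rate

# Probability of exactly 4 responses
print(binom.pmf(4, n, p))    # about 0.200

# Probability of at most 4 responses
print(binom.cdf(4, n, p))    # about 0.850
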
Article / Updated 03-26-2016
Statistical software packages are extremely powerful these days, but they cannot overcome poor-quality data. Following is a checklist of things you need to do before you go off building statistical models. First on the list: check data formats. Your analysis always starts with a raw data file, and raw data files come in many different shapes and sizes.
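A first pass at checking formats might look something like the pandas sketch below (the file name and checks are illustrative assumptions, not a fixed recipe):

import pandas as pd

# Hypothetical raw data file
raw = pd.read_csv("raw_sales_data.csv")

print(raw.dtypes)        # confirm numeric fields were not read in as strings
print(raw.head())        # eyeball a few rows for stray formats, e.g. "$32,000"
print(raw.isna().sum())  # count missing values in each field
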
Article / Updated 03-26-2016
Most datasets come with some sort of metadata, which is essentially a description of the data in the file. Metadata typically includes descriptions of the formats, some indication of what values are in each data field, and what these values mean. When you are faced with a new dataset, never take the metadata at face value.
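One simple way to verify the metadata against the data itself is to compare the values that actually appear in a field with the values the metadata documents. The pandas sketch below does this for a hypothetical file and column; the names and codes are assumptions for illustration:

import pandas as pd

df = pd.read_csv("survey_responses.csv")   # hypothetical file

# Values the metadata says the 'region' field can contain (hypothetical codes)
documented = {"N", "S", "E", "W"}

# Values that actually appear in the data
actual = set(df["region"].dropna().unique())

print("Values present but not documented:", actual - documented)
print("Values documented but never seen:", documented - actual)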