Statistical bias is the systematic favoritism of certain individuals or certain responses in a study. Bias is the nemesis of statisticians, and they do everything they can to avoid it. Want an example of bias? Say you're conducting a phone survey on job satisfaction of Americans. If you call people at home during the day, between 9 a.m. and 5 p.m., you miss everyone who works a day shift. If day workers tend to be more (or less) satisfied than night workers, your results will systematically misrepresent American workers as a whole, no matter how many people you call.
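A quick simulation can make this concrete. The sketch below uses made-up numbers (the satisfaction scores, the day/night split, and the sample size are all hypothetical, chosen only for illustration): day workers can't answer a daytime call, so the survey reaches only night workers, and the estimate comes out systematically off from the true population average.

```python
import random

random.seed(0)

# Hypothetical population: 70% day workers, 30% night workers,
# with day workers somewhat more satisfied (scores on a rough 1-10 scale).
# All of these numbers are invented purely to illustrate selection bias.
population = (
    [("day", random.gauss(7.5, 1.0)) for _ in range(7000)]
    + [("night", random.gauss(6.0, 1.0)) for _ in range(3000)]
)

true_mean = sum(score for _, score in population) / len(population)

# Daytime phone survey: day workers are at work and never pick up,
# so the only people reachable are night workers.
reachable = [score for shift, score in population if shift == "night"]
sample = random.sample(reachable, 500)
survey_estimate = sum(sample) / len(sample)

print(f"true mean satisfaction:  {true_mean:.2f}")
print(f"daytime-survey estimate: {survey_estimate:.2f}")
```

No matter how large the sample gets, the estimate stays low, because the problem is in *who* can be reached, not in how many people are called.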
You have to watch for bias when collecting survey data. For instance, some surveys are too long: what if someone stops answering questions halfway through? What if respondents give you misinformation and tell you they make $100,000 a year instead of $45,000? What if they give answers that aren't on your list of possible answers? A host of problems can occur when collecting survey data, and you need to be able to pinpoint them.
Experiments are sometimes even more challenging when it comes to bias and collecting data. Suppose you want to test blood pressure: what if the instrument you're using breaks during the experiment? What if someone quits the experiment halfway through? What if something happens during the experiment to distract the subjects or the researchers? What if the researchers can't find a vein when they have to draw blood exactly one hour after a dose of a drug is given? These are just some examples of what can go wrong in data collection for experiments, and you have to be ready to look for and find these problems so that they don't become systematic issues.