The Meaning of the "p Value" from a Test
The end result of a statistical significance test is a p value, which represents the probability that random fluctuations alone could have generated results that differed from the null hypothesis (H0), in the direction of the alternate hypothesis (HAlt), by at least as much as what you observed in your data.
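This definition can be made concrete with a small, hypothetical example (not from the text): suppose you flip a coin 100 times and get 60 heads, with H0 saying the coin is fair and HAlt saying it's biased toward heads. The p value is then the probability, assuming H0 is true, of getting 60 or more heads. The sketch below computes that probability exactly from the binomial distribution using only the Python standard library.

```python
from math import comb

# Hypothetical scenario: 100 flips of a supposedly fair coin yield 60 heads.
# H0: the coin is fair (P(heads) = 0.5); HAlt: the coin favors heads.
n, observed_heads = 100, 60

# p value: the probability, under H0, of a result at least as extreme as
# the one observed, in the direction of HAlt (60 heads or more).
p_value = sum(comb(n, k) for k in range(observed_heads, n + 1)) / 2**n
print(round(p_value, 4))  # about 0.0284
```

Because this p value is below 0.05, random fluctuation alone is an implausible explanation for seeing that many heads from a fair coin.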
If this probability is small enough, random fluctuation becomes an implausible explanation for your results, and you're justified in rejecting H0 and accepting HAlt, which says that some real effect is present. You can then say that the effect seen in your data is statistically significant.
How small is too small for a p value? This determination is arbitrary; it depends on how much of a risk you're willing to take of being fooled by random fluctuations (that is, of making a Type I error). Over the years, the value of 0.05 has become accepted as a reasonable criterion for declaring significance.
If you adopt the criterion that p must be less than or equal to 0.05 to declare significance, then whenever H0 is actually true, you'll keep the chance of making a Type I error to no more than 5 percent.
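That 5 percent guarantee can be checked by simulation. The sketch below (a hypothetical illustration, not from the text) repeatedly generates a test statistic from a world where H0 is true, computes a two-sided p value for each, and counts how often p falls at or below 0.05; the false-positive fraction comes out close to 5 percent.

```python
import math
import random

def two_sided_p(z):
    """Two-sided p value for a standard normal test statistic z."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
trials = 100_000
false_positives = 0
for _ in range(trials):
    z = random.gauss(0, 1)        # H0 is true: the statistic is pure noise
    if two_sided_p(z) <= 0.05:    # yet the test declares "significant"
        false_positives += 1

print(false_positives / trials)   # close to 0.05
```

Lowering the cutoff (say, to 0.01) shrinks the false-positive fraction accordingly, at the cost of making real effects harder to detect.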