The evidence in the trial is your data and the statistics that go along with it. All hypothesis tests ultimately use a p-value to weigh the strength of the evidence (what the data are telling you about the population). The p-value is a number between 0 and 1 and is interpreted in the following way:
 A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject it.
 A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject it.
 P-values very close to the cutoff (0.05) are considered marginal (the decision could go either way). Always report the p-value so your readers can draw their own conclusions.
How to find a p-value from a test statistic
When you test a hypothesis about a population, you use your test statistic to find a p-value and decide whether to reject the null hypothesis. To find a p-value from a test statistic, you reference a Z-table, look up your test statistic on it, and determine its corresponding probability. A p-value chart can be useful for visually interpreting the strength of the evidence against the null hypothesis in your study.
The following figure shows the locations of a test statistic and the corresponding conclusions.
Note that if the alternative hypothesis is the less-than alternative, you reject H_{0} only if the test statistic falls in the left tail of the distribution (below –2). Similarly, if H_{a} is the greater-than alternative, you reject H_{0} only if the test statistic falls in the right tail (above 2).
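If you'd rather compute the rejection cutoffs than read them off a figure, the inverse of the standard normal CDF gives them directly. Here's a minimal sketch using Python's standard-library `statistics.NormalDist` (the snippet is illustrative and not part of the original text; the α = 0.05 cutoffs it prints are the standard ones):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

# One-tailed critical values at alpha = 0.05:
print(round(z.inv_cdf(0.05), 3))   # -1.645 (reject H0 for a less-than alternative)
print(round(z.inv_cdf(0.95), 3))   # 1.645 (reject H0 for a greater-than alternative)

# A two-tailed test at alpha = 0.05 splits alpha across both tails:
print(round(z.inv_cdf(0.025), 3))  # -1.96 (reject H0 beyond +/- 1.96)
```

The figure's ±2 cutoffs are round numbers; exact cutoffs depend on the significance level you choose.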
To find a p-value with a test statistic:

Look up your test statistic on the appropriate distribution — in this case, on the standard normal (Z) distribution in the p-value charts (called Z-tables) below.

Find the probability that Z is beyond (more extreme than) your test statistic:

If H_{a} contains a less-than alternative, find the probability that Z is less than your test statistic (that is, look up your test statistic on the Z-table and find its corresponding probability). This is the p-value. (Note: In this case, your test statistic is usually negative.)

If H_{a} contains a greater-than alternative, find the probability that Z is greater than your test statistic (look up your test statistic on the Z-table, find its corresponding probability, and subtract it from one). The result is your p-value. (Note: In this case, your test statistic is usually positive.)

If H_{a} contains a not-equal-to alternative, find the probability that Z is beyond your test statistic and double it. There are two cases:
If your test statistic is negative, first find the probability that Z is less than your test statistic (look up your test statistic on the Z-table and find its corresponding probability). Then double this probability to get the p-value.
If your test statistic is positive, first find the probability that Z is greater than your test statistic (look up your test statistic on the Z-table, find its corresponding probability, and subtract it from one). Then double this result to get the p-value.

You might ask, "Why double the probabilities if your H_{a} contains a not-equal-to alternative?" Think of the not-equal-to alternative as the combination of the greater-than alternative and the less-than alternative. If you’ve got a positive test statistic, its p-value only accounts for the greater-than portion of the not-equal-to alternative; double it to account for the less-than portion. (Doubling one tail's p-value is possible because the Z-distribution is symmetric.)
Similarly, if you’ve got a negative test statistic, its p-value only accounts for the less-than portion of the not-equal-to alternative; double it to also account for the greater-than portion.
For example, when testing H_{0}: p = 0.25 versus H_{a}: p < 0.25, the p-value turns out to be 0.1056. This is because the test statistic was –1.25, and when you look this number up on the Z-table (in the appendix) you find a probability of 0.1056 of being less than this value. If you had been testing the two-sided alternative, H_{a}: p ≠ 0.25, the p-value would be 2 × 0.1056, or 0.2112.
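The table lookups above can also be done numerically. Here's a minimal sketch in Python; the helper names `normal_cdf` and `p_value` are my own, and the normal CDF is computed from the error function rather than read from a table:

```python
import math

def normal_cdf(z):
    """P(Z <= z) for the standard normal, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_value(test_statistic, alternative):
    """p-value for a Z test statistic; alternative is 'less', 'greater', or 'two-sided'."""
    if alternative == "less":
        return normal_cdf(test_statistic)          # left-tail probability
    if alternative == "greater":
        return 1 - normal_cdf(test_statistic)      # right-tail probability
    if alternative == "two-sided":
        # Double the tail beyond |z|; valid because Z is symmetric.
        return 2 * (1 - normal_cdf(abs(test_statistic)))
    raise ValueError("alternative must be 'less', 'greater', or 'two-sided'")

print(round(p_value(-1.25, "less"), 4))       # 0.1056, matching the example above
print(round(p_value(-1.25, "two-sided"), 4))  # 0.2113 (the table's 2 x 0.1056 = 0.2112 differs by rounding)
```

Note that the exact two-sided value is about 0.2113; the 0.2112 in the worked example comes from doubling the already-rounded table entry.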
If the results are likely to have occurred under the claim, then you fail to reject H_{0} (like a jury decides not guilty). If the results are unlikely to have occurred under the claim, then you reject H_{0} (like a jury decides guilty). The cutoff point between rejecting H_{0} and failing to reject H_{0} is another whole can of worms that I dissect in the next section (no pun intended).
Making Conclusions
To draw conclusions about H_{0} (reject or fail to reject) based on a p-value, you need to set a predetermined cutoff point where only those p-values less than or equal to the cutoff will result in rejecting H_{0}. This cutoff point is called the alpha level (α), or significance level for the test.
While 0.05 is a very popular cutoff value for rejecting H_{0}, cutoff points and resulting decisions can vary — some people use stricter cutoffs, such as 0.01, requiring more evidence before rejecting H_{0}, and others may use less strict cutoffs, such as 0.10, requiring less evidence.
If H_{0} is rejected (that is, the p-value is less than or equal to the predetermined significance level), the researcher can say they've found a statistically significant result. A result is statistically significant if it’s too rare to have occurred by chance assuming H_{0} is true. If you get a statistically significant result, you have enough evidence to reject the claim, H_{0}, and conclude that something different or new is in effect (that is, H_{a}).
The significance level can be thought of as the highest possible p-value that would reject H_{0} and declare the results statistically significant. Following are the general rules for making a decision about H_{0} based on a p-value:
 If the p-value is less than or equal to your significance level, then it meets your requirements for having enough evidence against H_{0}; you reject H_{0}.
 If the p-value is greater than your significance level, your data failed to show evidence beyond a reasonable doubt; you fail to reject H_{0}.
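The two rules above amount to a one-line decision function. A minimal sketch (the function name `decide` is my own):

```python
def decide(p_value, alpha=0.05):
    """Decision rule: reject H0 when the p-value is at or below the significance level."""
    return "reject H0" if p_value <= alpha else "fail to reject H0"

# The same p-value can yield different decisions at different significance levels.
print(decide(0.026, alpha=0.05))  # reject H0
print(decide(0.026, alpha=0.01))  # fail to reject H0
```

Note the rule is "less than or equal to": a p-value of exactly 0.05 rejects at the 0.05 level, even though most practitioners would call such a result marginal.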
You may wonder whether it’s okay to say “Accept H_{0}” instead of “Fail to reject H_{0}.” The answer is a big no.
In a hypothesis test, you are not trying to show whether or not H_{0} is true (which accept implies) — indeed, if you knew whether H_{0} was true, you wouldn’t be doing the hypothesis test in the first place. You’re trying to show whether you have enough evidence to say H_{0} is false, based on your data. Either you have enough evidence to say it’s false (in which case you reject H_{0}) or you don’t have enough evidence to say it’s false (in which case you fail to reject H_{0}).
Setting boundaries for rejecting H_{0}
These guidelines help you make a decision (reject or fail to reject H_{0}) based on a p-value when your significance level is 0.05:
 If the p-value is less than 0.01 (very small), the results are considered highly statistically significant — reject H_{0}.
 If the p-value is between 0.01 and 0.05 (but not super close to 0.05), the results are considered statistically significant — reject H_{0}.
 If the p-value is really close to 0.05 (like 0.051 or 0.049), the results should be considered marginally significant — the decision could go either way.
 If the p-value is greater than (but not super close to) 0.05, the results are considered nonsignificant — you fail to reject H_{0}.
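The guidelines above can be sketched as a small classifier. The "really close to 0.05" band isn't precisely defined in the text, so the `margin` of 0.005 below is my own assumption, chosen to match examples like 0.049 and 0.051:

```python
def describe(p, margin=0.005):
    """Rough label for a p-value at the 0.05 level.

    margin is an assumed width for the 'marginal' band around 0.05;
    the text gives examples like 0.049 and 0.051 but no exact cutoff.
    """
    if abs(p - 0.05) <= margin:
        return "marginally significant"   # could go either way
    if p < 0.01:
        return "highly statistically significant"
    if p <= 0.05:
        return "statistically significant"
    return "nonsignificant"

print(describe(0.003))  # highly statistically significant
print(describe(0.049))  # marginally significant
print(describe(0.2))    # nonsignificant
```

Checking the marginal band first keeps a value like 0.049 from being labeled a clear rejection.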
When you hear a researcher say their results are statistically significant, look for the p-value and make your own decision; the researcher’s predetermined significance level may be different from yours. If the p-value isn’t stated, ask for it.
Testing varicose veins
As an example of making a decision on whether to reject an H_{0}, suppose there's a claim that 25 percent of all women in the U.S. have varicose veins, and the p-value was found to be 0.1056. This p-value is fairly large and indicates very weak evidence against H_{0} by almost anyone’s standards, because it’s greater than 0.05 and even slightly greater than 0.10 (considered to be a very large significance level). In this case you fail to reject H_{0}.
You didn’t have enough evidence to say the proportion of women with varicose veins is less than 0.25 (your alternative hypothesis). This isn’t declared to be a statistically significant result.
But say your p-value had been something like 0.026. A reader with a personal cutoff point of 0.05 would reject H_{0} in this case because the p-value (of 0.026) is less than 0.05. The reader's conclusion would be that the proportion of women with varicose veins isn’t 0.25; according to H_{a} in this case, you conclude it’s less than 0.25, and the results are statistically significant.
However, a reader whose significance level is 0.01 wouldn’t have enough evidence (based on your sample) to reject H_{0}, because the p-value of 0.026 is greater than 0.01. These results wouldn’t be statistically significant.
Finally, if the p-value turned out to be 0.049 and your significance level is 0.05, you can go by the book and say that because it’s less than 0.05 you reject H_{0}, but you really should say your results are marginal and let the reader decide.
This process illuminates the critical role of statistical analysis in deciphering intricate health studies and making informed decisions. The careful assessment of significance levels further emphasizes the value of context, demonstrating how varying criteria can lead to different interpretations of the same data.