Per-comparison error rate
In statistics, the per-comparison error rate (PCER) is the probability of a Type I error (a false positive) in a single hypothesis test, in the absence of any correction for multiple comparisons.[1] When many hypotheses are tested, each at the same per-comparison level, some tests are expected to yield false positives by chance alone, so statisticians often apply procedures such as the Bonferroni correction or false discovery rate control to limit the probability that true null hypotheses are incorrectly rejected.
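The following is a minimal sketch, not from the original article, illustrating the point numerically: running many tests at an uncorrected per-comparison level inflates the chance of at least one false positive, while a Bonferroni correction keeps it near the nominal level. The values of alpha and m and the assumption of independent tests are illustrative assumptions.

```python
# Illustrative sketch: per-comparison error rate vs. family-wise error rate.
# Assumes m independent tests with all null hypotheses true.

alpha = 0.05   # per-comparison significance level (assumed for illustration)
m = 20         # number of hypotheses tested (assumed for illustration)

# With no correction, each test has a Type I error probability of alpha,
# so the chance of at least one false positive is 1 - (1 - alpha)^m.
uncorrected_fwer = 1 - (1 - alpha) ** m

# Bonferroni correction: test each hypothesis at level alpha / m,
# which bounds the family-wise error rate at approximately alpha.
bonferroni_level = alpha / m
corrected_fwer = 1 - (1 - bonferroni_level) ** m

print(f"Per-comparison level:            {alpha:.4f}")
print(f"Uncorrected P(>=1 false pos.):   {uncorrected_fwer:.3f}")   # about 0.642
print(f"Bonferroni per-test level:       {bonferroni_level:.4f}")   # 0.0025
print(f"Corrected P(>=1 false pos.):     {corrected_fwer:.3f}")     # about 0.049
```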
References