Type I vs. Type II Error
Question: Explain Type I and Type II errors and the trade-offs between them.
A Type I error (false positive) occurs when we reject the null hypothesis even though it is actually true. A Type II error (false negative) occurs when we fail to reject the null hypothesis even though the alternative is true.
Reducing α (the Type I error rate) makes it harder to detect a real effect, which increases β (the Type II error rate). Thus, there is a trade-off between minimizing false positives and minimizing false negatives.
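A minimal Monte Carlo sketch of this trade-off (the one-sample t-test, sample size n=30, and true effect of 0.5 under H1 are illustrative assumptions, not from the original):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect, trials = 30, 0.5, 10_000  # assumed sample size and true effect under H1

def rejection_rate(true_mean, alpha):
    """Fraction of simulated experiments where a two-sided one-sample
    t-test of H0: mu = 0 rejects at the given alpha."""
    samples = rng.normal(loc=true_mean, scale=1.0, size=(trials, n))
    _, p = stats.ttest_1samp(samples, popmean=0.0, axis=1)
    return float(np.mean(p < alpha))

for alpha in (0.05, 0.01):
    type_i = rejection_rate(true_mean=0.0, alpha=alpha)          # H0 true: any rejection is a false positive
    type_ii = 1 - rejection_rate(true_mean=effect, alpha=alpha)  # H1 true: any non-rejection is a false negative
    print(f"alpha={alpha}: Type I rate ~{type_i:.3f}, Type II rate ~{type_ii:.3f}")
```

In this particular setup, tightening alpha from 0.05 to 0.01 drives the simulated false-positive rate down from roughly 5% to 1%, but the false-negative rate roughly doubles (from about 0.24 to about 0.51).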
Context determines which error to minimize. For example, airport security prioritizes avoiding false negatives (missing threats), while the justice system aims to avoid false positives (wrongly convicting the innocent). In tech, Meta might prioritize avoiding false negatives in harmful-content detection, while avoiding false positives may matter more in ad targeting.
The Type I error rate (typically denoted α) is the probability of mistakenly rejecting a true null hypothesis (i.e., we conclude that something significant is happening when nothing is going on). The Type II error rate (typically denoted β) is the probability of failing to reject a false null hypothesis (i.e., we conclude that nothing is going on when something significant is happening).
The difference is that a Type I error is a false positive and a Type II error is a false negative. The trade-off is that decreasing the probability of one type of error increases the probability of the other, because the threshold for rejecting the null hypothesis affects both errors simultaneously. For example, decreasing α makes it harder to reject the null hypothesis (lowering the Type I error rate) but thereby increases the Type II error rate, because you are less likely to detect a real effect.
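This relationship can also be computed analytically rather than simulated. A short sketch for a one-sided z-test with known variance (the sample size, sigma, and effect size below are assumed for illustration):

```python
from math import sqrt
from scipy.stats import norm

n, sigma, effect = 30, 1.0, 0.5  # assumed design: known sigma, true shift under H1

for alpha in (0.10, 0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)                        # rejection threshold under H0
    beta = norm.cdf(z_crit - effect * sqrt(n) / sigma)  # P(statistic below threshold | H1 true)
    print(f"alpha={alpha:.2f} -> beta={beta:.3f} (power={1 - beta:.3f})")
```

Each step down in α pushes the rejection threshold further out, so β rises and power falls, which is the trade-off stated above in closed form.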