Biased vs Unbiased Estimator
Question: Why might an unbiased estimator not always be preferred over a biased one?
In statistical estimation, an unbiased estimator is one whose expected value equals the true value of the parameter being estimated. However, unbiasedness alone does not guarantee that an estimator is the best choice. In practice, an unbiased estimator can have high variance, meaning its value fluctuates widely from sample to sample, which makes individual estimates noisy and unreliable, especially with small sample sizes.
By contrast, a biased estimator systematically overestimates or underestimates the true value, but if its variance is low, its estimates may typically land closer to the true value than those of an unbiased but highly variable alternative.
This is where the concept of mean squared error (MSE) comes in: MSE = Variance + Bias², where bias is the difference between the estimator's expected value and the true parameter. An estimator that accepts a small bias in exchange for much lower variance can have a lower MSE and thus be more desirable in practice.
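The classic illustration is estimating a population variance: dividing by n − 1 gives the unbiased estimator, while dividing by n gives a biased one that shrinks toward zero yet often has a lower MSE for small samples. A minimal simulation sketch (the sample size, distribution, and trial count below are illustrative choices, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0            # variance of N(0, 2^2)
n, trials = 10, 100_000

# Each row is one small sample of size n.
samples = rng.normal(loc=0.0, scale=2.0, size=(trials, n))

# Unbiased estimator: divide by n - 1 (ddof=1).
unbiased = samples.var(axis=1, ddof=1)
# Biased estimator: divide by n (ddof=0); shrinks toward zero.
biased = samples.var(axis=1, ddof=0)

def report(name, est):
    bias = est.mean() - true_var
    mse = np.mean((est - true_var) ** 2)
    print(f"{name}: bias={bias:+.3f}, sampling variance={est.var():.3f}, MSE={mse:.3f}")

report("unbiased (n-1)", unbiased)
report("biased   (n)  ", biased)
```

For normal data with n = 10, the divide-by-n estimator's negative bias is more than offset by its reduced variance, so its empirical MSE comes out lower, matching the MSE = Variance + Bias² decomposition.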
This trade-off is especially relevant in machine learning and applied statistics, where slightly biased models (e.g., regularized regressions) often outperform unbiased ones due to their stability and generalization ability. Therefore, in real-world settings, the overall accuracy and reliability of an estimator often outweigh the mathematical ideal of unbiasedness.
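The regularized-regression case can be sketched the same way: ordinary least squares is unbiased but high-variance when the number of predictors is close to the sample size, while ridge regression deliberately introduces shrinkage bias to cut variance. The dimensions, noise level, and ridge penalty below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_train, trials = 20, 25, 500
beta_true = rng.normal(size=p)

def fit(X, y, lam):
    # Closed-form ridge solution; lam = 0 recovers ordinary least squares.
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

err_ols, err_ridge = [], []
for _ in range(trials):
    X = rng.normal(size=(n_train, p))
    y = X @ beta_true + rng.normal(scale=3.0, size=n_train)
    # Squared error of the estimated coefficients against the truth.
    err_ols.append(np.sum((fit(X, y, 0.0) - beta_true) ** 2))
    err_ridge.append(np.sum((fit(X, y, 5.0) - beta_true) ** 2))

print(f"OLS   mean squared coefficient error: {np.mean(err_ols):.2f}")
print(f"Ridge mean squared coefficient error: {np.mean(err_ridge):.2f}")
```

With only 25 observations for 20 coefficients, the unbiased OLS fit is unstable, and the biased ridge fit recovers the coefficients with lower average squared error, which is exactly the stability-for-bias trade described above.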