The Bias-Variance Tradeoff
Core route · model intuition
Underfitting vs. overfitting as a complexity knob.
This explainer turns bias-variance decomposition into something you can feel. Adjust model complexity and watch training and validation errors pull apart, then bring them back together once the right level of flexibility becomes obvious.
At a glance
The bias-variance tradeoff is foundational but often abstract. This explainer makes the decomposition visible: watch the error components separate, then recombine as you tune complexity.
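For reference, the decomposition the explainer animates is the standard one for squared error at a point $x$, where $f$ is the true function, $\hat{f}$ the fitted model, and $\sigma^2$ the noise variance:

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```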
Start with an underfit model, then slowly increase complexity: training error keeps falling, but past a point validation error starts climbing. The gap between them is the tradeoff made tangible.
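The complexity sweep described above can be sketched as a toy polynomial-regression experiment (a minimal illustration, not the explainer's actual code; the target function, noise level, and degrees are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: noisy samples of a smooth target function.
def target(x):
    return np.sin(2 * np.pi * x)

x_train = rng.uniform(0.0, 1.0, 30)
y_train = target(x_train) + rng.normal(0.0, 0.3, 30)
x_val = rng.uniform(0.0, 1.0, 200)
y_val = target(x_val) + rng.normal(0.0, 0.3, 200)

results = {}  # degree -> (train MSE, validation MSE)
for degree in (1, 3, 9, 12):
    # The "complexity knob" here is polynomial degree.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    results[degree] = (train_mse, val_mse)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, val MSE {val_mse:.3f}")
```

Training error falls as the degree grows, while validation error bottoms out near the target's true complexity and then climbs; the widening gap is the overfitting regime the interactive lets you feel.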
About 10 to 15 minutes. Intermediate. This is the cleanest theory note to read before moving into Double Descent or more specialized evaluation routes.
Reading path
- Open the live interactive: https://kohnnn.github.io/interactive-explanation/bias-variance/
- Continue through Train, Test, and Validation Sets or Double Descent when you want the workflow and edge-case follow-ups
- Continue via Interactive or Visual Notes