DOI: https://doi.org/10.5281/zenodo.18669061
Prelude: The Offensiveness of Error
Error offends because it threatens status and predictability. In schooling and administration, it is treated as a moral weakness; in engineering, as a reliability risk; in public discourse, as a sign of untrustworthiness. Yet the demand for errorlessness is often a category mistake. A calculator should not err at addition. A scientific community should. A pilot should not confuse instruments. A research program should “confuse itself” regularly, because only surprise distinguishes discovery from repetition.
The ordinary language of error lumps together fundamentally different phenomena: a flawed inference, a noisy sensor, a taboo violation, a deviation from a social script, an exploratory move that fails, an outlier observation that eventually rewrites the theory. When we fail to distinguish these, we overcorrect; and overcorrection is a quiet route to stagnation.
“Without deviation from the norm, progress is not possible.”
— Frank Zappa
This article is written for two audiences at once. The first is the engineer who knows, instinctively, that feedback requires an error term and that control without discrepancy is blind. The second is the institutional mind that wishes for frictionless consensus and therefore treats nonconformity as malfunction. Both intuitions are partially correct. The work is to place them in the right ontology.
Definitions and a Working Taxonomy
Error, Mistake, Blunder, and Deviation
We will use four terms with disciplined intent.
Error is a discrepancy between a target (truth, specification, norm, or goal) and an outcome.
Mistake is an error attributable to a decision procedure (choice of model, plan, parameter, or rule).
Blunder is a mistake with avoidable negligence or gross mismatch between competence and act.
Deviation is simply difference—it may be an error or it may be signal.
The critical claim is that the set of deviations is larger than the set of errors, and the set of errors is larger than the set of mistakes. Some deviations are the first symptom that the target was misdescribed.
Five Types of “Error”
For analytical clarity, we classify common cases into five families.
Logical error: invalid inference, contradiction, or misuse of implication.
Empirical error: a claim about the world that fails under evidence.
Measurement and instrument error: noise, bias, drift, quantization, sampling artifacts.
Normative “error”: deviation from a social convention, protocol, or expectation (not necessarily false).
Productive deviation: an anomaly that exposes model insufficiency, hidden variables, or new phenomena.
We will later show that “productive deviation” is not a rhetorical flourish but a structural feature of learning systems: variation is the substrate of selection, and discrepancy is the substrate of control.
Conformity as Error Manufacture
The most famous laboratory demonstration of socially induced misperception is Asch’s conformity paradigm, in which individuals conform to an incorrect majority judgment on an easy perceptual task (Asch 1951, 1955). The immediate lesson is not that humans are stupid, but that perception is not a private instrument; it is a socially conditioned output. Conformity is therefore a generator of error in the strict sense: it increases discrepancy between judgment and reality.
This matters beyond psychology. In organizations, the majority opinion often becomes a proxy for truth. In scientific communities, reputational gradients can cause hypothesis lock-in. In bureaucracies, consensus can function as a legitimacy machine that suppresses inconvenient observations. Conformity does not merely correlate with error; it can produce it by changing the cost function of reporting what one sees.
Minority influence and epistemic rescue
If conformity manufactures error, dissent can manufacture correction. The classical finding in minority influence research is not that minorities always win, but that consistent minorities can shift the private processing style of the majority toward more systematic evaluation (Moscovici 1980). The point is structural: a minority position acts as a perturbation that prevents premature convergence.
The analogy to learning systems is tight. A group without dissent is like a model trained only to minimize local loss: it converges quickly and confidently, and fails catastrophically when the environment shifts.
Serendipity: When “Wrong” Opens the World
Science and engineering histories contain a recurring motif: the product was not sought, the result was not predicted, the anomaly was initially an error, and only later did it become a discovery. Accounts differ in detail, but the epistemic pattern is stable. Merton formalized this as the serendipity pattern: an unanticipated observation becomes strategically fruitful because it reveals an underlying, unrecognized structure (Merton and Barber 2004).
In this sense, some “errors” are a form of involuntary exploration. A system is probing the boundary of its model, and the boundary pushes back.
Contingency without sufficiency
One must be careful. Most accidents are merely waste. Serendipity is not a license to be sloppy; it is an argument for maintaining an interpretive posture toward anomalies. The same observation can be thrown away as noise or cultivated as a signal. The difference lies in disciplined curiosity: the willingness to ask, “what assumption did this violate?”
Error as Control Variable in Cybernetics and Engineering
In control theory, the error signal is not embarrassment; it is the fundamental variable that drives correction. Let \(r(t)\) be a reference trajectory and \(y(t)\) the measured output. The error is \[e(t)=r(t)-y(t).\] If \(e(t)\equiv 0\), then either the system is perfectly controlled or (more commonly) the measurement is lying, the reference is trivial, or the system is not interacting with an environment that can surprise it. In real systems, error is expected; the question is whether the feedback loop transforms error into stability.
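The loop just described can be sketched in a few lines. This is a minimal illustration, not a production controller: it assumes a first-order plant \(\dot{y} = -ay + bu\) with proportional control \(u = K_p e\), and all parameter values are illustrative.

```python
# Minimal sketch of a proportional feedback loop (illustrative parameters).
# Assumed plant: dy/dt = -a*y + b*u, integrated with a simple Euler step.

def simulate(r=1.0, a=1.0, b=1.0, Kp=4.0, dt=0.01, steps=1000):
    """Drive the output y toward the reference r using the error e = r - y."""
    y = 0.0
    for _ in range(steps):
        e = r - y                    # the error signal: discrepancy, not embarrassment
        u = Kp * e                   # correction is proportional to discrepancy
        y += dt * (-a * y + b * u)   # Euler step of the assumed plant dynamics
    return y, r - y

y, e = simulate()   # y settles near 0.8, leaving a residual error near 0.2
```

Note that the loop stabilizes without eliminating error: pure proportional control leaves a steady-state discrepancy of \(ra/(a + bK_p)\), which integral action would be needed to remove. The sketch thus makes the textual point concrete: a working regulator lives with error and uses it, rather than pretending it away.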
Wiener’s cybernetics made this explicit: goal-directed behavior requires feedback, and feedback requires discrepancy (Wiener 1948). Ashby sharpened the constraint: regulation requires variety sufficient to match disturbances—the Law of Requisite Variety (Ashby 1956). The regulator that cannot express alternative actions cannot reduce error; the organization that cannot tolerate dissent cannot correct itself.
Boundary failure
When organizations suppress error signals, they resemble unstable controllers that saturate or clip feedback. The resulting behavior is familiar: hidden drift, delayed recognition, and sudden collapse. Error signals do not disappear when ignored. They migrate into unmodeled channels.
Fallibility as a Condition for Learning
Bayesian updating and the necessity of surprise
A learning agent updates beliefs in proportion to prediction error. In Bayesian terms, evidence modifies priors through likelihood; in predictive processing language, the system minimizes prediction error through model revision and action. If observations never contradict predictions, no update occurs. A perfectly “right” system is epistemically inert because it never receives differential information.
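The inertia of a never-surprised system can be shown with a conjugate Beta-Bernoulli update. This is a toy sketch with hand-picked prior parameters: an open-minded prior moves under contradicting evidence, while a near-certain prior barely moves at all.

```python
# Sketch: Beta-Bernoulli updating under two priors (numbers illustrative).

def update(alpha, beta, obs):
    """Conjugate Bayesian update for a Bernoulli rate given obs in {0, 1}."""
    return alpha + obs, beta + (1 - obs)

diffuse = (1.0, 1.0)       # open-minded (uniform) prior
dogmatic = (1000.0, 1.0)   # near-certain prior: almost no surprise is possible
for obs in (0, 0, 0):      # three observations contradicting the dogmatic belief
    diffuse = update(*diffuse, obs)
    dogmatic = update(*dogmatic, obs)

diffuse_mean = diffuse[0] / sum(diffuse)     # 0.2: belief moved substantially
dogmatic_mean = dogmatic[0] / sum(dogmatic)  # ~0.996: belief barely moved
```

The asymmetry is the point: updating is proportional to how much the evidence can still surprise the prior. A prior that admits no surprise is, in the terms used above, epistemically inert.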
Exploration, exploitation, and productive failure
In reinforcement learning, the exploration–exploitation dilemma formalizes a deep truth: optimal long-run performance requires non-optimal short-run actions. Exploration looks like error locally. Globally, it is insurance against model misspecification and nonstationary environments (Sutton and Barto 2018). To forbid exploration is to demand that an agent behave as if it already knows the world. That demand is logically incoherent.
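The dilemma can be made concrete with an epsilon-greedy two-armed bandit, sketched below under illustrative parameters (arm reward rates, epsilon, and step count are all made up for the example). Exploration looks like error locally: the agent sometimes pulls the arm it currently believes is worse, to keep its value estimates honest.

```python
import random

# Sketch of epsilon-greedy action selection on a two-armed Bernoulli bandit.
# All parameters are illustrative.

def run(true_means=(0.3, 0.7), eps=0.1, steps=5000, seed=0):
    rng = random.Random(seed)
    counts = [0, 0]
    values = [0.0, 0.0]   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(2)                            # explore: locally "suboptimal"
        else:
            arm = max(range(2), key=lambda a: values[a])      # exploit current belief
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # prediction-error update
    return values, counts

values, counts = run()
```

Two features of the sketch mirror the text: the value update is literally driven by a prediction error term, and with `eps = 0` the agent would lock onto whichever arm first paid off, never discovering that the other arm is better.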
Machine Learning and the Myth of Error-Free Output
“To err is a cognitive invariant.”
— Yeralan
Public expectation often treats computational output as an oracle. When an AI system makes a mistake, observers infer untrustworthiness. But modern machine learning systems are, in important respects, approximation machines. They generalize by compressing; they predict by interpolating; they err by design because the world is not fully observed and the training distribution is finite.
Two distinctions matter.
Training error vs. generalization error: a model can achieve low training error by memorization and still fail in deployment.
Calibration vs. accuracy: a model may be accurate on average yet systematically overconfident or underconfident in its probabilities.
The “error-free AI” ideal therefore invites the wrong kind of trust: a trust in surface precision rather than in well-characterized limits. In safety-critical contexts, what we want is not perfection but known failure modes and measured uncertainty.
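The calibration-versus-accuracy distinction is easy to compute. The sketch below uses hand-made prediction data (all numbers are illustrative) and a deliberately crude calibration measure, the gap between mean stated confidence and realized accuracy; a positive gap indicates overconfidence.

```python
# Sketch (hand-made numbers): a classifier can be right more often than not
# while its stated probabilities are systematically overconfident.

predictions = [  # (predicted probability of class 1, true label)
    (0.99, 1), (0.99, 0), (0.95, 1), (0.95, 1),
    (0.90, 0), (0.90, 1), (0.85, 1), (0.80, 0),
]

accuracy = sum((p >= 0.5) == (y == 1) for p, y in predictions) / len(predictions)
mean_confidence = sum(p for p, _ in predictions) / len(predictions)
calibration_gap = mean_confidence - accuracy   # > 0 means overconfident

# accuracy = 0.625, mean confidence ~0.916: confident far beyond its hit rate
```

A model like this would pass a naive accuracy audit while failing exactly the test that matters in safety-critical deployment: whether its confidence can be taken at face value.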
Adversarial fragility
The existence of adversarial examples demonstrates that models can be confident and wrong under tiny perturbations (Goodfellow, Shlens, and Szegedy 2015). This is not a moral flaw. It is a geometrical fact about high-dimensional decision boundaries and training objectives. The remedy is not fantasy perfection, but robustness engineering and humility about epistemic reach.
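The geometrical point admits a minimal sketch: for a linear score \(s = w\cdot x\), a per-coordinate perturbation of size \(\varepsilon\) aligned with \(\mathrm{sign}(w)\) shifts the score by \(\varepsilon \sum_i |w_i|\), which grows with dimension even as each coordinate's change stays imperceptible. This is the linear intuition behind adversarial examples offered by Goodfellow et al.; the toy weight vector below is purely illustrative.

```python
import random

# Sketch of the linear-model intuition behind adversarial fragility:
# many tiny, sign-aligned nudges add up to a large shift in the score.

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

rng = random.Random(0)
dim = 1000
w = [rng.choice((-1.0, 1.0)) for _ in range(dim)]   # toy weight vector
x = [0.0] * dim                                     # a point on the decision boundary

eps = 0.01                                          # tiny per-coordinate perturbation
x_adv = [xi + eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]

shift = score(w, x_adv) - score(w, x)   # eps * dim = 10.0 under an L-inf budget of 0.01
```

An L-infinity perturbation of 0.01 per coordinate moves the score by 10: high-dimensional decision boundaries concentrate many small sensitivities into one large one, which is why the fix is robustness engineering rather than indignation at the model.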
The Genius Who Fumbles: Local Failure and Global Insight
We often commit a social fallacy: we expect competence to be uniform across domains. Yet cognitive specialization and resource constraints imply trade-offs. A person may have exceptional capacity for abstraction and weak capacity for mundane logistics; a research group may be brilliant at invention and incompetent at documentation; an institution may be excellent at credentialing and poor at truth-seeking.
The point is not to romanticize dysfunction. It is to reject a simplistic inference: that a localized failure invalidates a broader cognitive contribution. Conversely, it is also to reject the inverse romantic myth: that brilliance excuses negligence. The rational position is structural: competence is multidimensional, and its failures are informative about system design.
Normative Error and the Politics of Deviance
Some “errors” are not errors at all; they are violations of convention. A student who challenges a professor may be “wrong” in tone and right in substance. A whistleblower violates protocol and restores truth. A scientist who refuses to cite the fashionable paper may be punished socially while acting epistemically.
Here error language becomes a control technology. To label a deviation as “error” is to place it inside a moral economy: blame, shame, and correction. This is often useful. It is also often abused. Institutions that conflate normative compliance with truth acquisition drift toward what might be called epistemic authoritarianism: the map becomes the enforcement of the map.
Synthesis: Error as Epistemic Gate
We can now state the central claim without metaphor.
A cognitive system capable of revision must be capable of error; a social system capable of truth must be capable of dissent; a control system capable of regulation must be capable of discrepancy.
Popper’s emphasis on falsifiability can be read as an institutionalization of error: a demand that theories expose themselves to refutation (Popper 1959). Kuhn’s account of scientific change emphasizes the role of anomaly: persistent error in prediction becomes the seed of paradigm transition (Kuhn 1962). These are philosophical statements, but they align with the engineering account: discrepancy is the driver of adaptation.
The practical moral is austere. We must distinguish error from deviation, mistake from blunder, noise from anomaly. And we must cultivate systems that do not merely punish error, but interpret it.
Conclusion: Against the Fantasy of Frictionless Cognition
The fantasy of error-free cognition is attractive for the same reason utopias are attractive: it promises comfort. But comfort is not an epistemic virtue. Where uncertainty is real, error is inevitable; where learning is real, error is necessary; where coordination is real, dissent is vital.
This is not an invitation to carelessness. It is an insistence on proper goals. In engineering, reduce error to preserve function. In inquiry, preserve error to preserve discovery. In governance, separate compliance from truth. In AI, demand calibration and transparency rather than oracle theater.
The higher aim is not perfection. It is corrigibility.