The Rational Way(s) of Measuring Irrationality
Writer: Hardik Srivastava
Editor: Krithi Kankanala

Introduction
Are you rational all the time? Do you always take the metro over a cab? Do you always walk out of a bad movie early, because the money is already spent and there’s no sense in wasting your time as well? Do you measure your happiness in arbitrary units like utils, or are you indifferent to indifference curves?
Chances are, almost all your life, you’ve been more economically irrational than rational. It’s surprising, then, that most of traditional economics still looks at us as a world of rational John Does and Jane Roes whose marginal utilities and marginal costs can be measured through polynomial functions. These polynomial functions are then put through a series of constrained optimizations (essentially, finding the maxima/minima of a multivariate function) to ascertain whether a person “prefers” one scarce resource over another.
But that doesn’t always work. Calculus doesn’t account for cultural choices, and equilibrium equations usually can’t deal with emotions. However, scholarly literature in economics is slowly catching on to this lack of accurate representation of human behavior, and a number of behavioral economists have advanced ways to plug the holes that currently exist in economic theory.
The purpose of this blog is to show how economics is ridding itself of the assumption of perfect rationality through four prevailing schools of thought. Each has a different way to account for real human behavior by measuring it, analyzing it, and finally, explaining it.
Bounded Rationality
Bounded rationality assumes that the average person is only moderately irrational. Decisions made by a rational person are perfect. Decisions made by a real person are “good enough”.
For instance, research on public voting choices has shown that people often rely on simple heuristics, such as counting which candidate aligns with more of their important issues, or choosing based on emotions like admiration and contempt, rather than calculating the optimal choice using every available piece of information (part of the reason voting systems around the world are flawed). These “good enough” heuristics can match or even outperform complex rational models in predicting how people actually decide whom to vote for.
Treating humans as just “good enough” makes the math easier to work out: bounded rationality makes only one change to the optimal-choice equation, introducing an error term ε around the ideal, “perfect” choice (which, remember, is calculated using constrained optimization). Mathematically, let the ideal, optimal choice value be U*, achieved when choice s is made.
So, U(s) = U*
Read this as: “of all happiness-maximising possibilities, making the sth choice makes me the happiest”.
All bounded rationality does, mathematically, is allow an error term ε on the RHS, which reads as “of all happiness-maximising possibilities, making the sth choice makes me just happy enough”:
U(s) ≥ U* - ε
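To make the idea concrete, here is a minimal Python sketch (my own illustration, under invented utility values, not from the original post) of a satisficer: it accepts the first option that clears an aspiration level of U* - ε instead of exhaustively searching for the maximum.

```python
import random

def rational_choice(utilities):
    """Perfectly rational agent: searches everything, returns U*."""
    return max(utilities)

def satisficing_choice(utilities, aspiration):
    """Boundedly rational agent: takes the first option that is
    'good enough' (meets the aspiration level) and stops searching."""
    for u in utilities:
        if u >= aspiration:
            return u
    return max(utilities)  # nothing cleared the bar; fall back to the best

random.seed(0)
options = [random.uniform(0, 100) for _ in range(20)]
u_star = rational_choice(options)
epsilon = 10  # utility we are willing to leave on the table
u_s = satisficing_choice(options, aspiration=u_star - epsilon)
print(f"U* = {u_star:.1f}, satisficed U(s) = {u_s:.1f}")
assert u_s >= u_star - epsilon  # the bounded-rationality guarantee
```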
Heuristics and Biases
Heuristics are mental shortcuts that let most of us cut through otherwise hard decisions: “I did something similar before, why can’t I do it again today?”. Rather than making perfectly rational calculations, people often use simple rules or intuitive judgments, aka heuristics, to make decisions quickly and with minimal effort.
For example, when choosing between options, people might rely on how easily an example comes to mind (availability heuristic) or how representative an option seems of a known category (representativeness heuristic), rather than evaluating all relevant probabilities and information. While these shortcuts save time and mental resources, they lead to predictable biases like overconfidence, loss aversion (losses are felt more deeply than gains), and present bias (immediate rewards are favored over future gains).
Mathematically, ideal rational decision-making maximizes expected utility
U(s) = U*
However, heuristics cause deviations from this optimal choice, introducing a bias term δ such that:
U(s) = U* + δ
where δ captures the systematic error, or deviation, caused by these mental shortcuts. Unlike bounded rationality’s error term ε, which represents “good enough” satisficing that allows satisfaction near the optimum, δ reflects biased distortions in the valuation process itself.
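As an illustrative sketch of why δ is systematic rather than random (the beta-delta discounting model and all numbers here are my own assumptions, not from the post), consider present bias: every delayed reward is scaled down by an extra factor, so the deviation from the rational valuation always points the same way.

```python
def rational_value(reward, delay_days, daily_discount=0.999):
    """Exponential discounting: the textbook-rational valuation."""
    return reward * daily_discount ** delay_days

def present_biased_value(reward, delay_days, beta=0.7, daily_discount=0.999):
    """Quasi-hyperbolic (beta-delta) discounting: any delayed reward
    is scaled down by an extra factor beta < 1."""
    if delay_days == 0:
        return reward
    return beta * reward * daily_discount ** delay_days

# Choice: 100 rupees today vs 120 rupees in 30 days.
u_star = rational_value(120, 30)          # ~116.5: the rational agent waits
u_biased = present_biased_value(120, 30)  # ~81.5: the biased agent grabs 100 now
delta = u_biased - u_star                 # systematic, always negative here
print(f"U* = {u_star:.1f}, U(s) = {u_biased:.1f}, delta = {delta:.1f}")
```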
Thus, heuristics and biases represent consistent patterns of deviation from rational choice due to mental shortcuts and cognitive limits. While heuristics reduce cognitive load and facilitate decisions in complex environments, the resulting biases systematically skew judgments away from optimal rationality, impacting choices like risk assessment, prediction, and value estimation. This framework builds on Kahneman and Tversky’s research, highlighting how heuristics produce errors such as loss aversion, overconfidence, and present bias that shape real-world decisions.
Prospect Theory
Prospect theory argues something that we subconsciously know as humans: we are much more risk-averse than reward-greedy. We’d rather not lose five rupees than invest five rupees with an equal chance of making money on the investment.
This differential treatment of gains and losses is codified by prospect theory as “loss aversion”. Here is how it shows up in prospect theory’s value function:
v(x) = x^α          when x ≥ 0
v(x) = -λ(-x)^β     when x < 0
α, β are between 0 and 1 here, and λ is greater than 1, representing loss aversion.
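A minimal Python sketch of this value function, using Tversky and Kahneman’s 1992 parameter estimates (α = β = 0.88, λ = 2.25):

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect theory value function: concave for gains,
    convex and steeper (by the factor lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Losing 5 rupees hurts roughly twice as much as gaining 5 feels good:
print(prospect_value(5))   # ~4.1
print(prospect_value(-5))  # ~-9.3
```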
v(x) shows up internally in the U* calculation, weighted by the likelihood of event x happening. Now, the likelihood people perceive for event x and its theoretical probability will differ, because people can’t sift through noise for signals, or don’t want to bet their savings on Bitcoin or Palantir, for example.
We then assume that since people are especially averse to losses, there must be some distortion of probabilities, and this distortion shows up as a π(δ) term in an equation which, by now, you must be really intimate with:
U(s) = U* + π(δ)
And that’s Prospect Theory’s contribution: assume (correctly) that people are loss averse and then adjust for it by adding a distortion term to the ideal value!
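In the standard formulation, this distortion is a probability weighting function applied to each outcome’s probability. A sketch of Tversky and Kahneman’s 1992 weighting function (γ ≈ 0.61 for gains), which overweights small probabilities and underweights large ones:

```python
def weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function:
    small probabilities are overweighted, large ones underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(weight(0.01))  # ~0.055: a 1% chance feels like ~5.5%
print(weight(0.99))  # ~0.91: a 99% chance feels noticeably less than sure
```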
Nudges and Choice Architecture
“Do you want to watch a movie at 10pm and then play badminton at midnight?” or “Do you want to play badminton at midnight and watch a movie before it?”
Considering you’re free tonight and in the mood for a film, which of these questions do you think you’d want to say yes to? If you’re a perfect economic John Doe, both of these are exactly the same question, and you’ll say yes to both, or no to both. If you’re not, though, chances are the first question will encourage you to watch the film more than the latter. This is nudges and choice architecture: the way you present a question can nudge the chooser in one direction, even though the underlying question is exactly the same.
Mathematically, the utility under a nudge is written as:
U(s|F) = U(s) + η(F)
where η(F) is the effect of the framing F, i.e., the nudge.
Essentially, choice architecture captures the effect that the syntax of a choice has on the chooser, alongside, of course, the semantics of the choice.
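A toy Python sketch (the numbers are invented purely for illustration) of how a framing term shifts the reported utility of two semantically identical plans:

```python
BASE_UTILITY = 10.0  # U(s): same movie, same badminton, same night

def framing_effect(frame):
    """eta(F): a frame-dependent shift; values are made up for illustration."""
    return {"movie-first": 1.5, "badminton-first": -0.5}[frame]

for frame in ("movie-first", "badminton-first"):
    print(f"{frame}: U(s|F) = {BASE_UTILITY + framing_effect(frame)}")
# A perfectly rational John Doe would report 10.0 both times.
```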
Example: The Coffee Shop Subscription
Consider a simple scenario: let's say the Blue Tokai on campus offers you a monthly subscription for ₹4000 that includes 20 coffees, or you can pay ₹250 per coffee individually. The rational choice seems obvious: if you drink more than 16 coffees a month (₹4000 ÷ ₹250 = 16 is the break-even point), subscribe; if you drink fewer, pay per cup. Yet research shows that most people who subscribe end up drinking only 12 to 15 coffees monthly, effectively paying more per coffee than if they had bought individually. How do our four theories explain this seemingly irrational behavior?
Bounded rationality would say the decision is "good enough" because people approximate their coffee consumption, add a small error term ε to their utility calculation, and settle for a choice that feels satisfactory rather than optimal. They might think "I usually get coffee pretty often" without precisely counting, leading to U(s) ≥ U* - ε. Heuristics and biases would point to the availability heuristic (remembering those busy mornings when you desperately needed coffee) and present bias (the immediate satisfaction of "having" 20 coffees feels better than the future reality of drinking only 15), introducing a bias term δ that systematically overestimates consumption.
Prospect theory would highlight loss aversion: once you've paid ₹4000, not using all 20 coffees feels like losing money you've already spent, so people frame the subscription as insurance against the “loss” of paying ₹250 repeatedly, adding a probability distortion π(δ) to their utility function. Finally, nudges and choice architecture would note that the subscription is framed as “20 coffees for just ₹4000” rather than “₹200 per coffee if you use all 20, but ₹333 per coffee if you use 12”, and this framing effect η(F) nudges people toward subscribing by making the deal sound more attractive than the mathematical reality suggests.
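A quick sanity check of the arithmetic, in Python:

```python
SUBSCRIPTION = 4000  # rupees per month, includes 20 coffees
PER_CUP = 250        # rupees, pay-as-you-go

for cups in (12, 15, 16, 20):
    effective = SUBSCRIPTION / cups  # what each coffee actually cost you
    verdict = ("subscription" if effective < PER_CUP
               else "tie" if effective == PER_CUP
               else "pay-per-cup")
    print(f"{cups} cups: ₹{effective:.0f}/cup -> {verdict}")
```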
Conclusion
Collectively, these four approaches demonstrate that economics is not abandoning rationality; instead, it is expanding it to accommodate the full spectrum of human decision-making. Where traditional economics saw deviation from optimal choice as random noise or temporary error, behavioral economics recognizes these deviations as systematic, predictable, and deeply rooted in how our minds actually work. Bounded rationality acknowledges our computational limits, heuristics and biases map our mental shortcuts, prospect theory captures our emotional relationship with gains and losses, and choice architecture reveals how context shapes our preferences. Each theory adds a different correction term (ε, δ, π(δ), or η(F)) to the idealized utility function U*, transforming economics from a discipline that prescribes how people should behave to one that describes and predicts how people actually do behave.
The irony is that by making economics less "rational" in the traditional sense, these theories make it more rational as a science. A model that accurately predicts human behavior, even when that behavior seems illogical, is more useful than a model that prescribes perfect logic but fails to match reality.
