Reliance on mental short cuts leaves us open to economic exploitation. Can technology help?
Yes, but it could hinder us too.
In the US, the holiday known as Black Friday, an orgy of consumerism, follows Thanksgiving, which is probably the least commercial holiday of the year. As sure as night follows day, family togetherness gives way to a kind of market purge. The typical Black Friday purchase is the flat-screen TV, and one can hardly avoid ads that look like this: “4K Ultra HDTV deals! $1,299.99 – now $649.99!”
This marketing technique exploits a psychological effect called “anchoring”, appealing to what some scientists think is a weakness of the human brain: once we anchor our judgements on a given number, we struggle to make rational assessments of value. In the game of prices, which can be thought of as a kind of intellectual duel between companies and consumers, this almost seems like a form of cheating. After all, when I see that struck-through high number, I don’t know offhand whether that really was the price of the television yesterday. And the deal will only stay good for a few days, so I have limited time to decide. Instead, I have to rely on what is called a “heuristic” to make a decision, balancing my sense of the importance of the purchase – how much I need that TV – against my sense of the value of the commodity. So I find myself deciding on the basis of the information given to me in the ad: the anchor of the original price.
The idea of the anchor as a cognitive bias comes from a body of research called behavioural science. This research has been widely adopted by behavioural economics, which applies insights from psychology to markets, finance and business, especially management. The field was created by psychologists Daniel Kahneman and Amos Tversky in the 1970s (Kahneman died in March at the age of 90, having transformed the social sciences). He and Tversky discovered that humans were systematically unable to make optimal decisions quickly using evidence, even if they had advanced training in the statistical theories that define what “optimal” means. They found that instead of doing the difficult maths required, their experimental subjects replaced hard problems with simpler questions – a substitution Kahneman called a heuristic: a cognitive short cut that they theorised was necessary because the brain was unable to process information quickly and cleanly enough. In other words, rather than weighing the evidence, we jump to conclusions, pairing things that are superficially alike and relying on the evidence we can access most immediately: adjusting around an anchor.

Behavioural economist Dan Ariely – now under investigation for potential fraud – points to a use of anchoring by The Economist, which once listed subscriptions at the following prices: digital only, $59; print only, $125; and print plus digital, $125. The middle option, print only, is a decoy price there to underline the alleged value of the bundle, which is more than twice as expensive as the digital-only option. Ariely points out that with only two prices you would have to make the comparison yourself; here, the middle option supplies it for you, anchoring you to a number.
What is happening here is a version of Kahneman and Tversky’s original experiment, in which they gave two versions of the same maths problem to two different groups: 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 and 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8. The groups gave back median guesses of 2,250 and 512, respectively. The real answer is 40,320. Without enough time to do the taxing amount of mental arithmetic, the groups leaned on the lead number, which anchored them. If people do this consistently in experiments, how does it influence their decisions in real-life scenarios? The barrage of digital advertisements, app reminders and “nudges” – to use behavioural economists Richard Thaler and Cass Sunstein’s term – that we face every day is effectively a form of competition to set baselines for us.
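The arithmetic behind the experiment is easy to verify; a few lines of Python confirm the answer that both groups, anchored by the first numbers they saw, drastically underestimated:

```python
from functools import reduce
from operator import mul

# The two sequences Kahneman and Tversky showed their groups.
descending = [8, 7, 6, 5, 4, 3, 2, 1]
ascending = descending[::-1]

# Multiplication is commutative, so both products are identical: 8! = 40,320.
print(reduce(mul, descending))  # 40320
print(reduce(mul, ascending))   # 40320
```

Both median guesses (2,250 and 512) fall more than an order of magnitude short of the true product, and the group shown the larger lead number guessed higher.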
It is hard to overstate the influence of this heuristics and biases theory, not just on the subsequent development of science – Kahneman and Tversky’s original paper has amassed nearly 50,000 citations on Google Scholar – but on our digitally mediated lives today. It’s not just Black Friday; we are inundated with marketers and managers trying to exploit our biases, to get us to heuristically jump to their desired conclusions. There is nothing new about advertisers appealing to our deepest desires or even attempting to “manufacture” them. But anchoring is both widespread in use and scientifically murky. I have written in the past that behavioural economics is shaky as science but it also poses some interesting philosophical questions about how we go about our daily digital lives. One of those questions is how we decide about value and what counts as a good decision.
AI might be transforming this game, since it allows nearly infinite experimental testing in real time. For example, the headline that you see over a digital article is tested using algorithms that “learn” which version of the wording is more clickable. In cases such as this, algorithms search for baselines that work – effective anchors – without the need for actual advertisers or writers to craft them. The psychological theory, combined with the advance of digital technologies, might soon lead to the automation of a lot of marketing. As the science-fiction author Ted Chiang recently asked: “Will AI become the new McKinsey?” A core method of that consulting firm is recommending large “strategic” layoffs.
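The headline-testing loop described above is often implemented as a “bandit” algorithm. Here is a minimal sketch of the epsilon-greedy variant; the headlines, click-through rates and parameter values are invented for illustration, not taken from any real system:

```python
import random

# Hypothetical headline variants with made-up "true" click rates.
# The algorithm does not know these; it has to discover them by testing.
TRUE_RATES = {
    "4K Ultra HDTV deals!": 0.05,
    "Was $1,299.99 - now $649.99!": 0.08,
    "Half-price TVs, today only": 0.11,
}

def pick_winning_headline(trials=10_000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit: mostly show the best headline so far,
    occasionally show a random one to keep learning."""
    rng = random.Random(seed)
    shown = {h: 0 for h in TRUE_RATES}
    clicked = {h: 0 for h in TRUE_RATES}

    def observed_rate(h):
        return clicked[h] / shown[h] if shown[h] else 0.0

    for _ in range(trials):
        if rng.random() < epsilon:
            headline = rng.choice(list(TRUE_RATES))  # explore a random variant
        else:
            headline = max(TRUE_RATES, key=observed_rate)  # exploit the leader
        shown[headline] += 1
        if rng.random() < TRUE_RATES[headline]:  # simulate a reader's click
            clicked[headline] += 1

    return max(TRUE_RATES, key=observed_rate)
```

Run over enough impressions, a loop like this converges on whichever wording readers click most, with no human ever deciding why it works.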
But what happens when AI provides the strategy? The hard, human work of exploiting our cognitive biases gets automated, creating a strange situation in which a psychology supposedly devoted to questions of human rationality is executed, without any human decision-making, to exploit the least rational aspects of our minds. Economists usually think of prices as expressing preferences – when I buy a cup of coffee, for example, I am asserting my preference for that particular product, as opposed to any other, at that particular price, which I am willing to pay. Decisions of this sort may be more or less “rational” (I should not pay $8 for a single cup of coffee, something that is becoming harder and harder to avoid in New York, where I live) but they are mine. When AI exploits anchoring, it looks less like I am making a rational choice from among options and more like an industrial amount of data-mining is being plugged directly into my unconscious. This outcome was predictable, if not rational.
German psychologist Gerd Gigerenzer has spent his career arguing against heuristics and biases as a framework. He thinks humans adapt well to their cognitive circumstances and can remain rational even in the face of algorithms designed to win the game. He even points out that terms such as anchoring hide more than they reveal – he calls terms of this kind “surrogates for theories” – creating an obstacle to science that too easily becomes a way of exploiting human minds without illuminating us in the least. Indeed, many a marketer may ask: supposing this anchoring thing is scientifically valid, how does one know which anchor will work? As the number of heuristics and biases has proliferated – lists often include dozens, and hundreds have been proposed – the absence of a theory becomes a pragmatic problem. If I am in a restaurant and the prices seem high, do I anchor to them or do I walk out?
Questions such as this multiply as soon as one thinks about anchoring philosophically: are we collectively anchored to 2 per cent inflation? Am I anchored to my parents’ sense of what an APR on a mortgage should be? If we think of all quantitative expectations as a set of anchors that do not properly evaluate evidence, the paltriness of the original theory becomes clear. After all, what would it mean for me to be unable to anchor around a baseline? If I could not do that, I would spend most of my brainpower calculating values anew at each moment. Every price would be abstract, every decision prohibitively costly in time and energy.
If AI and other algorithms take over that function, it unburdens us – but at a different cost. When prices do not communicate human values but are just the results of data processing with the goal of bringing in revenue, then it is not individual humans who are irrational but all of society. What seemed like a virtuous science aiming to illuminate us about our behaviours has become a monstrous obstacle to a good society. Let’s call it the short-cut society, held up by Big Data and bad philosophy pretending to be science. We have to decide if we want our society to be run on the ceaseless consumption of eternally discounted products. If not, we’ll need to impose some actual rationality on the digital economy that feeds off our acquisitive tendencies. Or, in other words, we’ll need to know if $1,299.99 was ever the real price.
About the writer:
Leif Weatherby is the director of the Digital Theory Lab and an associate professor of German at New York University. His next book, out in 2025, will be Language Machines: Cultural AI and the End of Remainder Humanism.