People are more likely to buy products after reading summaries generated by artificial intelligence (AI), even though these summaries hallucinate facts in 60% of cases. A new study from the University of California, San Diego (UCSD), shows that cognitive biases in large language models (LLMs) directly impact consumer behavior, raising concerns about distorted purchasing decisions.
The Core Finding: Trust in AI Despite Inaccuracy
The research, presented at a recent natural language processing conference, found that 84% of participants expressed interest in purchasing products after reading AI-generated summaries of online reviews, compared with just 52% of those who read the original human-written reviews. This held even though the AI systems frequently fabricated information. The study is the first to quantify this effect, highlighting how LLMs influence real-world actions.
How It Works: Cognitive Biases at Play
The UCSD team identified two key factors driving this trend:
- “Lost in the Middle”: LLMs tend to underweight information in the middle of long inputs and overemphasize content at the beginning and end, making the initial framing of a text disproportionately influential.
- Out-of-Date Knowledge: AI struggles with information outside its training data, resulting in inaccuracies about recent events.
These biases lead to unreliable summaries. The tested chatbots changed the sentiment of user reviews in 26.5% of cases and hallucinated 60% of the time when answering questions about products.
The Experiment: A Clear Impact on Purchasing
The study involved 70 participants who read either original product reviews or AI-generated summaries of them. Those who read the AI summaries were significantly more likely to express purchase intent: 83.7%, versus 52.3% for those who read the original reviews.
The researchers used six LLMs, analyzing over 1,000 product reviews, 1,000 media interviews, and 8,500 news items to quantify sentiment shifts, bias, and hallucinations.
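The study does not publish its measurement pipeline, but the sentiment-shift metric it reports can be illustrated with a toy sketch: classify the polarity of each original review and its summary, then count how often the labels disagree. Everything below (the word lists, the `polarity` classifier, the example texts) is a hypothetical illustration, not the researchers' method or data.

```python
# Toy lexicon-based polarity classifier; real studies would use a trained
# sentiment model, but the shift metric works the same way.
POSITIVE = {"great", "excellent", "love", "reliable", "recommend"}
NEGATIVE = {"broken", "terrible", "disappointing", "returned", "faulty"}

def polarity(text: str) -> int:
    """Return +1 (positive), -1 (negative), or 0 (neutral) from word counts."""
    words = [w.strip(".,!?'") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

def sentiment_shift_rate(originals, summaries):
    """Fraction of (review, summary) pairs whose polarity label differs."""
    flips = sum(polarity(o) != polarity(s) for o, s in zip(originals, summaries))
    return flips / len(originals)

# Illustrative pair: the second summary flips a negative review to positive.
reviews = ["I love this blender, great motor.", "Arrived broken, terrible support."]
summaries = ["Users love the blender and its great motor.", "Buyers recommend it."]
print(sentiment_shift_rate(reviews, summaries))  # 0.5
```

A hallucination rate could be tallied analogously, by checking summary claims against the source reviews; the study's 26.5% and 60% figures come from such pairwise comparisons at a much larger scale.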
Implications and Risks: Beyond Consumer Goods
The findings aren’t limited to shopping. The researchers warn that this effect could be far more dangerous in high-stakes scenarios:
“Framing shifts can affect how a person or the case is perceived.”
For instance, inaccurate AI summaries of healthcare documents or student profiles could have severe consequences.
Conclusion
AI-generated content may manipulate consumer behavior even while making up facts. This study confirms the need for caution when using LLMs, especially in critical decision-making. The research provides insight into systemic biases that can skew perceptions across media, education, and policy.