
Seminars

Title: Large Language Models are overconfident and amplify human bias

Date: Friday, Mar 7, 2025, 10:30 ~ 12:00
Speaker: Lorenz Goette (National University of Singapore)
Location: Online seminar via Zoom

Abstract: Large language models (LLMs) are revolutionizing every aspect of society. They are increasingly used in problem-solving tasks to substitute for human assessment and reasoning. LLMs are trained on what humans write and are thus prone to learning human biases. One of the most widespread human biases is overconfidence. We examine whether LLMs inherit this bias. We automatically construct reasoning problems with known ground truths and prompt LLMs to assess their confidence in their answers, closely following similar protocols in human experiments. We find that all five LLMs we study are overconfident: they overestimate the probability that their answer is correct by between 20% and 60%. Humans have accuracy similar to the more advanced LLMs, but far lower overconfidence. Although humans and LLMs are similarly biased on questions they are certain they answered correctly, a key difference emerges: the LLMs' bias increases sharply relative to humans' as they become less sure that their answers are correct. We also show that LLM input has ambiguous effects on human decision making: while it increases the accuracy of decisions, it also increases overconfidence in them.
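As a rough illustration of how such an overconfidence gap might be scored (a sketch on our part, not the authors' actual protocol), one can compare a responder's average stated confidence against its realized accuracy over a set of problems with known ground truths:

# Minimal sketch, assuming each trial records a stated confidence (0-1)
# and whether the answer was correct. Illustrative only.

def overconfidence(trials: list[tuple[float, bool]]) -> float:
    """Return mean stated confidence minus observed accuracy.

    trials: (confidence, correct) pairs, one per reasoning problem.
    A positive value indicates overconfidence.
    """
    if not trials:
        raise ValueError("no trials given")
    mean_conf = sum(conf for conf, _ in trials) / len(trials)
    accuracy = sum(ok for _, ok in trials) / len(trials)
    return mean_conf - accuracy

# Example: 90% average stated confidence but only 60% of answers correct
# yields an overconfidence gap of 0.30 (30 percentage points).
print(overconfidence([(0.9, True), (0.95, False), (0.85, True), (0.9, False), (0.9, True)]))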

※ This seminar is hosted by VEAEBES (Virtual East Asia Experimental and Behavioral Economics Seminar Series).
Please register via the link below.
link
A Zoom link will then be sent to you by email.