
AI chatbots still overconfident, even when wrong

Experts advise users to critically evaluate chatbot responses and developers to equip AI with improved introspection

Pic/iStock

AI chatbots often sound confident, but their answers are not always accurate. A new study from Carnegie Mellon University found that large language models such as ChatGPT, Bard/Gemini, Sonnet, and Haiku consistently overestimate their performance, even after making mistakes. Humans tested alongside the models lowered their confidence after poor results, whereas the AI systems became even more overconfident. In a Pictionary-style trial, Gemini correctly identified fewer than one sketch out of twenty, yet believed it had answered fourteen correctly. The research, conducted over two years with continuously updated models, raises concerns about AI's lack of self-awareness and the risks of trusting unwarranted certainty. Experts advise users to critically evaluate chatbot responses, and developers to equip AI with improved introspection.
