Matthew Clapham
Nov 10, 2023

I think this may stem from the fact that it talks like a person while we remain consciously aware that it's a machine.

Automation bias leads us to expect it to come back with something that is entirely accurate, as if it came from a reliable spreadsheet formula or pocket calculator.

There is also a failing in the typical LLM interface: it won't simply come back and say 'I don't know', unless the information requested falls specifically outside the timeframe of its training data.

The model is forced to come up with something to satisfy its interlocutor.
