I think this may stem from the fact that it talks like a person, even though we remain consciously aware that it's a machine.
Automation bias leads us to expect it to come back with something that is entirely accurate, as if it came from a reliable spreadsheet formula or pocket calculator.
There is also a failing in the typical LLM interface: it won't simply come back and say 'I don't know', unless the information requested falls outside the timeframe of its training data.
The model is forced to come up with something to satisfy its interlocutor.