“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”
https://www.reddit.com/r/ChatGPT/comments/1mkumyz/i_lost_my_only_friend_overnight/
It’s always funny to me when people add ‘confidence scores’ to LLMs, because it always amounts to appending ‘say how confident you are with low, medium, or high in your response’ to the prompt, and then you have made-up confidences for made-up replies. And you can tell clients that it’s just made up and not actual confidence, but they will insist that they need it anyway…
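For the record, the pattern being described usually looks something like this minimal Python sketch, assuming the OpenAI SDK (the model name, prompt wording, and function name are all illustrative). The “confidence” comes back as ordinary generated text, not anything derived from the model’s internals:

```python
# A minimal sketch of the pattern described above: "confidence" is
# elicited purely by asking the model to self-report it in the prompt.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask_with_confidence(question: str) -> str:
    """Append the self-reported-confidence instruction to the prompt.

    The returned "confidence" is just more generated text, not a
    calibrated probability of the answer being correct.
    """
    prompt = (
        f"{question}\n\n"
        "State how confident you are in your answer as low, medium, "
        "or high, on the last line of your response."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask_with_confidence("What year was the first transatlantic cable laid?"))
```

Token-level logprobs are the closest thing to a real confidence signal such APIs expose, and even those measure how likely the next token was, not whether the answer is true.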
That doesn’t justify flat-out making shit up to everyone else, though. If a client is told the information is made up but they use it anyway, that’s on the client. Although I’d argue that an LLM shouldn’t be in the business of making shit up unless specifically instructed to do so by the client.