@yuiseki_: I’ve always felt that large language models just consistently output plausible-sounding things in response to input, and that sometimes those happen to be facts. Isn’t it rather humans who are the ones hallucinating?
@nishio: Believing the delusion that “the AI will tell me the truth” and assuming the output is the truth, or, upon realizing it is not the truth, crying “the AI lied!” as if the AI had unethical intentions: that seems to me to be a human hallucination. - human bug
@nishio: A list of common human hallucinations:
1: Assuming that the LLM always outputs the truth
2: Assuming that the LLM intended to lie when it outputs something that is not true
3: Assuming that the LLM knows about events more recent than its training data
What else is there?
nishio: Another example of a human hallucination: “A system trained on correct information will output correct information.”
noricoco: I commented on this article in Nikkei Think! this morning. Many reporters seem to think that “chatGPT makes mistakes because it learned misinformation,” but given how the mechanism works, that may not be the case; judging from its output, I think it is highly likely that chatGPT itself is fabricating the information. https://nikkei.com/article/DGXZQOUC100WA0Q3A410C2000000/…
This page is auto-translated from /nishio/人間のハルシネーション using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I’m very happy to spread my thoughts to non-Japanese readers.