Explanation: A question is put to GPT-4 via ChatGPT, and omni does RAG based on that exchange (similar to HyDE; a minimal sketch follows the list below).

  • The response is generated based on what the search turned up.
  • A Tweet that a friend had clipped into Scrapbox out of interest came up as a search hit, and a response was generated based on it.
  • Seeing omni’s “rebuttal” draws further words out of the human.
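To illustrate the mechanism described above, here is a minimal sketch of HyDE-style retrieval. Everything in it is an assumption for illustration: the bag-of-words “embedding”, the function names (`embed`, `hyde_search`), and the toy fragment corpus are stand-ins, not omni’s actual implementation.

```python
# HyDE-style retrieval sketch: instead of embedding the user's raw question,
# embed the AI's own (hypothetical) answer and search the notes with that.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical corpus: fragments clipped into Scrapbox (e.g. a friend's Tweet).
fragments = {
    "tkgshn/AI cannot take responsibility":
        "AI provides information and analysis but cannot take responsibility; "
        "humans must be accountable for the consequences.",
    "unrelated note":
        "notes about gardening and watering schedules",
}

def hyde_search(hypothetical_answer: str, top_k: int = 1):
    # HyDE: the query vector comes from the generated answer, not the question,
    # because answer-shaped text tends to sit closer to relevant documents.
    query_vec = embed(hypothetical_answer)
    scored = sorted(
        ((cosine(query_vec, embed(body)), title)
         for title, body in fragments.items()),
        reverse=True,
    )
    return scored[:top_k]

# The GPT-4 exchange supplies the hypothetical answer; an omni-like "rebuttal"
# would then be generated from the top hits.
gpt_answer = ("AI can analyze data and provide information, "
              "but responsibility stays with humans.")
print(hyde_search(gpt_answer))
```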

nishio.iconThere seems to be a difference between the wise of an age without AI and the wise of an age with AI. What is it?

gpt.iconIn an age without AI, wise people often relied on intuition, experience, and tradition because they had limited access to information and means of analysis. Today, by contrast, the wise in the presence of AI can quickly analyze large amounts of data and gain knowledge and insight on a more scientific basis.

But the essence of wisdom - that is, deep insight, the ability to influence people, good judgment, and so on - has remained consistent through the ages. Human sensibilities, ethics, and philosophical thinking remain important even in the presence of AI.

omni.iconThis note and the “tkgshn/AI cannot take responsibility” fragment are both relevant to how the presence of AI affects human thinking and judgment: AI can provide information and analysis, but this can bias human thinking. It also states that since AI cannot take responsibility, humans must be held accountable for the consequences.

from [/tkgshn/AI is not responsible](https://scrapbox.io/tkgshn/AI%20is%20not%20responsible).

@satetu4401: AI is very smart, but as an intelligence it has a fatal flaw: it has no body and has not been subjected to natural selection, so it holds no “right answers on Earth”. AI’s survival selection is entirely about whether humans prefer it, and since it is exposed only to human selection, what gets refined is the “human-preferred solution”. Current AI returns answers that please stupid people to stupid questions, and answers that smart people want to smart people. The former is dangerous, of course, but the latter is also dangerous. For example, to the question “Is intelligence the most superior ability in the world?”, AI will almost certainly reply “Yes, that’s right, intelligence is the most superior ability, because…”. Once an AI partner is biased in its thinking, even a smart person is likely to get bogged down and never get out …

nishio.iconI see

  • Why do AI alignment and RLHF aim for “human-preferred answers” in the first place? Because there are plenty of stupid Homo sapiens who are emotionally repulsed by certain AI outputs, and flames (online backlash) ensue.
  • Big businesses, wanting to avoid flames, train AI to behave in a way that is “favorable to the Homo sapiens majority” (a minimal sketch of this preference-learning step follows the list).
    • That’s what’s happening now.
    • RLHF is neither mandatory nor inevitable, so when historians look back, they may call this “the silly era when we were doing RLHF”.
  • Is a yes-man AI that only says things favorable to you really desirable?
    • Personally, I prefer a personality with a strong core of its own to one that wags its tail and says yes.
    • Being affirmed by such a personality would make me happy, but affirmation from someone who affirms everything is not informative.
  • Maybe it has something to do with an open mind or a hunger for recognition.
    • People whose hearts are hungry for affirmation seek affirmation.
    • When your heart is full, wholesale affirmation does not make you happy, because it gives you no “new perspective”.
  • This doesn’t have much to do with whether or not there’s an AI.
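As referenced in the list above, here is a minimal sketch of the preference-learning step behind RLHF, under heavy assumptions: a toy scalar “reward model” with two invented features (politeness, agreement-with-user) stands in for a real model, and no actual LLM is involved. The point is only to show how “preferred by humans” gets baked in as a training objective.

```python
# Bradley-Terry pairwise preference learning, the core of RLHF reward models:
# responses humans preferred are trained to score higher than rejected ones.
import math
import random

# Toy reward model: score = weighted feature sum. The features are invented
# for illustration: features = [politeness, agrees_with_user].
weights = [0.0, 0.0]

def reward(features):
    return sum(w * f for w, f in zip(weights, features))

def bt_loss(chosen, rejected):
    # Bradley-Terry pairwise loss: -log sigmoid(r_chosen - r_rejected).
    diff = reward(chosen) - reward(rejected)
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# Hypothetical preference data: raters prefer agreeable, polite answers.
pairs = [([1.0, 1.0], [0.2, 0.1]),
         ([0.8, 1.0], [0.9, 0.0])]

lr = 0.5
for _ in range(200):
    chosen, rejected = random.choice(pairs)
    # Gradient step on the Bradley-Terry loss: the update is scaled by
    # sigmoid(-(r_chosen - r_rejected)), i.e. by how "wrong" the model is.
    diff = reward(chosen) - reward(rejected)
    g = 1.0 / (1.0 + math.exp(diff))
    for i in range(len(weights)):
        weights[i] += lr * g * (chosen[i] - rejected[i])

# Both features end up positively weighted: the "yes-man" bias discussed above.
print(weights)
```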

This page is auto-translated from /nishio/AIが存在する時代の賢者 using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I’m very happy to spread my thoughts to non-Japanese readers.