"Understanding is a hypothesis," so it must be validated by action. Put another way, an "understanding" that has not yet been verified is something that could be wrong. Even if you read a good textbook that says the right thing, you may fail to understand it, or misinterpret it.

Because acquiring understanding is an error-tolerant process, it can make use of error-prone tools such as LLMs. In this respect, the criticism that "LLMs sometimes give incorrect answers" is misplaced.


This page is auto-translated from /nishio/理解は仮説なので誤っても良い using DeepL. If you find something interesting here but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.