- Interesting discussion, but comparing coaching chatbots to human coaching seems misguided. A chatbot is "stationery for producing output on your own, without a flesh-and-blood person," so the right comparison is paper and pen. When I think, "Let's write and think," the choice is between writing from a blank page or writing by answering a question
-
However, the questions that encourage reflection in coaching are generally "hard to think about," and being asked by a bot is often not enough to make people think deeply. It is only when someone is there applying "pressure" that answers are finally squeezed out.
-
- The observation that "pressure" exists in human coaching, and that it is necessary for coaching to function, is interesting and I have no particular objection to it. On the other hand, whether it is good for stationery to exert pressure is another question. Being different from humans creates new uses, such as using it while bathing, or when you wake up late at night with an idea
- I agree 100% that new insights come during lulls (e.g. in the rain), and that it is important to answer questions that cannot be answered immediately. But I would like to see that achieved by a policy of letting users take time to answer such questions, rather than by trying to control users through "pressure": a policy of helping users understand that there is more fruit for them if they answer the questions.
- So we are back to the first point: chatbots are stationery, not humans. It is natural to teach users how to use a chatbot in a way that is likely to create value, and then have users agree to use it that way. It is not a good idea for stationery to try to control how the user uses it.
This page is auto-translated from /nishio/チャットボットは文房具 using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.