from Fear of being lost
-
Considering a thinking support system that doesn't use the eyes
-
I think that advances in voice input have made it possible to do one-way output with fair accuracy.
-
There are more misconversions, misspellings, etc. than when output is done with the eyes and hands.
-
Voice input has, to some extent, achieved its goal of retaining memory. But there's still a problem with running the feedback loop of thinking about what you've written while looking at what you've written. Why is that?
-
Maybe it's because I can't read what I write in Scrapbox.
For chat services such as Keicho used on smartphones, whether or not the user is blind, there are natural use cases such as using the service while taking a walk, with only the ear and microphone and without looking at the screen. So the ability to use the service without visual feedback seems useful to a certain extent.
The current Keicho system is inconvenient to use by voice alone because it does not read the input back to the user.
-
Before tackling the chatbot functionality, it would be nice to first realize a notepad that lets you jot down your thoughts using only your ears and voice while taking a walk.
-
What is needed in that case
-
Voice input, where the input is immediately repeated back to the user.
- VoiceOver allows you to
-
Correct or restate mistakes until everything you want to output has been output.
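The two steps above — echo the recognized input back immediately, then let the user restate to overwrite a misrecognition — can be sketched as a minimal loop. This is not the Keicho implementation; `speak()` is a stand-in for a real text-to-speech call, and the class name and methods are illustrative assumptions.

```python
# Minimal sketch of an ears-and-voice notepad loop.
# `speak()` is a placeholder for a real TTS engine, not an actual API.
from dataclasses import dataclass, field

@dataclass
class VoiceNotepad:
    notes: list = field(default_factory=list)

    def speak(self, text: str) -> None:
        # Placeholder for text-to-speech output.
        print(f"[TTS] {text}")

    def take_note(self, recognized_text: str) -> str:
        # Echo the recognized text back immediately, so the user hears
        # what was actually captured without looking at a screen.
        self.notes.append(recognized_text)
        self.speak(recognized_text)
        return recognized_text

    def correct_last(self, restated_text: str) -> str:
        # If the echo reveals a misrecognition, the restatement
        # overwrites the last note instead of appending a new one.
        self.notes[-1] = restated_text
        self.speak(restated_text)
        return restated_text
```

For example, `take_note("the lightness of the year")` followed by `correct_last("Keicho")` leaves a single corrected note in `notes`.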
-
The trouble with this is that the speech recognition system may not output what we intended when we spoke.
-
For example, the word "Keicho," six letters of the alphabet coined from the image of "Keicho" (attentive listening) in the book, cannot be recognized by the speech recognition system because it is a coined word, and it was input as "the lightness of the year" instead.
- I am deliberately leaving that misconversion in.
-
It seems to me that voice input needs more functionality to absorb notational variation than text input does.
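One way to absorb such variation, sketched here as an assumption rather than anything the author describes, is to snap each recognized word to the closest entry in a user-defined vocabulary using standard-library fuzzy matching; the word list and threshold below are made-up examples.

```python
import difflib

# Hypothetical user vocabulary of coined or hard-to-recognize words.
VOCABULARY = ["Keicho", "Scrapbox", "VoiceOver"]

def normalize(word: str, vocabulary=VOCABULARY, cutoff: float = 0.6) -> str:
    """Return the closest known word, or the input unchanged.

    difflib.get_close_matches scores candidates by character-level
    similarity, so near-misses from speech recognition can be snapped
    back to the intended coined word.
    """
    matches = difflib.get_close_matches(word, vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else word
```

With this sketch, a misrecognition like `"Kaycho"` normalizes to `"Keicho"`, while an ordinary word such as `"walk"` passes through untouched.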
-
Sometimes it's hard to get certain words across, even when you're dealing with humans.
- Words that are difficult to input with voice input can be viewed as words that are difficult to convey to a virtual persona called a computer.
- In that case, what is natural for humans to do?
- It's natural to use a different concept, but there are times when you want someone to learn a word that expresses a particular concept.
- I hope the notepad side remembers the words.
How can we do that? For example, the system could remember that, in the context of this memo, a particular word (e.g. Keicho) refers to the chat system.
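A minimal sketch of that idea, under the assumption that "remembering" means storing per-memo mappings from what recognition produced to what was meant (the `MemoContext` name and its methods are hypothetical, not a real API):

```python
# Sketch: a per-memo lexicon that learns corrections and applies them
# to future transcripts. Names here are illustrative assumptions.
class MemoContext:
    def __init__(self) -> None:
        self._lexicon: dict = {}

    def learn(self, heard: str, meant: str) -> None:
        # Record that `heard` (what speech recognition produced)
        # actually means `meant` in this memo's context.
        self._lexicon[heard] = meant

    def apply(self, transcript: str) -> str:
        # Rewrite new transcripts using every learned mapping.
        for heard, meant in self._lexicon.items():
            transcript = transcript.replace(heard, meant)
        return transcript
```

Once the memo has learned that "the lightness of the year" means "Keicho", later transcripts containing the misrecognized phrase are rewritten automatically.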
This page is auto-translated from /nishio/目を使わない思考支援システムを考える using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.