nishio>I guess the reason I’m willing to read a transcript when it’s a werewolf game, but not when it’s a lecture, is that, after all, a “class” is just “already verbalized content” being aired like TV, so there’s no need to read its transcript!

tatekawa>

  • I expect this data to have two levels of utilization.
    1. A human reads and understands it, and looks back on it later.
    2. Run the data through a natural language processing algorithm to extract whatever may be useful (a minimal sketch of what this could look like follows this exchange).
    • The reason I divided it into 1 and 2 is, in essence, that I thought the data could be used as-is for 2, but that some editing or other device might be necessary to use it for 1.
  • Would you be inclined to read it if there were editing or some other device, or would you not be inclined to read it because the lecture materials and the like would be sufficient?
    • Maybe it’s the latter, I thought.
  • The latter.
    • As for myself, I organize and record “my notes” immediately after the lecture and read them back. I feel there is too much information even in the lecture material.
    • I have always thought that the most you can learn from a two-hour lecture is about three bullet points.
  • I see. On the other hand, “which three are my learnings” may vary from person to person.
    • Same content, different learnings.
  • So I put a lot of emphasis on exchanging opinions with other people after lectures.
  • Verbalize → Structure → Explain to others
    • Explaining to others clarifies what it is that you don’t understand.
    • This sequence seems to be the important part.
  • I thought it had something to do with this Knowledge Accumulation Model.
  • I think this model can be improved, and I feel that there are situations where placing a box on top results in the creation of a box below!
    • For example, going from the bottom you might stack vectors → linear algebra → machine learning, and as a result of studying machine learning without really understanding why, you might gain a deeper understanding of linear algebra.
  • Wonderful.
    • Here’s the real model in my mind, but it doesn’t always come across, so I liken it to stacking boxes.
    • The more foundations you have, the easier it is to stack, but sometimes you can get an upper box by luck, in which case filling in the lower boxes afterward is more efficient.
  • When I was a rōnin (spending a year preparing to retake university entrance exams), I raised my math deviation score by 40 in one year, and I remember experiencing that phenomenon very keenly. I couldn’t verbalize it at the time, but that’s what it was.
  • When you are studying machine learning without really understanding why, are you actually learning anything near the top of the pyramid?
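As a minimal sketch of “level 2” above, for concreteness: one simple way to extract possibly useful data from a lecture transcript is keyword extraction with TF-IDF. The transcript segments and the choice of scikit-learn here are illustrative assumptions, not something specified in the conversation.

```python
# Minimal sketch: score terms in transcript segments by TF-IDF and
# surface the most distinctive ones as candidate "learnings".
# The segments below are made-up placeholders, not real lecture data.
from sklearn.feature_extraction.text import TfidfVectorizer

segments = [
    "Vectors and matrices are the building blocks of linear algebra.",
    "Gradient descent updates model parameters to reduce the loss.",
    "Regularization penalizes complexity to avoid overfitting.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(segments)  # shape: (n_segments, n_terms)
terms = vectorizer.get_feature_names_out()

for i, row in enumerate(tfidf.toarray()):
    top = row.argsort()[::-1][:3]  # indices of the three highest-scoring terms
    print(f"segment {i}:", [terms[j] for j in top])
```

Echoing the “about three bullet points per lecture” remark above, the sketch keeps only three terms per segment; a real transcript would first need sentence segmentation and cleanup.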

This page is auto-translated from /nishio/読む気が起きない文章 using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I’m very happy to spread my thoughts to non-Japanese readers.