(image: diagram of the structure described below)

I’ve heard people who have used LLMs through ChatGPT and elsewhere say that they feel LLMs work well with Scrapbox. I think the same, but I’ve never been able to verbalize why.

  • 1: Many people picture “writing” as producing a single linear text
    • Blog posts, etc.
  • 2: But there are several “parts” in it
    • Rather than seeing the text as one continuous chunk, you sense that it contains multiple parts that could each stand alone as independent pages
  • 3: Once you consider the relationships between these parts, the one-dimensional arrangement in 2 starts to feel wrong in the first place
    • In the diagram, the structure is now “A and B supporting C.”
  • 4: Each part connects with other parts to create yet another structure
    • It’s hard to achieve this kind of development with linear writing
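The contrast in steps 2–4 between a linear document and a network of linked fragments can be sketched as a tiny graph of pages. This is only an illustrative sketch; the `Fragment` class and its fields are my assumptions, not anything from the original note.

```python
# Illustrative sketch (not from the original note): notes as a graph of
# small fragments with links, instead of one linear document.

from dataclasses import dataclass, field

@dataclass
class Fragment:
    title: str
    body: str
    links: set = field(default_factory=set)  # titles of related fragments

# The structure "A and B supporting C" from the diagram:
a = Fragment("A", "one supporting idea", links={"C"})
b = Fragment("B", "another supporting idea", links={"C"})
c = Fragment("C", "the conclusion they support")

pages = {f.title: f for f in (a, b, c)}

# Follow links backwards: which fragments support C?
supporters = sorted(f.title for f in pages.values() if "C" in f.links)
print(supporters)  # ['A', 'B']
```

Because the relationships live in the links rather than in the order of the text, "A and B supporting C" needs no particular linear arrangement of A, B, and C.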

relevance
  • Scrapbox and the use of knowledge representation and permalinks in books
    • > Nishio’s Scrapbox has a small granularity of entries, like a fragmented scrapbook of notes, but it’s easy to absorb knowledge.
  • give rise to

  • @nishio: Creating knowledge packages for an LLM is actually easier than creating documents for human readers. Because humans have limited short-term memory, the author has to present knowledge in a “good order”, which is very hard; when giving it to an LLM, the order is irrelevant, so just write whatever comes to mind!
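One way to read “the order is irrelevant” is retrieval-style use: fragments are written down in whatever order, and fetched by relevance to a question at answer time. Below is a minimal sketch under my own assumptions, using a crude word-overlap score as a stand-in for real embedding similarity; it is not Nishio’s implementation.

```python
# Minimal retrieval sketch (assumption: word overlap stands in for
# embedding similarity). Each fragment is scored independently against
# the question, so the order fragments were written in never matters.

def score(question: str, fragment: str) -> int:
    q = set(question.lower().split())
    f = set(fragment.lower().split())
    return len(q & f)  # number of shared words

fragments = [  # written "in whatever order comes to mind"
    "Scrapbox pages have small granularity",
    "short-term memory limits human readers",
    "LLM context can take fragments in any order",
]

question = "why does fragment order not matter for an LLM"
best = max(fragments, key=lambda f: score(question, f))
print(best)  # the fragment mentioning LLM and order
```

Shuffling `fragments` leaves `best` unchanged, which is the point: the author never has to find the “good order” that human short-term memory demands.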


This page is auto-translated from /nishio/ScrapboxとLLMの相性の良さ using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I’m very happy to spread my thoughts to non-Japanese readers.