Kozaneba

from Diary 2023-11-01 My current impressions and thoughts on omni

  • The "occasionally surfacing page" is good - a page that surfaces from time to time.
    • The form of expression is problematic.
    • Currently, I'm "keeping everything" as an experiment, but it's clearly not good.
    • In the end, we only look at the most recent generation, so we don't need the earlier ones.
      • If you haven't read the earlier part, you should read it.
      • You might want to look back and read it, but it doesn't have to be all unfolded from the beginning.
    • The frequency gap is too large.
      • And it is not good that the change is triggered by a change of title.
        • Not that there is anything wrong with that per se; the problem is that a page whose title changed is treated as a "newly added page".
        • Solving this within Scrapbox seems a bit cumbersome.
  • Iterative Commenter
  • About the original data
    • There's a big difference between having 60 books' worth of my Scrapbox articles and 100 books' worth of data derived from real books.
      • What's the difference?
      • Books assume a one-dimensional context.
        • They implicitly assume that the reader has read the preceding sentences.
        • So "what knowledge of what has appeared so far is being assumed" is not stated explicitly.
      • Scrapbox, on the other hand, makes no assumptions about what the reader has read beforehand.
        • So relevant content is explicitly made into a link.
          • Ideally, the first step is to put a large one-dimensional item on the table, like a record of an event or lecture material. Event articles are dying.
    • What should be done?
      • One idea: leverage memos.
        • Given pages k-1 and k, make a leverage memo for page k.
        • This is a little over 500 tokens of input.
        • "Summary" is an ambiguous concept.
        • It is not a good idea to think of leverage memos as "making summaries".
          • A leverage memo is not a summary; it extracts reusable components.
          • Returning null is often the correct answer.
          • LLMs are not trained on that kind of data set.
          • There is no data set for this purpose - do you want to make one?
        • Context-specific summaries
          • I want to extract the parts where the context matches.
  • Markdown conversion
    • Simple format conversion
      • This is boring.
    • Stylistic Conversion
      • From fragmentary notes to blog post
    • Conversion to English
    • And, as a preliminary step, determine whether it should be targeted?
  • Relation to search hits
    • There are smaller units than the current chunk of 500 tokens.
    • Scrapbox's short articles are exactly "it". - Evergreen notes should be atomic.
    • The string of 500 tokens cut from the book is not it.
  • Maybe I'll stop running it on GitHub Actions for now.
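
The "leverage memo" idea above can be sketched as a small function: given page k-1 as context and page k, ask an LLM to extract reusable components, with an explicit null answer allowed. This is only a sketch - the prompt wording and the `call_llm` parameter are hypothetical placeholders, not the actual omni implementation.

```python
# Sketch: leverage memo for page k, given page k-1 as context.
# `call_llm` is a placeholder for a real LLM call (hypothetical).
from typing import Callable, Optional

PROMPT_TEMPLATE = """Previous page (context):
{prev_page}

Current page:
{page}

Extract reusable components from the current page as a short memo.
If there is nothing reusable, answer exactly: NULL."""


def build_prompt(prev_page: str, page: str) -> str:
    return PROMPT_TEMPLATE.format(prev_page=prev_page, page=page)


def make_leverage_memo(
    prev_page: str, page: str, call_llm: Callable[[str], str]
) -> Optional[str]:
    """Return a memo, or None when the model reports nothing reusable."""
    answer = call_llm(build_prompt(prev_page, page)).strip()
    return None if answer == "NULL" else answer
```

Passing a stub function as `call_llm` makes the "returning null is often the correct answer" path easy to test before wiring up a real API.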

from Diary 2023-11-02 Current impressions and thoughts on OMNI

Iterative Commenter is running on GitHub Actions.

  • omni-private is working locally.
    • It's easier to experiment that way.
    • The code is shared with public omni.
  • Iterative Commenter should be moved to private.

  • ✅ Stop running on GitHub Actions for now

public /nishio → GitHub

  • Save for now
  • Cleaned up articles are available separately.
  • The maintenance wikignome should be made with 3.5.

Machine-generated duplicate content is questionable

Ideal

  • About embed
    • Use a cache
    • Do not carry over unused cache entries
    • Do not target machine-generated content
  • About Update
    • Maintain telomeres
    • Or overwrite without logging.
      • If you want to see the past, use History.
  • search results
    • viewable
    • They shouldn't be shown from the beginning.
      • There's too much.
      • Digest them more.
        • Human-made digests will become data for future fine-tuning.
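
The embed wishlist above (use a cache, drop unused entries, skip machine-generated content) could look roughly like this. The `embed` parameter stands in for a real embedding API call, and the machine-generated marker is an assumed convention, not an existing one.

```python
# Sketch: embedding cache keyed by content hash.
# - reuses cached vectors on hit
# - does not carry over cache entries that were unused in this run
# - skips machine-generated content entirely
import hashlib

MACHINE_MARK = "[AI-generated]"  # hypothetical marker for machine-generated text


def content_key(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def embed_all(chunks, old_cache, embed):
    """Embed chunks reusing old_cache; return (vectors, new_cache)."""
    new_cache, vectors = {}, []
    for chunk in chunks:
        if MACHINE_MARK in chunk:  # do not target machine-generated content
            continue
        key = content_key(chunk)
        vec = old_cache.get(key)
        if vec is None:
            vec = embed(chunk)  # cache miss: call the embedding API
        new_cache[key] = vec  # only keys seen this run are carried over
        vectors.append(vec)
    return vectors, new_cache
```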

Lost in the meadow, which way to go?

Joint activities with AI

  • Not much has been done.
  • With omni-private, you just throw a query, the AI generates output, and you read it.
    • Just a single request-response.
    • And it's being done in Scrapbox, which is unsuitable for this.

AI Reading Notebook - Let the AI do the activity of "reading a book"

  • What is reading? - Creating a knowledge network.
    • What kind of network is needed?
      • There should probably be backlinks.
      • There should be 2-hop links.
        • Why? - Because this Scrapbox reminds me of what I've forgotten.
  • Landing it in the context of nishio?
    • A system that finds connections between books; if you let it read 60 books' worth of NISHIO's output, the books get landed in NISHIO's context as a result.
      • You don't have to make that explicit.
      • Well, to experience this effect you must first have a reasonable amount of your own output, so it's not for the average person at all.
        • You shouldn't aim for the general public first.

Kozaneba

  • I have always used it when I need a non-written form of expression.
    • Rather, is it "functional enough"?
      • It would be useful to be able to write notes as sentences.
      • The issues are browser compatibility rather than functionality - e.g. I want to be able to use it from my iPad.
      • The chopping-up of the text has so far been done by humans, and I think doing it yourself has the effect that "better understanding is created by reading while processing", like copying sutras or active reading.
        • But that may just be an assumption.
        • If an LLM helped, I might end up saying "I should have done this sooner!"
  • I do use it, but it's not the type of tool I use every day.
    • Tools to retrieve when needed
  • Should there be an interconnection with Scrapbox?
    • Scrapbox's "link by string" and Kozaneba's non-verbal "link by placement" and "link by line" are complementary
  • Scrapbox side of the view is not very customizable, so another view is needed

Keichobot

  • A system that encourages verbalization by asking humans questions.
  • It focuses on drawing things out from scratch.
    • So it is not connected to existing conversations or the existing Scrapbox stock.
    • Of course, connecting them might turn out to be good.
  • I put the chat logs in Scrapbox.
    • It was chopped into chunks and entered into the vector index.
    • This had merit.
    • Right now it's chopped at 500 tokens, so the granularity is coarse, but since these exchanges are Q&A, it might be good to index them as QA pairs or something.
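
Indexing the chat logs as QA pairs, as suggested above, could be sketched like this. The "Q:"/"A:" line format is an assumption about the transcript shape, not the actual Keichobot export format.

```python
# Sketch: turn a chat transcript into (question, answer) pairs
# so each pair can be indexed as one unit instead of a 500-token chunk.
def to_qa_pairs(lines):
    """Pair each question line with the answer line that follows it."""
    pairs = []
    question = None
    for line in lines:
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:") and question is not None:
            pairs.append((question, line[2:].strip()))
            question = None
    return pairs
```

Each pair can then be embedded as one document, keeping a question and its answer in the same retrieval unit.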

How Kozaneba should be

Kozaneba

(images)

This is a bit of a leap.

  • Too drawn to the specific idea of "leverage memos."
  • The implementation image is overly specific.
  • Given a page of a book, if the LLM can find links between it and related things in the story up to that point, the LLM can create a network structure while reading a one-dimensional book.

(images)

There is a link to Scrapbox, so there should be a backlink

  • Scrapbox does not have the ability to display this, so a new view is needed

(images)

https://kozaneba.netlify.app/#view=lhq9ixTHLRlLh55wFcT7

  • Refine the concept of summary
  • Create reusable components
    • You can return null.
  • Finding connections between context and a given sentence
    • What is context?
      • The pages before the one you are currently reading in the book
      • The book I'm reading now versus other books
        • Which one is the context?
      • Ideas and books in Scrapbox
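
"Finding connections between context and a given sentence" can be reduced to scoring each candidate context chunk against the sentence, for example by cosine similarity of embedding vectors. The vectors below are toy lists; in practice they would come from an embedding model.

```python
# Sketch: pick the context chunk most similar to a given sentence.
import math


def cosine(a, b):
    """Cosine similarity of two vectors; 0.0 for a zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def best_context(sentence_vec, context_vecs):
    """Return (index, score) of the context chunk closest to the sentence."""
    scores = [cosine(sentence_vec, c) for c in context_vecs]
    i = max(range(len(scores)), key=scores.__getitem__)
    return i, scores[i]
```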

LLM support in Kozaneba

  • Where the text gets chopped up
  • Where groups get titled

This page is auto-translated from /nishio/omniに関する現時点での感想と思考の整理 using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.