Kozaneba

from Diary 2023-11-01: My current impressions and thoughts on omni

  • "Pages that occasionally resurface" are good.
    • The form of expression is problematic.
    • Currently I'm "keeping them all" as an experiment, but it's clearly not good.
    • In the end, only the most recent generation gets read, so the earlier ones aren't needed.
      • If you haven't read the earlier parts, you may want to go back and read them.
      • But they don't all have to be unfolded from the beginning.
    • The gap in update frequency is too large.
      • And it's not good for that change to be triggered by a title change.
        • Not that there's anything wrong with that per se; the problem is treating a page whose title changed as a "newly added page".
        • Solving this within Scrapbox seems a bit cumbersome.
  • Iterative Commenter
  • About the original data
    • There's a big difference between having 60 books' worth of my Scrapbox articles and 100 books' worth of data derived from real books.
      • What’s the difference?
      • Books assume a one-dimensional context.
        • It implicitly assumes that the reader has read the previous sentence.
        • So "which earlier content the reader is assumed to already know" is never stated explicitly.
      • Scrapbox, on the other hand, makes no assumptions about what the reader has read beforehand.
        • So relevant content is explicitly made into a link.
          • Ideally, the first thing to put on the table is a large one-dimensional item, like an event record or lecture material. (Event articles tend to die.)
    • What should be done?
      • One idea: Leverage memo.
        • Given pages k-1 and k, make a leverage memo for page k (a minimal sketch follows this list).
        • This is a little over 500 tokens of input.
        • Summary is an ambiguous concept
        • It's not a good idea to think of leverage memos as "making summaries".
          • A leverage memo is not a summary; it extracts reusable components.
          • Returning null is often the correct answer.
          • LLMs aren't trained on that kind of dataset.
          • There is no dataset for this purpose; should I make one?
        • Context-Specific Summaries
          • I want to extract the parts where the context matches.
  • Markdown conversion
    • Simple format conversion
      • This is boring.
    • Stylistic Conversion
      • From fragmentary notes to blog post
    • Conversion to English
    • And, as a preliminary step, determine whether a page should be targeted at all?
  • Relation to search hits
    • There are smaller units than the current chunk of 500 tokens.
    • A short Scrapbox article is exactly that unit (Evergreen notes should be atomic).
    • A string of 500 tokens cut out of a book is not.
  • Maybe I'll stop running it on GitHub Actions for now.
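
A minimal sketch of the leverage-memo step above, assuming the OpenAI chat completions API; the prompt wording, the NULL convention, and the model choice are my assumptions, not a settled design:

```python
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You will be given two consecutive pages of a book, k-1 and k. "
    "Using page k-1 only as context, extract from page k any "
    "self-contained, reusable components. If nothing is reusable, "
    "answer exactly NULL."
)

def leverage_memo(page_k_minus_1: str, page_k: str) -> str | None:
    """Make a leverage memo for page k, given page k-1 as context."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user",
             "content": f"page k-1:\n{page_k_minus_1}\n\npage k:\n{page_k}"},
        ],
    )
    text = resp.choices[0].message.content.strip()
    # "Returning null is often the correct answer": an empty extraction
    # is a valid result, not a failure.
    return None if text == "NULL" else text
```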

from Diary 2023-11-02: Current impressions and thoughts on omni

Iterative Commenter is running on GitHub Actions.

  • omni-private is working locally.
    • It’s easier to experiment that way.
    • The code is shared with public omni.
  • Iterative Commenter should be moved to private.

  • āœ… Stopped running on GitHub Actions for now

public /nishio → GitHub

  • Save for now
  • Cleaned up articles are available separately.
  • The wikignome for maintenance should be made with 3.5.

Machine-generated duplicate content is questionable

Ideal

  • About embed
    • Use cache
    • Don't carry over unused cache entries
    • Don't target machine-generated content (a sketch of this cache policy follows this list)
  • About Update
    • Maintain telomeres
    • Or overwrite without logging.
      • If you want to see the past, use History.
  • Search results
    • Should be viewable
    • But shouldn't all be shown from the start.
      • Too much.
      • More DIGEST.
        • Human-made DIGESTs will become data for future FTs.
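
A minimal sketch of the embed-cache policy above; the cache file name, the hashing key, and is_machine_generated are placeholders of mine, not part of omni:

```python
import hashlib
import json
import pathlib

CACHE_FILE = pathlib.Path("embed_cache.json")  # hypothetical location

def build_embeddings(pages, embed_fn, is_machine_generated):
    """Embed pages, reusing cached vectors and dropping stale entries."""
    old = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    new = {}  # only keys touched this run survive: unused cache is not carried over
    for page in pages:
        if is_machine_generated(page):
            continue  # machine-generated content is not embedded
        key = hashlib.sha1(page.encode("utf-8")).hexdigest()
        # reuse the cached vector when the text is unchanged
        new[key] = old.get(key) or embed_fn(page)
    CACHE_FILE.write_text(json.dumps(new))
    return new
```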

Lost in the meadow, which way to go?

Joint activities with AI

  • Not much has been done.
  • With omni-private, you just throw a query, the AI generates, and you read the result.
    • Just a single request-response.
    • And it's happening in Scrapbox, which is ill-suited for this.

AI Reading Notebook

  • Let the AI do the activity of "reading a book".
    • What is reading?
      • Creating a knowledge network.
    • What kind of network is needed?
      • There should be backlinks, maybe.
      • There should be 2-hop links.
        • Why? Because this Scrapbox reminds me of what I've forgotten.
  • Landing it in nishio's context?
    • A system for finding connections between books: if you have it read the 60 books' worth of NISHIO output, the books end up landed in NISHIO's context as a result.
      • That doesn't have to be made explicit.
      • Well, to experience this effect you first need a reasonable amount of your own output, so it's not for the average person at all.
          • It shouldn't aim at the general public first.

Kozaneba
  • I've always used it when I need a form of expression other than linear text.
    • If anything, it's already "functional enough"?
      • Being able to write notes out as sentences would be useful.
      • The issues are browser compatibility rather than functionality, e.g. I want to be able to use it from my iPad.
      • I've been doing the chopping-up of text by hand, thinking that doing it yourself creates better understanding by reading while processing, like copying sutras or active reading.
        • But that may be an unexamined assumption.
        • If an LLM helped, I might find myself saying, "I should have done this sooner!"
  • I've used it for a long time, but it's not the type of tool you use every day.
    • It's a tool to pull out when needed.
  • Should there be an interconnection with Scrapbox?
    • Scrapbox's "link by string" and Kozaneba's non-verbal "link by placement" and "link by line" are complementary.
  • The Scrapbox view isn't very customizable, so another view is needed.

Keichobot

  • A system that encourages verbalization by asking humans questions.
  • It focuses on drawing things out from scratch.
    • So it isn't connected to existing conversations or the existing Scrapbox stock.
    • Of course it might be good if it were.
  • I put the chat logs into Scrapbox.
    • They were chopped into chunks and fed into the vector index.
    • This had merit.
    • Right now it's chopped at 500 tokens, so the granularity is rough; since these exchanges are Q&A, it might be better to index them as QA pairs or similar (a minimal sketch follows this list).
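
A minimal sketch of indexing the logs as QA pairs rather than flat 500-token chunks; the Q:/A: line prefixes are an assumed log format, not Keichobot's actual one:

```python
def qa_pairs(log_lines):
    """Pair each bot question with the human answer that follows it."""
    pairs = []
    question = None
    for line in log_lines:
        if line.startswith("Q:"):  # bot question (assumed prefix)
            question = line[2:].strip()
        elif line.startswith("A:") and question is not None:  # human answer
            pairs.append((question, line[2:].strip()))
            question = None
    return pairs

# Each pair becomes one index entry, so the question and its answer
# stay in the same chunk:
#   for q, a in qa_pairs(lines):
#       index.add(f"Q: {q}\nA: {a}")
```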

How Kozaneba should be

Kozaneba

This is a bit of a leap.

  • Too drawn to the specific idea of "leverage memos".
  • The implementation image is overly concrete.
  • Given a page of a book, if an LLM can find links between it and what has appeared in the story so far, it can build a network structure while reading a one-dimensional book (a minimal sketch follows below).
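
A minimal sketch of that reading loop; find_links is a stand-in of mine for whatever finds related earlier pages, e.g. an LLM call or a nearest-neighbour search over embeddings of the pages read so far:

```python
def read_book_as_network(pages, find_links):
    """Turn a one-dimensional book into a graph while reading it."""
    edges = []
    seen = []  # the story so far, in page order
    for i, page in enumerate(pages):
        # indices of earlier pages this page relates to
        for j in find_links(page, seen):
            edges.append((j, i))
        seen.append(page)
    return edges  # (earlier_page, current_page) links
```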

There is a link to Scrapbox, so there should be a backlink

  • Scrapbox has no way to display it, so a new view is needed

https://kozaneba.netlify.app/#view=lhq9ixTHLRlLh55wFcT7

  • Refine the concept of summary
  • Create reusable components
    • You can return null.
  • Finding connections between context and a given sentence
    • What is context?
      • The pages before the one you're currently reading in the book
      • The book I'm reading now vs. other books
        • Which counts as the context?
      • Ideas and books in Scrapbox

LLM support in Kozaneba

  • Where text gets chopped into pieces
  • Where groups get titled

This page is auto-translated from /nishio/omnić«é–¢ć™ć‚‹ē¾ę™‚ē‚¹ć§ć®ę„Ÿęƒ³ćØę€č€ƒć®ę•“ē† using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.