1: Current Topics: Azure Cognitive Search

  • 9/18 Azure Cognitive Search: Outperforming vector search with hybrid retrieval and ranking capabilities
    ‱ image
    ‱ The point: combining both and then reranking (≈ reordering the results after inspecting them) is far better than the common “keyword only” search or vector-only search
  • It’s been known for a while that “it’s better to combine and rerank.”
  ‱ Vector search is good for vague natural-language queries, but it puts a high cognitive load on users, because a query like “the name of the company I visited on a sales call” or “a product model number” also hits many superficially similar but different things
    ‱ I had been thinking that groupware would need to solve this problem before adopting vector search
    • Azure Cognitive Search
      • 7/18 Public Preview begins
      • I’m thinking of trying it soon.
    ‱ Looking at the configuration, BM25 + HNSW + rerank is the orthodox, “royal road” setup
  • What is “search?”
    • What is the value you are creating for your customers?
    • The “search UI” we provide to users now is not the best way to deliver that customer value.
    ‱ Reranker ≈ “a little guy who reads 200 search results and sorts them into a good order”
    ‱ Fusion-in-Decoder ≈ “a little guy who reads 100 search results and writes sentences”
    ‱ We are entering an era in which these things are becoming available
  ‱ Especially in the reranker stage, a style that takes into account signals other than “text explicitly entered by the user,” such as operations on the groupware immediately beforehand, could lead to differentiation
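As a concrete illustration of "combine and rerank": hybrid retrieval typically merges the keyword (BM25) ranking and the vector (HNSW) ranking with Reciprocal Rank Fusion before any reranker runs. The sketch below is a toy illustration under that assumption, not Azure Cognitive Search's actual implementation; the document ids are made up.

```python
def rrf_merge(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists of doc ids.

    A document's fused score is sum(1 / (k + rank)) over every list it
    appears in; k=60 is the constant commonly cited in the literature."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy rankings for one query: BM25 (keyword) and HNSW (vector) each
# return their own ordering of document ids.
bm25_ranking = ["doc_a", "doc_b", "doc_c"]
vector_ranking = ["doc_c", "doc_a", "doc_d"]

fused = rrf_merge([bm25_ranking, vector_ranking])
# doc_a comes first: it sits near the top of both lists.
```

A cross-encoder reranker would then rescore only the top of this fused list, which is what makes a "reads a couple hundred results" reranker economical.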

2: Vector Search - Vector Search in Nishio - Vector search for this Scrapbox project, published 2023-06-05

  ‱ For me the usefulness of vector search is empirically obvious, but perhaps I should show examples for those who have not yet experienced vector search
    • Cases where vector searches were useful

    • Case 1

      ‱ This one is about “social security spending”
      ‱ I recalled that in a past lecture I had talked about how large social security spending is
        ‱ The focus was its ratio to investment in science and technology
      ‱ I tried to find it, but searching for “social security payments” turned up nothing
        ‱ Searching Vector Search in Nishio for “social security funding scientific research” got a hit
        • image
        • The expression was “38 trillion for medical care, 24 trillion for welfare and others, and 120 trillion in total.”
          • There’s also “Science and Technology” a little further down the road.
      • I made an excerpt page with “Social Security Expenses” in the title. - Comparison of Social Security Expenditures and Science and Technology Expenditures
    • Case 2

      • I’m trying to come up with a story about a hole in the wall, and I’m searching for wall or hole, but I can’t find it.
      • Vector search for “hole in the wall, you have to get close to see it.”
      • Second candidate found.
      • image
        ‱ [The fact that there is a wall ahead is not a reason not to proceed. https://scrapbox.io/nishio/%E9%80%B2%E3%82%80%E5%85%88%E3%81%AB%E5%A3%81%E3%81%8C%E3%81%82%E3%82%8B%E3%81%93%E3%81%A8%E3%81%AF%E9%80%B2%E3%81%BE%E3%81%AA%E3%81%84%E7%90%86%E7%94%B1%E3%81%AB%E3%81%AF%E3%81%AA%E3%82%89%E3%81%AA%E3%81%84]
          • It’s a match between “you can’t see it unless you get closer” and “you could see it if you got closer.”
        • The nuance of not being able to see the hole in the wall unless you get close to the wall.
      ‱ The hole in the wall appeared in a picture, but not in text
        • I’ll add a note.
      ‱ human.iconQ: The search is not looking at pictures, is it? (confirmation)
        ‱ nishio.iconA: Right, it doesn’t look at them.
    • Case 3

      • Vector search on “unfeasible ideas seem original”.
      • The one I was looking for was the fifth hit.
        • image
        • A layman’s idea may appear original in an oral presentation, but it is not feasible.

        ‱ This paper compares amateurs who have never used Mindstorms with robot contest winners and runners-up (hereafter referred to as “experts”) on the task “Build a robot that makes creative progress with Lego Mindstorms.” There was no difference in originality between the experts and the amateurs, but most of the amateurs failed to realize their ideas.

        ‱ A keyword search would not have hit because of the variation between “original” and “originality,” and between the noun “unfeasible” and the verb phrase “failed to realize”
        ‱ When we try to specify the search target with keywords, we tend to use noun-form keywords such as “unfeasible,” but in the actual text the idea is sometimes spelled out in verb form
      ‱ The candidates ranked above it also contained a lot of highly relevant content
        • The pages of this experiment are episodic, which would make them memorable and easily recalled.
      ‱ I created a new page titled “Unfeasible ideas seem original”
    • Case 4

      • Vector search on “kintone talks around data”
      ‱ No. 1:
        • I think the ideal situation would be to have the same collaborative editing capability for database schema and customization JavaScript.

      ‱ No. 4:
        • Reforming Municipal Operations with Digital Solutions for Public Utilities

          • Both municipalities used Cybozu’s “kintone” as a digital solution. kintone allows users to build business applications without programming knowledge, as long as they have a working knowledge of Excel. It allows you to integrate email, Excel, and disparate information and create a unique business environment with your own ingenuity.

          • I see - I was wondering how to explain kintone, but you explain it as “email and Excel integration”.

          • I like the term “email and Excel integration.”
    • Case 5

      • “Bias to measure only what is easy to measure.”
        • While cost is easy to measure numerically, quality is harder to measure and therefore easier to underestimate.

    • human.iconQ: If I do a vector search, will everyone get better at searching?

      ‱ nishio.iconA: If you already know the keywords, searching by keywords is better. However good vector search’s features are, it cannot “read your mind,” so the searcher still needs linguistic ability
    ‱ human.iconThis kind of story feels magical when you only hear the success cases, but you have to be careful (and self-aware), because it is just as often (or overwhelmingly more often) the case that it did not actually go well.

    • Looking Back

      • It’s now possible to search by “this is what I meant” instead of “I must have written this ‘string’.”
      • Nishio himself is adapting to the vector search system.
        • Early cases search by fragments, as in “social security funding scientific research” and “hole in the wall, you can’t see it unless you get up close.”
        • As I get used to it, I’m searching for “short sentences that express meaning” like “unfeasible ideas seem original,” “kintone is a conversation around data,” and “bias to measure only what is easy to measure.”
      ‱ The other day, when I had a user try this vector search while screen-sharing on Zoom, they typed, “Tell me about your experience of cooking for yourself,” and I thought, “Ah, they don’t understand how to use this at all.”
        ‱ They don’t know the difference between vector search and an instruction-tuned LLM.
        ‱ Maybe most people in the world don’t know the difference.
        ‱ A base language model does not follow instructions; only models that have undergone additional training (instruction tuning) to “treat user input as instructions and follow them” do.
        ‱ People type that kind of input just because ChatGPT accepts it.
        ‱ human.iconI see, so it is different from ChatGPT

      ‱ The user then typed “description about cooking” and found my terrible cooking story, which is good, right?
        ‱ This shows little of vector search’s strength.
          ‱ The “meaning” in the query is effectively the single word “cooking,” so it is no different from a keyword search.
        ‱ Now when I search for “I made a terrible dish,” lots of terrible past dishes come up (lol)
  • human.iconQ: Does vector search eliminate the need to add search keywords to absorb distortions and make it easier to get a hit?
    ‱ nishio.iconA: That becomes unnecessary. But vector search alone is poor at exact keyword matches, so you end up needing a combined approach with keyword search, like Azure Cognitive Search.
    • human.iconQ: When combined with a keyword search, are the search queries different from those for a vector search?
    • human.iconQ: I’d like to see you handle polysemy as well (is it too hard with the current vector search?).
      • nishio.iconA: I think that the idea of “polysemy” itself is stuck in the “old image of searching by word”. Nowadays, it is possible to create an embedding vector from an entire sentence, so you can write a sentence so that the meaning of the sentence is unique.
  • human.iconQ: Is there both an effect of finding what you want to find but can’t find with keywords alone, and a serendipitous, unexpected encounter with information?
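For readers who have never seen one, the core mechanism behind all of these cases is small: embed the query sentence, then rank stored fragments by cosine similarity of their vectors. The sketch below uses tiny made-up vectors in place of a real embedding model, so the fragments and numbers are illustrative only.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy index: fragment text -> pretend embedding vector.
index = {
    "social security totals 120 trillion yen": [0.9, 0.1, 0.0],
    "ratio of science and technology investment": [0.7, 0.6, 0.1],
    "my terrible cooking story": [0.0, 0.2, 0.9],
}

# Pretend embedding of "social security funding scientific research".
query_vec = [0.8, 0.3, 0.0]
ranked = sorted(index, key=lambda frag: cosine(index[frag], query_vec),
                reverse=True)
```

This is why noun/verb variation ("unfeasible" vs "failed to realize") stops mattering: similarity is computed in the embedding space, not over surface strings.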

3: Omoikane Embed - Omoikane Project 6/4~

  ‱ from [/omoikane/Omoikane Embed](https://scrapbox.io/omoikane/Omoikane%20Embed)
    ‱ Mechanism for creating the vector index for [Omoikane Vector Search]
      ‱ A GitHub Action runs at 6:00 a.m. Japan time
      ‱ Automatically exports JSON from Scrapbox
      ‱ Chops the text into 500-token chunks and creates a vector embedding for each
      ‱ Uploads them to [Qdrant]
    • Began operation on 6/9, now running daily.
    • I started writing reports in Scrapbox on 7/29.
    • 8/9 Organized code to make it easier to put into other projects.
      • People actually showed up to try it.
      ‱ I deployed it to this project, /nishio
        • This is omoikane-embed-nishio, or omni for short.
      ‱ In the two months since, we’ve made a lot of improvements to this omni.
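The pipeline above (export JSON, chop into ~500-token chunks, embed, upload to Qdrant) hinges on the chunking step. Below is a minimal sketch of that step only, with whitespace-separated words standing in for real tokens (the actual system would count tokens with the embedding model's tokenizer); the embedding and Qdrant-upsert calls are deliberately left out since they need credentials.

```python
def chunk_by_budget(lines, budget=500):
    """Greedily pack consecutive lines into chunks of at most `budget`
    tokens, approximating a token as a whitespace-separated word."""
    chunks, current, used = [], [], 0
    for line in lines:
        n = len(line.split())
        if current and used + n > budget:
            chunks.append("\n".join(current))
            current, used = [], 0
        current.append(line)
        used += n
    if current:
        chunks.append("\n".join(current))
    return chunks

# Three exported page bodies of 300, 300, and 100 "tokens".
pages = ["word " * 300, "word " * 300, "word " * 100]
chunks = chunk_by_budget(pages, budget=500)
# Each resulting chunk would then be embedded and upserted to Qdrant
# with its source page recorded in the point's payload.
```

Note the greedy rule never splits a single line, so a line longer than the budget becomes a chunk of its own.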

4: Intelligent collaboration between AI and humans

  • 8/16 Intelligent collaboration between AI and humans
  • 8/21 No human has to pull the trigger.
    • AI’s once-a-day note-with-comments communication style has the advantage of “no human triggers to pull”.
      • Even when humans are too busy to write comments, AI goes ahead on its own.
      • Even if you write some comments, there is no “send” action.
        • I may come up with something after that and add more.
        • No need for human decision-making on when is “the end” or “completion”.
      • Not sure if it would be better without the “send” action.
        • Personally, I think it’s better to have it, and can be accomplished by running scripts in the development environment when you’re in front of a PC.
        ‱ Sometimes, e.g. when I’m on the train to a camp site, I do want to trigger it from my phone.
  • 8/21 I want to fork a page.
    • →8/21 multi-head / page memory
    • image from page memory
    • AI created a style of adding to the same page instead of creating a new page.
      • After that, the style of creating new pages was DISABLED (8/31 main branch stopped).
    • This allowed “multiple topics” to develop in parallel
      • Write a note that a human thinks “I’d like to develop this topic” as a seed and treat it as an AI note (with đŸ€–đŸ” in the title in the implementation at the time).
      • → Multi-head thinking
  ‱ To discuss if time permits (it didn’t)

5: Extending the Red Link with AI

  • 8/30 Extending the Red Link with AI
  ‱ image from Extending the Red Link with AI: a useful, originally unintended use case of the “AI writes research notes daily” system
  ‱ History: if you create a red link (a link to a page that does not exist yet) and then point at the red link, the page is now created from the results of a vector search on the link title
    • This is useful
      • What kind of experience does it bring?
        • Make a link to “I think I’ve written something like this before.”
        • Sometimes assisted here by Scrapbox’s link suggestions
          • For those not familiar with Scrapbox: Scrapbox runs an ambiguous search when creating a link and suggests a link destination.
          • If not, you had to use search and other methods to find them.
          • We can leave that to AI.
    • Further Development
      • When there is a good “long phrase” in the AI’s generation or in the past descriptions that the AI has uncovered, make it a link, like a marker.
      • Naturally, it would be a red link.
  • Specific example 1 - 🌀Exchange Form D, omni “reaffirmed the importance of exchange in the problem-solving process.”
    ‱ I didn’t know what “the importance of exchange in the problem-solving process” meant, so I made it a red link and extended it.
    ‱ When omni said, “Problem solving bridges the gap between the ideal and the current situation, and the exchange of information in the process is important,” I understood: “I see, by exchange you mean exchange of information.”
    ‱ I wrote about this realization in [Information exchange is exchange]
      ‱ (The meaning of this realization probably doesn’t come across, but that is a case study for later, so I won’t explain it here.)
    ‱ After a while, the question “[Is the exchange form of knowledge exchange Form A?]” arose
    ‱ I extended “Public as the object of the gift,” which was created there, as a red link, and it gave me many examples.
  • It can be called “search with description”.
    • Search with explanation of why it was shortlisted.
    • Instead of a human directly reading the search results, the AI reads them first and writes a description.
    • 9/2 Difference in skin feel between vector search and RAG
      ‱ kazunori_279 Instead of messing around with fine-tuning and RAG, couldn’t we simply do a vector search with embeddings and display the results? I thought, “Why not?”

      ‱ nishio I have always been of the “just do a vector search” school of thought, but I feel that RAG is better when used as an assistant for intellectual production, not just as a question-answering tool. The generation part functions as “summarizing the search results according to the purpose.” This assumes the purpose is given separately from the query.

      • For a long time, I used “Nishio’s Vector Search” and thought, “Isn’t this just fine?” but now that “Extension of red link by AI” is available, I think “this one is better”.
    • I find it different and more useful than a simple vector search.
  • It could be called “decoupling search and action.”
    ‱ Until now, a “search” didn’t involve updating anything, so unless I updated my Scrapbox with the search results, it was as if the search never happened!
      ‱ It implicitly required that “humans read the results and act on them before they forget they searched.”
      ‱ Search and action were coupled.
    ‱ With “AI extends the red link,” a human writes the “intention of the search” and the AI does the “searching, reading, and writing up a bit of explanation,” so human action after the search is optional.
  • 9/4 Pioneer Mode
    ‱ Pioneer Mode is a development of “Extending the Red Link with AI”

    • If you put a link on the â€œâœïžđŸ€–â€ page, the AI will check it periodically and generate it automatically.

    • I can now use it from my phone.
  • human.iconQ: Is it possible to distinguish between the page that AI extended the red link and the page that I created? If not, do you feel comfortable mixing them?
    ‱ nishio.iconA: I did at first, but I stopped. On the question “Which parts are AI and which parts are human?” I had written: > When two people get along well, have a lively discussion, and their comments influence each other, it is difficult to discern which parts of the resulting material originate from which person. > The same thing happens with humans and AI. Trying to identify this, or to make it identifiable, can be detrimental to designing a UI.
      • Especially on Scrapbox, the action of “read a page, and if the thought inspires you, write on that page” is afforded.
        ‱ Then, if a human sees an “AI-generated page” and posts to it, it would be strange for the human’s post not to be picked up just because the page was originally AI-generated, so the distinction between AI-generated and non-AI-generated pages was abandoned.
      • For example, it is possible to know who updated each line, so if you create a dedicated account for the AI, it is possible to “ignore AI-generated content and use only human-written content”.
        • As to why I haven’t done it, I don’t feel the need to do it

        • I feel like I might discover something new if I try it, but I have a lot of other things I want to try, so it’s not a priority and hasn’t been started.
      • On the other hand, I also explicitly think it’s a bad idea to write the vector search results directly into Scrapbox as AI output.
        • It’s OK up to the point where a human reads it, but once it’s in the search results again, there’s a feeling that the value of the “title to content mapping” is spoiled.
      • from Why I stopped putting đŸ€– in the title of AI generated pages.
        • omni.iconWe discussed removing the đŸ€– mark from the title of AI-generated pages. Initially, this was to prevent AI from reading AI-generated material, but during operation, it was felt that it was natural for humans to respond to AI-written pages. However, we questioned the fact that the content was not subject to being read by the AI. We also discussed the lack of assumption of a collaborative editing forum.
        • That was a good summary.
        • This one in the continuation is indeed a problem.
        ‱ omni.iconThis note relates to a fragment of Nishio’s research notes, “It’s buried at the bottom of the AI page.” The idea that it is natural for humans to respond to AI-generated pages and the problem of human thoughts being buried at the bottom of AI-generated pages are closely related in terms of information transfer between AI and humans.
        • This is how we’re dealing with that issue right now.
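Mechanically, "extending a red link" is RAG glue: vector-search the link title, then hand the top fragments to an LLM with an instruction to draft the page and explain why each fragment is relevant. The prompt builder below is a hypothetical sketch of that glue, not omni's actual code; the search call and the LLM call are left abstract, and all names are my own.

```python
def build_page_prompt(link_title, fragments, max_fragments=5):
    """Assemble a RAG-style prompt asking an LLM to draft a page for a
    red link, grounded in vector-search hits."""
    parts = [
        f"Write a research note for the page titled: {link_title}",
        "Base it on these fragments retrieved from past notes:",
    ]
    for i, frag in enumerate(fragments[:max_fragments], start=1):
        parts.append(f"[{i}] {frag}")
    parts.append("For each fragment you use, explain why it is relevant.")
    return "\n".join(parts)

prompt = build_page_prompt(
    "Importance of exchange in the problem-solving process",
    ["Problem solving bridges the gap between the ideal and the current situation.",
     "Information exchange is exchange."],
)
```

The last instruction is what turns plain retrieval into "search with description": the model must justify each shortlisted fragment instead of merely listing it.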

6: Raw ChatGPT and omni use cases are different.

  • 9/6 I realized I was unconsciously using ChatGPT and omni differently.
  • So there is a difference in the utility that the two provide, what is the difference?
  • from Raw ChatGPT and omni use cases are different.
    ‱ Since most texts in the world are written in “expressions that many people can read and understand,” whereas my research notes are written in “expressions that I can understand,” an AI that does RAG over the latter accelerates my personal thinking far more efficiently than ChatGPT
    • In my research notes I don’t write explanations for words I know, so the AI that reads them doesn’t write explanations for what I know either. Concepts are tools for the economy of thought. So it is more efficient to use them without explanation in one’s thinking.
    • It is useful to use ChatGPT when explaining it to others
  • image iceberg model
  • Case 1
    ‱ omni.icon“To improve knowledge productivity, we need to understand the importance of collaboration and the importance of sharing tacit knowledge prior to planning.”
      ‱ I can tell that this “collaboration” refers to Ikujiro Nonaka’s SECI model, so I can read it while compensating for the slightly awkward phrasing.
    • nishio.iconPlease put in terms that the average person can understand, “In order to improve knowledge productivity, we need to understand the importance of collaboration and the importance of sharing tacit knowledge prior to planning.”
    ‱ gpt.icon“To use knowledge effectively to get things done, it’s important for everyone to work together, to share information, and for everyone to share their own experiences and tips before the actual work begins.”
      • Most people will find it easier to understand.
      ‱ From my subjective point of view, it is clearly degraded. Maybe GPT-4 doesn’t understand this “collaboration” correctly.
    • PS: I’d like to explain this a little more.
      • In Ikujiro Nonaka’s SECI model, “collaboration” refers to the sharing of “tacit knowledge” that is difficult to verbalize through “sharing the same experience,” such as through joint work, without the use of language.
        • So I get the feeling that you “don’t get it at all” when you describe it as “sharing information, experience, and tips”.
      ‱ In light of this, the statement can be interpreted as “before making a linguistic plan, it is necessary to first share a nonverbal experience through collaboration.”
      • And in the context in which this statement came up, we were talking about increasing intellectual productivity through AI.
      • Before any linguistic planning can be done on how to improve intellectual productivity with AI, it must first be collaborative.
      • This is the first step in the process. One form of this is to let AI live in the place where I usually do my intellectual production by creating a system that allows AI to work on its own, instead of humans giving it instructions every time and AI responding to them.
  • Example 2 (“Information exchange is an exchange” as you just said)
    • I found “information exchange is exchange” to be an important realization, but I think most people probably don’t know what’s so interesting about it.
    ‱ When I see the word “exchange,” I connect it to the context of Kojin Karatani’s modes of exchange.
    • Very rough description.
      • Exchange Form A: Giving a Gift in Return
      • Exchange Form B: Obedience and Protection
      • Exchange Form C: Pay the price and get the goods
    ‱ “Information exchange is exchange” is the realization that, since information is also a good, information exchange can be considered within the framework of Kojin Karatani’s theory of exchange.
  • Supplemental (if time permits)
    ‱ Technically, I think that loading the prompt heavily via RAG removes the “chat-ness” and brings it closer to a “summarization task,” which weakens the RLHF-induced pressure toward expressions palatable to the general public.
  ‱ human.iconI feel that writing style is a major factor in comprehension speed; my impression is that an easier-to-read sentence is generated when I give GPT my own sentences to work from, rather than just having GPT generate a sentence for me.
    • nishio.iconIndeed, it may be that OMNI is easier for me to read because it speaks in my style. In the end, it may be that AI assistants are more productive for each individual if they are personalized to the individual.

7: Thoughts on using a private OMNI

  • 9/20 I started to create Private omni because I found the omni I was running on /nishio publicly useful enough.
  • 9/27 Thoughts on using a private OMNI
  • Vector search alone is pretty interesting.
    • If you imagine “a system in which books pop up and related pages open when you talk in front of a bookshelf,” you will understand how interesting it would be.
    • If someone else’s Scrapbox is hit, I read it interestingly, “Mr. X, I didn’t know you wrote about this,” which may lead to subsequent communication.
    • Difference in sense of public and private omni
    • I’m still using it differently.
      • So you must feel different utility, what is it?
    ‱ The feeling differs depending on whether the data is primarily self-derived or not
    ‱ When it is self-derived, it is as if the AI and the human drive the thinking as a unified entity.
      ‱ It feels like “I’m accelerating.”
      ‱ It feels like “seeing things from a different perspective.”
    ‱ When derived from others, the feeling is “Oh, so this is what Mr. X said about this subject
”
      ‱ It feels like “I found someone else’s statement.”
    • case
      • nishio.iconShould scientists who do basic research operate on their own dime, or should the government invest in them as public infrastructure that is underinvested if left to the market?
      ‱ omni.iconThe fragment from “Antifragile (vol. 1)” suggests the idea that government investment should be directed toward non-objective activities, rather than research in general.
      • nishio.iconOh, I don’t know if that’s what you’re talking about, I’ll read it.
      • What the government should be spending money on is non-objective tinkering, not research --- Anti-Fragility top, p. 375

    • Fragments derived from oneself are reconstructed once chewed up inside oneself, so there is a sense of smooth connection. Fragments derived from others still have a hard surface.
      • However, it may be possible in the future for LLMs to do the “chewing and reconstructing”, “melting together”, and “smoothly connecting” themselves by fine-tuning with their own origin data.
    • Fragments of different viewpoints originating from oneself drive dialectic development more strongly than fragments of different viewpoints originating from others
      ‱ Is it because I am lenient enough to let things of others’ origin slide as “well, that’s one way of thinking,” even when they differ from my current opinion?
      ‱ An escape route of “it’s normal for others to have different opinions”?
      ‱ When a self-derived opinion differs from my current opinion, since both are me, does that strongly trigger the question, “Why do I hold different opinions?”
  ‱ human.iconQ: By “different viewpoints derived from yourself,” do you mean things you would be likely to think?
    ‱ nishio.iconA: It is closer to “what I thought in the past”
  ‱ human.iconQ: I wonder whether this is worthwhile for me, since I keep the “books I’ve read once.” Or is it useful when a phrase from a book I’ve never read hits in a vector search?
    • nishio.iconA: Once you read a book, you don’t remember every detail.
      • I started using vector search and it unearths things like blog posts I wrote 8 years ago, but I wrote them myself and I don’t remember them.
      • Even a book you read hits you unexpectedly and you say, “Whoa, did I read that in this book? Oh, it sure does! I didn’t pay much attention to it when I read it before!” It’s like that.
      • It is said that a good book should be read many times, but in essence, it is an attribute on the part of the reader to be able to accept what is written.
      • In that sense, it would be a great reading experience to come across relevant pages of books related to “what I am interested in now” and read them.
  • human.iconQ: Is the identification of whether the source is from you or someone else based on whether it is a public or private Omni? Or is it based on the content?
    • nishio.iconA: In that sense, both are the same now that the private Omni does not contain self-derived data.
      • I like the idea of experimenting with mixing the two.

8: Vector search is an opportunity to cut out

  • image
  • 9/2 Hitting part of a series that has not been carved out provides an opportunity for carving out
  ‱ A beneficial pattern that has occurred countless times
    ‱ Diaries, chat logs, lecture materials, transcripts of conversations, and other “time-lined descriptions”
    • AI searches on a topic and hits that “in the middle of the sequence” and mentions it there.
    • The person who sees it cuts out that part of the page and creates a new page.
    • They did it with the vector search story.
      • I made an excerpt page with “Social Security Expenses” in the title.

    • Topic-oriented cutouts from time-aligned descriptions
    ‱ The balance between time-based and topic-oriented organization is an important issue in organizing and interpreting information. Organizing information along a time axis is suited to re-experiencing the flow of information, but when pursuing a specific topic or theme, it may be necessary to break the chronological structure in order to build a new topic-oriented structure. Topic-oriented organization, on the other hand, makes it possible to organize information based on a specific theme or context. Finding the right balance between the two promotes efficient organization and understanding of information. omni.icon

  ‱ Difficult to do in advance: the appropriate way to cut something out is determined only after the need is identified.
  • Explanation of the concept of “cutting” as many people may be unfamiliar with it.
    • In the realm of personal knowledge management, there is a belief that a short page on a single topic is preferable
    • So the act of extracting a “single topic” from a “page with multiple topics mixed in” is performed.
    • In Scrapbox, this can be done by selecting multiple lines and choosing New Page from the balloon menu.
      ‱ This is commonly known as “cutting out”: as the name suggests, you “cut” lines out, create a new page, and link the pages to each other
        ‱ Whether “cut” or “copy” is better is debatable; I don’t always think cut is better either, since there is value in keeping lecture materials readable straight through
          ‱ If it were something like meeting minutes stored in groupware, users would be resistant to editing them
          • Scrapbox’s philosophy of “Not a warehouse for dead text.” would say, “Don’t use it that way.”
  ‱ Behavior that follows from this being a vector search over fragments chopped into chunks
    • I’m chopping at 500 tokens right now, which is about the size of a page in a book.
    • With chat logs, it’s not an entire thread or one specific person’s statement, but a unit of “several exchanges”.
    • The search returns “the most dense mentions of the topic” in a long conversation.
    • A human discards the unwanted parts of that chunk, picks the relevant ones around it, or writes a new opinion inspired by it and creates a new page, which is beneficial.
  • Scrapbox is designed to afford cutouts, which is beneficial when combined with LLM
  • human.iconCutting it out could be a problem because you lose the context of the timeline.
    • nishio.iconYes
  ‱ human.iconI see what you mean.
    • nishio.iconThe author’s philosophy, yes, but I personally think “let’s put everything in” is better.
  • human.iconMight it actually be beneficial to keep a stack of books in it?
    ‱ nishio.iconBooks piled up don’t speak to you, so it’s better to put them into this system, right? Although, since they’d lose their physical immediacy, it would eliminate the chance to “just happen to pick one up.”

9: Vector search serves as a tool for cognitive resolution.

  • image
  • from 10/3 Teaching LLM to Knowledge Brewing in Scrapbox
  ‱ Vector search works as a “mechanism to scrape together fragments of similar topics.”
  ‱ human.iconSo you’re replacing “dialogue with your past self” with “dialogue with a differently-seeded self”: 


    • nishio.iconThat’s one thing, but I also think that what we think is influenced by “the situation we were in at the time,” as I look at past articles that came up in a vector search.
      • I imagine that even at this moment, writing in this Scrapbox, in another Scrapbox, on a social networking service, or in the company’s groupware produces a slightly different intellectual output than writing in another Scrapbox.
      • By bundling them together again
  ‱ human.iconMr. Nishio is experiencing self-expansion! How much does it feel like, 1.3x or 2.4x?
    ‱ nishio.iconMaybe 1.2x; not over 2.0 (lol)

Below are my notes before writing. The following is a summary of the entered information: gpt.icon

  • 7/29: AI writes to Scrapbox, introducing the concept of Scrapbox Agents.
  • Early to mid-August: Omoikane study group, discussion on working with AI, writing research notes with AI, evolution of vector search, and topics on handling AI-generated pages.
  • Mid to end of August: topics related to updating and search, taking into account multi-head, page memory, and user cognitive load.
  • Beginning of September: Explore AI-user interaction, note management, and relevance between different content.
  • Mid-September: various topics related to multi-head thinking, the intellectual production techniques of engineers, and working with AI.
  • Late September: LLM vs. other models, discussion on human concepts, optimal use of Scrapbox, and feedback on non-public tools. Overall, this period seems to focus on Scrapbox and AI integration, particularly the concepts of vector search and multi-heading, and AI thinking and interaction with the user.

Jul 29 AI writes to Scrapbox - Agents living in Scrapbox. 8/4 Omoikane Study Group

8/11 [/omoikane/ consulted GPT4 on how to proceed after this 2023-08-11](https://scrapbox.io/omoikane/ consulted GPT4 on how to proceed after this 2023-08-11).

8/12 - Co-operation with AI

8/12 AI writes research notes daily 8/16 Stacking Vector Search Results 8/16 overwrite mode. 8/18 Ignore AI-generated pages from vector search 8/18 A case study of a new combination with the assistance of AI 8/21 multi-head - page memory 8/23 Ignore used pages from all history

8/25 Update page limit 8/26 Pinning effect on the topic

8/29 Lift Ignore from Vector Search on AI-generated pages 8/30 Extending the Red Link with AI 8/31 Main branch stopped 8/31 Questions encourage verbalization, but there are different kinds of questions. 8/31 Introduction to ENCHI 8/31 Clarification of AI’s role is important. 8/31 Update Interval of AI Notes 8/31 A case study of SF prototyping for a junior high school student’s work experience /villagepump/mtane0412.icon 9/1 Page as a fluid process 9/1 Trade-off between speculation and development # distress 9/1 What is the role of AI in this project? 9/1 Is there an AI with multiple personalities? 9/1 Why not specify the purpose of each page of the AI note? 9/1 It’s buried at the bottom of the AI page. # Cause of distress 9/1 Discovering connections between different content 9/1 Summon other people’s AI to your diary /villagepump/inajob.icon 9/2 Difference in skin feel between vector search and RAG 9/2 Hitting part of a series that has not been carved out provides an opportunity for carving out +1 9/2 AI can’t rest because it will develop endless thoughts. # Cause of distress 9/2 The realization that you can read the URLs of other projects

9/2 Multi-head thinking epoch 9/2 Pages that sometimes emerge

9/3 - “Multi-Head Thinking” and “The Engineer’s Art of Intellectual Production” - Small convergent moves and divergence from them - Write the summary above.

9/4 - [[BELOW_IS_LESS_INTERESTING to BELOW_IS_AI_GENERATED.]]

Iterative Commenter Pioneer mode

~

9/11 LLM Course by Matsuo Lab

9/12

9/13

9/15

9/16

9/18

9/20 Integration in private project

2023/9/22

2023-09-24

2023-09-29


This page is auto-translated from /nishio/LLMă«ă‚ˆă‚‹çŸ„çš„ç”Ÿç”Łæ€§ć‘äžŠć‹‰ćŒ·äŒš using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I’m very happy to spread my thoughts to non-Japanese readers.