I’m convinced.
- The order in which the prompts are listed changes the results.
- When you actually try it, you're struck by "how can such a small thing change performance this much?" (A)
- But since there is no rationale you can really accept, doing it feels painful and people don't want to do it.
- Let the machine do the "things we don't want to do" (a sketch of what that could look like follows this list).
- So it doesn't resonate with people who never experienced A in the first place.
- I thought I had done A myself, but it still didn't click.
- "I understand that doing it improves performance, but it's a pain, and the base model itself will improve eventually, so who cares?"
- An explanation that I can accept.
- When choosing few-shot examples, which of the candidates...
- Seeing this, I was convinced.
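Below is a minimal Python sketch of what "letting the machine do it" could look like: try different orderings of a fixed pool of few-shot examples, score each ordering on a small dev set, and keep the best. This is an illustration under assumptions, not any particular library's API; `call_llm`, `build_prompt`, and the dev-set format are hypothetical names introduced here, and the same loop could just as well search over which candidates to include, not only their order.

```python
import random

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; replace with your own API."""
    raise NotImplementedError("plug in an actual LLM call here")

def build_prompt(examples, question):
    """Concatenate few-shot examples, in the given order, followed by the question."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

def score(examples, dev_set):
    """Accuracy of one ordering of few-shot examples on a small dev set."""
    correct = 0
    for question, answer in dev_set:
        prediction = call_llm(build_prompt(examples, question)).strip()
        correct += (prediction == answer)
    return correct / len(dev_set)

def search_orderings(candidates, dev_set, max_trials=20, seed=0):
    """Sample random orderings of the candidate examples and return the best one.

    Exhaustive search over all permutations grows factorially, so this only
    samples; the point is that the machine, not a human, grinds through the
    "painful" trial and error."""
    rng = random.Random(seed)
    best_order, best_score = None, -1.0
    for _ in range(max_trials):
        order = candidates[:]
        rng.shuffle(order)
        s = score(order, dev_set)
        if s > best_score:
            best_order, best_score = order, s
    return best_order, best_score
```

Given a handful of candidate examples and a small dev set, this costs nothing but compute, which is exactly the trade described above: performance gains in exchange for tedium nobody wants to work through by hand.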
This page is auto-translated from /nishio/自動プロンプトエンジニアリング using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.