Polis is a platform that leverages machine intelligence to scale up deliberative processes. In this paper, we explore the opportunities and risks associated with applying Large Language Models (LLMs) towards challenges with facilitating, moderating and summarizing the results of Polis engagements. In particular, we demonstrate with pilot experiments using Anthropic’s Claude that LLMs can indeed augment human intelligence to help more efficiently run Polis conversations. In particular, we find that summarization capabilities enable categorically new methods with immense promise to empower the public in collective meaning-making exercises. And notably, LLM context limitations have a significant impact on insight and quality of these results. However, these opportunities come with risks. We discuss some of these risks, as well as principles and techniques for characterizing and mitigating them, and the implications for other deliberative or political systems that may employ LLMs. Finally, we conclude with several open future research directions for augmenting tools like Polis with LLMs.
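As a rough illustration of the summarization workflow the abstract alludes to, here is a minimal sketch that summarizes Polis comments chunk by chunk and then merges the partial summaries, which is one common way to work around a fixed context limit. Everything in it is assumed for illustration: the `call_claude` stub, the prompts, and the character budget are mine, not the pipeline actually used in the paper.

```python
# Minimal sketch: chunk-then-merge summarization of Polis comments under a
# context limit. `call_claude` is a hypothetical stand-in for an LLM API call;
# the prompts and the character budget are illustrative assumptions.

from typing import List

MAX_PROMPT_CHARS = 20_000  # rough stand-in for a model context budget


def call_claude(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client (e.g. Anthropic's SDK)."""
    raise NotImplementedError


def chunk_comments(comments: List[str], budget: int = MAX_PROMPT_CHARS) -> List[List[str]]:
    """Greedily pack comments into chunks that fit within the prompt budget."""
    chunks, current, used = [], [], 0
    for c in comments:
        if current and used + len(c) > budget:
            chunks.append(current)
            current, used = [], 0
        current.append(c)
        used += len(c)
    if current:
        chunks.append(current)
    return chunks


def summarize_conversation(comments: List[str]) -> str:
    """Summarize each chunk, then merge the partial summaries into one."""
    partials = []
    for chunk in chunk_comments(comments):
        prompt = (
            "Summarize the main points of agreement and disagreement in these "
            "Polis comments:\n\n" + "\n".join(f"- {c}" for c in chunk)
        )
        partials.append(call_claude(prompt))
    if len(partials) == 1:
        return partials[0]
    merge_prompt = (
        "Combine these partial summaries of one Polis conversation into a "
        "single coherent summary:\n\n" + "\n\n".join(partials)
    )
    return call_claude(merge_prompt)
```

The merge step is also where the context limitation noted in the abstract shows up: every round of condensation discards detail, so longer conversations tend to lose nuance in the final summary.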
h_okumura Is this Polis the one Nishio-san often talks about?
nishio Yes, it is the authors' affiliation, The Computational Democracy Project. It looks like an interesting paper!
The Computational Democracy Project
- The Computational Democracy Project designs, engineers and maintains Polis, an open source, real-time system for gathering, analyzing and understanding what large groups of people think in their own words, enabled by advanced statistics and machine learning.
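As background on the "advanced statistics and machine learning" mentioned in that description: Polis represents a conversation as a participants-by-comments vote matrix and groups participants by opinion. The sketch below shows one conventional way to approximate that idea (PCA followed by k-means on the vote matrix); it is illustrative only, not the project's actual implementation, and the toy data is made up.

```python
# Illustrative sketch of opinion-space clustering on a Polis-style vote matrix.
# Rows are participants, columns are comments; agree = 1, disagree = -1,
# pass / not seen = 0. PCA + k-means approximates the general idea; the real
# Polis pipeline differs in its details.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy vote matrix: 6 participants x 5 comments.
votes = np.array([
    [ 1,  1, -1,  0,  1],
    [ 1,  1, -1, -1,  1],
    [ 1,  0, -1,  0,  1],
    [-1, -1,  1,  1,  0],
    [-1, -1,  1,  1, -1],
    [ 0, -1,  1,  1, -1],
])

# Project participants into a low-dimensional opinion space.
coords = PCA(n_components=2).fit_transform(votes)

# Group participants into opinion clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

for participant, (xy, group) in enumerate(zip(coords, labels)):
    print(f"participant {participant}: opinion group {group}, position {xy.round(2)}")
```

This per-group structure is also where LLM summarization could plug in downstream: for example, a summarizer could be pointed at the comments that characterize each opinion group rather than at the raw comment stream.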
nishio I heard that Anthropic has published a paper on the relationship between LLMs and Polis. This keeps getting more interesting!
nishio I'm not in a position to talk about what I'm working on at the next LLM meetup, and I was wondering what I could talk about instead. Citing this paper and what OpenAI has published should be plenty of material for an LT. I mentioned Polis in my last LT, so this would be an expansion of that.
- Previous: Relationship between LLM and Plurality
This page is auto-translated from [/nishio/Opportunities and Risks of LLMs for Scalable Deliberation with Polis](https://scrapbox.io/nishio/Opportunities and Risks of LLMs for Scalable Deliberation with Polis) using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.