Deliberative Democracy Lab to Demonstrate Deliberative Polling® Method at the 2023 Nobel Prize Summit | FSI

@lessig: The 2023 Nobel Prize Summit will feature an extensive demonstration of the Deliberative Polling method with the AI-assisted Stanford Online Platform. You can participate!

  • (ja) The 2023 Nobel Prize Summit will feature an extensive demonstration of the Deliberative Polling method using the AI-assisted Stanford online platform. You are invited to participate.
  • Stanford Online Deliberation Platform

Schedule

I thought it might be something like Polis, but I got an email that said “test the camera and microphone in advance,” so it looks like we’ll be doing voice communication.

  • Small Group Session 1: 12:00-1:00 pm
  • Plenary Panel: 1:00-1:40 pm
  • Break: 20 minutes
  • Small Group Session 2: 2:00-3:00 pm
  • Plenary Panel: 3:00-3:40 pm

  • Plenary: (pedagogy) Part of a lesson, usually at or towards the end, designed to review or evaluate the learning that has taken place.
  • So, in essence, after the breakout-room sessions end, everyone gathers for a plenary session.

The pre-viewing videos, auto-transcribed and then cleaned up with GPT-3, are written below. https://www.youtube.com/watch?v=PEnPu_Pby2k

  • Social media has influenced the way we interact with each other and the way we obtain information. Because of its pervasive influence, some people argue that we should consider whether social media platforms should be more regulated and, if so, what the approach should be. Some people argue for significant regulation of social media companies, while others argue that they can self-regulate. There are many possible approaches within self-regulation, but this video cannot cover them all, so in this video we will talk about fact-checking in the context of hoaxes. Together we will explore some policy suggestions.
  • The first proposal is that fact-checking mechanisms should be vetted by independent and statistically representative citizen oversight bodies. This proposal seeks to improve the legitimacy of fact-checking mechanisms. This is because the current fact-checking mechanisms have been shown to have a bias toward one side of the political spectrum. While numerous fact-checking organizations currently exist, citizen oversight is non-existent. This independent citizen agency would be statistically representative of the general public, meaning that everyone would have an equal opportunity to be selected as a member of the oversight body. Proponents of this proposal argue that having a demographically representative group of citizens conducting oversight adds legitimacy and credibility to the fact-checking process, as existing fact-checking organizations are viewed by some as having a bias. On the other hand, others argue that this additional mechanism is unnecessary because many fact-checking organizations already exist and additional citizen oversight would not change the outcome of the fact-checking process. Some experts warn that it may be difficult for a representative cross-section of the population to properly evaluate fact-checking mechanisms in a bias-free manner, and failure to reach consensus could undermine credibility.
  • The second proposal is that platforms should notify users exposed to misinformation after the fact. This proposal seeks to reduce the harmful effects of the spread of misinformation on social media platforms. In other words, if a piece of content is found to be erroneous or misguided after it has been posted, the platform should notify the users who engaged with or viewed that content.
  • For the purposes of this discussion, the labeling of misinformation would be overseen by a citizen body like the one in the previous proposal. Some have argued that this proposal would be relatively easy to implement, because many social media companies already have fact-checking mechanisms in place. The policy would simply add one step: linking each fact-check to the people who saw or interacted with the post and notifying them (a minimal sketch follows this list).
  • While the proposed policy may help combat the spread of falsehoods, it also raises privacy and free speech concerns. Some users may be uncomfortable with the platform tracking their interactions with content. Others might argue that the policy could stifle free speech by labeling certain ideas as false or misguided. Finally, users may not want to see fact-check notices, which could render them ineffective.
  • Now, let’s open the discussion. While we may not have enough time to discuss specific implementation details, we would like to hear your views on whether there should be a fact-checking mechanism. Whether you agree or disagree with these proposals, we want to hear from you. Your opinion matters.
  • While fact-checking mechanisms can prevent the spread of falsehoods and misinformation, they also raise concerns such as privacy and free speech. Some argue that citizen participation in oversight bodies adds legitimacy and credibility to the fact-checking process, but in practice, proper evaluation and consensus building may be difficult.
  • Please share your thoughts on what mechanisms should be in place. Please let us know why you agree or disagree with these suggestions for a fact-checking mechanism, or if you have other ideas or suggestions. It is important for us to hear from you.
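As a rough illustration of the notification mechanism in the second proposal, here is a minimal Python sketch: the platform logs who viewed each post, and when a post is later labeled false by the oversight body, everyone who saw it is notified. All names and the `notify` callback are hypothetical, not part of the proposal.

```python
# A minimal sketch, assuming the platform already logs views and the
# oversight body supplies verdicts. All identifiers are illustrative.
from collections import defaultdict

view_log = defaultdict(set)  # post_id -> set of user_ids who saw the post

def record_view(user_id, post_id):
    view_log[post_id].add(user_id)

def apply_fact_check(post_id, verdict, notify):
    """Notify every user who viewed a post later labeled false."""
    if verdict == "false":
        for user_id in view_log[post_id]:
            notify(user_id, f"A post you viewed ({post_id}) was later labeled '{verdict}'.")

# Usage:
record_view("alice", "post42")
apply_fact_check("post42", "false", notify=lambda uid, msg: print(uid, msg))
```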

https://www.youtube.com/watch?v=l5Zi4asDbGk prebunking

Social media has influenced the way we interact with each other and the way we get information. Because of its pervasive influence, some people argue that we should consider whether social media platforms should be more regulated and, if so, what the approach should be. Some people argue for significant regulation of social media companies, while others argue that they can self-regulate. There are many possible approaches within self-regulation, but this video cannot cover them all, so in this video we will talk about digital literacy in the context of hoaxes.

Let’s explore the policy proposal together. The proposal is that platforms should provide educational announcements that include prebunking content in users’ feeds. The proposal seeks to improve users’ ability to detect hoaxes and inoculate them against the risk of exposure to hoaxes.

Social media platforms have become a major source of news, and some argue that, because of this, it is important for social media platforms to know how users evaluate information and that they have a responsibility to provide educational content to users. One way to do this is to run public service announcements (PSAs) that include educational content, including prebunking information. Prebunking here refers to messages that teach users about tactics commonly used in the production of hoaxes. Examples include emotionally manipulative language, ambiguity, false dichotomies, scapegoating, and personal attacks.
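To make the mechanism concrete, here is a minimal sketch, assuming the platform simply interleaves prebunking PSAs into an already-ranked feed at a fixed interval. The interval and PSA texts are illustrative assumptions, not details from the proposal.

```python
# A hedged sketch: insert a prebunking PSA after every PSA_INTERVAL posts.
PSA_INTERVAL = 10  # one educational item per 10 ordinary posts (assumed)

PSAS = [
    "[PSA] Watch out for emotionally manipulative language.",
    "[PSA] Be wary of scapegoating and personal attacks.",
]

def feed_with_prebunking(posts):
    """Interleave PSAs into an already-ranked list of posts."""
    out = []
    for i, post in enumerate(posts):
        if i > 0 and i % PSA_INTERVAL == 0:
            out.append(PSAS[(i // PSA_INTERVAL - 1) % len(PSAS)])
        out.append(post)
    return out
```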

Some studies have shown that prebunking significantly improves people’s ability to discern fake news and decreases their susceptibility to common hoax tactics. Proponents of this proposal argue that educational announcements help users build the critical thinking skills necessary to properly evaluate the vast amount of information they consume on social media platforms. It is also believed to help reduce the flow of hoaxes by helping users learn to better detect them.

On the other hand, some argue that these educational notices may be intrusive to users. Others argue that educational content may not be effective in changing user behavior. In addition, some argue that such educational announcements may be perceived as biased or unfair toward certain types of content.

Now, let’s open the discussion. We may not have enough time to discuss the specifics of implementation, but please share your thoughts on whether there should be digital literacy requirements. Please let us know why you agree or disagree with these suggestions, or if you have other ideas or suggestions. We want to hear from you.

https://www.youtube.com/watch?v=wtZu9u3V250 Social media has influenced the way we interact with each other and the way we obtain information. Because of its pervasive influence, some people argue that we should consider whether social media platforms should be more regulated and, if so, what the approach should be. Some people argue for significant regulation of social media companies, while others argue that they can self-regulate. There are many possible approaches within self-regulation, but this video cannot cover them all, so in this video we will talk about research and measurement in the context of hoaxes.

Together we will explore several policy proposals. The first suggestion is that platforms should allow academic institutions, journalists, non-profit organizations, and their researchers easy access to data in a privacy-compliant manner. This proposal addresses the concern that platforms may not be fully aware of the harmful effects of their own algorithms and product features, and that even if they are aware, they may not disclose the information transparently. Providing researchers with access to data would help improve the public’s understanding of the impact of social media. Researchers at these institutions are believed to play an important role in studying the impact of social media platforms. Unfortunately, little data is available to researchers; laws such as the EU’s Digital Services Act include provisions for sharing platform data with researchers while respecting privacy laws. Major social media platforms have made voluntary efforts to support bona fide research into hoaxes involving their services, but despite these efforts, they have made little progress in providing data to the research community. Proponents of this first proposal argue that platforms should allow researchers easy access to data to enable meaningful research on the impact of social media on society. This would allow for an independent assessment of the impact of social media platforms on the world. On the other hand, some argue that it is difficult to identify which researchers are qualified and to prevent malicious individuals from misusing sensitive user data by posing as researchers. The potential risks and costs of sharing users’ private information may outweigh the benefits. Developing the technology to provide secure data access would impose a substantial financial cost on the platforms. In addition, disclosure of sensitive corporate information could lead to a loss of competitive advantage, which may affect the profits of social media platforms.
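The proposal leaves "privacy-compliant" open; one common approach is to release only aggregates that cover enough distinct users. Below is a minimal sketch assuming a k-anonymity-style threshold; the threshold value and record fields are illustrative, and a real system would add researcher vetting, auditing, and legal terms on top.

```python
# A hedged sketch of privacy-compliant researcher access: only aggregates
# covering at least K distinct users are released. K is an assumption.
from collections import Counter, defaultdict

K_THRESHOLD = 50

def aggregate_engagement(records, group_by):
    """records: dicts with 'user_id', 'engagements', and grouping fields."""
    totals = Counter()
    users = defaultdict(set)
    for r in records:
        key = r[group_by]
        totals[key] += r["engagements"]
        users[key].add(r["user_id"])
    # Suppress any group with too few distinct users to protect privacy.
    return {k: v for k, v in totals.items() if len(users[k]) >= K_THRESHOLD}
```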

The second suggestion is that platforms should work with researchers to measure conversation health indicators on a quarterly basis. This suggestion relates to understanding how certain algorithmic and product features affect conversational health. Understanding these effects can help distinguish between good and bad features and can also encourage platforms to develop more of the good ones. For example, a platform could measure a polarization index, which measures the degree of hostility or conflict between two or more groups on a particular issue. Platforms could also measure the diversity of content, i.e., whether users are exposed to a variety of viewpoints. These are examples of indicators that can be used to measure the health of a conversation.
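As a concrete (and assumed) illustration of such indicators, the sketch below computes a content-diversity score as normalized entropy over viewpoint labels and a crude polarization index from stance values in [-1, 1]. The labeling scheme and formulas are hypothetical; real metrics would be designed jointly with researchers.

```python
import math
from collections import Counter

def diversity_score(viewpoints):
    """Normalized Shannon entropy of viewpoint labels a user was exposed to:
    0 = one viewpoint only, 1 = evenly spread exposure."""
    counts = Counter(viewpoints)
    total = len(viewpoints)
    entropy = -sum(c / total * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(len(counts)) if len(counts) > 1 else 0.0

def polarization_index(stances):
    """Stances in [-1, 1]: 0 = everyone at the midpoint, 1 = all at extremes."""
    return sum(abs(s) for s in stances) / len(stances)

print(diversity_score(["left", "right", "center", "left"]))  # ~0.95
print(polarization_index([-0.9, 0.8, -1.0, 0.95]))           # ~0.91
```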

Platforms often conduct user testing to measure user engagement, that is, to measure how well new features on the platform, such as emoji or news feed features, are accepted by users. Conversation health metrics could be incorporated into these user testing activities. Proponents of this second proposal argue that having indicators to measure conversational health will create incentives for platforms to develop algorithms and other mechanisms that promote constructive dialogue among citizens. Platforms would compete to score well on these indicators, leading to better outcomes for society as a whole. These indicators would also help users choose which platforms to use.

On the other hand, it is not clear how these indicators affect users. If implemented improperly, these indicators could facilitate monitoring of user conversations and have a chilling effect on users. For example, a dictatorship might measure and respond to protests or attempts at anti-government organizing. In addition, these conversation indicators are imperfect and, if not rigorously measured, may lead to erroneous conclusions. It is also not clear what the desired level of each indicator should be. For example, extreme conflict is bad when it divides society, but can be useful when it fuels protest and reform. Such conversation indicators could also easily be exploited by someone with bad intentions, and with the assistance of AI, that may become even easier.

Now, let’s open the discussion. We may not have enough time to discuss the specifics of implementation, but please share your views on whether there should be regulations on research and measurement regarding hoaxes. Please let us know why you agree or disagree with these proposals, or if you have other ideas or suggestions. It is important for us to hear from you.

https://www.youtube.com/watch?v=5iO_8lmwpZw Social media has influenced the way we interact with each other and the way we obtain information. Because of its pervasive influence, some people argue that we should consider whether social media platforms should be more regulated and, if so, what the approach should be. Some people argue for significant regulation of social media companies, while others argue that they can self-regulate. There are many possible approaches within self-regulation, but this video cannot cover them all, so in this video we will talk about algorithmic regulation in the context of hoaxes.

Let’s explore the policy proposal together. The proposal addresses concerns that platforms may be creating isolated information bubbles and increasing social polarization. The proposal argues that platforms should limit the amount of content targeting or filtering based on personal information, interests, and past user behavior. Many social media platforms rely on content recommendation algorithms to prioritize content that appears in users’ feeds. These algorithms use personal information such as user likes, friends, and location to optimize content ranking and maximize user engagement. While the use of personal information creates a personalized user experience and increases engagement, some argue that it increases polarization, because prioritization algorithms tend to over-target content and reduce exposure to diverse content.

Under this proposal, a portion of the content would be displayed without using personal information, as if the user were a first-time user whose information had not yet been collected. This does not mean the content is random; algorithms may still be used to select the non-personalized content. Proponents of this proposal argue that when algorithms do not always rely on personal information, they promote diversity and fairness in content. Platforms would have less ability to display biased content, reducing the likelihood of reinforcing or polarizing user beliefs. This approach is similar to the regulation of television broadcasting in many countries.
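A minimal sketch of how such a limit might work, assuming a fixed share of feed slots is reserved for a non-personalized ranking (what a brand-new user would see). The 30% share and interleaving scheme are illustrative; the proposal does not fix a number.

```python
# A hedged sketch: blend a personalized ranking with a non-personalized one.
NON_PERSONALIZED_SHARE = 0.3  # assumed fraction, not from the proposal

def blended_feed(personalized, generic, size):
    """Fill `size` slots: most from the personalized ranking, with generic
    (non-personalized) items interleaved at regular positions."""
    n_generic = int(size * NON_PERSONALIZED_SHARE)
    n_personal = size - n_generic
    feed = personalized[:n_personal]
    step = max(1, size // max(1, n_generic))
    for j, item in enumerate(generic[:n_generic]):
        feed.insert(min(len(feed), (j + 1) * step), item)
    return feed

print(blended_feed([f"p{i}" for i in range(7)], ["g0", "g1", "g2"], 10))
```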

On the other hand, opponents of the proposal argue that it could decrease the relevance of the content displayed to users and worsen the user experience. This could decrease engagement and advertising revenue. Opponents also claim that exposure to conflicting opinions could increase polarization and radicalization. It is further argued that regulation of content recommendation algorithms could be viewed as regulation of speech.

Now, let’s open the discussion. We may not have enough time to discuss the specifics of implementation, but please share your views on whether there should be regulations on algorithms. Please let us know why you agree or disagree with these proposals, or if you have other ideas or suggestions. It is important for us to hear from you.

https://www.youtube.com/watch?v=UAEVcBm5b7A Social media has influenced the way we interact with each other and the way we obtain information. Because of its pervasive influence, some people argue that we should consider whether social media platforms should be more regulated and, if so, what the approach should be. Some people argue for significant regulation of social media companies, while others argue that they can self-regulate. There are many possible approaches within self-regulation, but this video cannot cover them all, so in this video we will talk about authenticity in the context of hoaxes.

Let’s explore the policy proposals together. The first proposal is that platforms should be required to associate users with unique anonymous digital identifiers. This proposal aims to reduce the negative impact of bots by making it possible to recognize whether a message comes from a real individual or from an account designed to pose as multiple people. This would allow bot messages to be properly labeled and filtered. The proposal also requires that the identity of a bot’s operator be associated with the operator’s digital identifier. While some platforms propose to use real names, users would not be required to disclose their identities. This anonymity allows users to express their opinions without fear of social backlash, creating open and free discussion. On the other hand, anonymity in the digital space can also have harmful consequences, such as hate speech, harassment, and incitement to violence, due to a lack of accountability and transparency. In addition, the proposal is intended to address the behavior of malicious bots, which some claim is on the rise. Proponents of the proposal argue that this would allow accounts and posts to be associated with a single person, enabling platforms to identify bots, prevent bad behavior, and improve civility on social media platforms. On the one hand, users would retain their privacy, and their actual identities would remain hidden from social media companies and other online users. On the other hand, some argue that the proposal could lead to potential misuse and surveillance, and could stifle dissenting voices. In addition, unique anonymous digital identifiers may raise privacy and security concerns: a data breach or leak could expose users’ actual identities. Others argue that bots play only a limited role in spreading hoaxes and hateful content, and that much undesirable content is spread by known or verified users.
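The proposal does not specify a construction, but one way to get an identifier that is unique yet anonymous is a keyed hash of a verified identity held by an identity provider: the same person always maps to the same pseudonym (so duplicate accounts and bots are detectable), while the platform never learns the underlying identity. This sketch is an assumption, not the proposal's specified design; the key handling and truncation are purely illustrative.

```python
import hashlib
import hmac

PROVIDER_KEY = b"secret-held-by-identity-provider"  # illustrative only

def anonymous_id(verified_identity: str) -> str:
    """Derive a stable pseudonym from a verified real-world identity."""
    digest = hmac.new(PROVIDER_KEY, verified_identity.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # same person -> same pseudonym

# The platform sees only the pseudonym, never the identity itself.
print(anonymous_id("passport:JP-1234567"))
```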

The second proposal is that companies developing generative AI should implement technology that can detect whether certain content on their platforms was generated by AI. This proposal seeks to reduce the use of generative AI in the creation of hoaxes and hateful content. Generative AI refers to artificial intelligence models that generate content such as text, images, audio, and video with characteristics indistinguishable from human-generated content. Generative AI also raises concerns about the use of AI-generated content for malicious purposes, such as spreading fake news, impersonating individuals, and committing fraud. Some argue that this proposal would make it possible to determine the authenticity of content and limit the use of generative AI for malicious purposes. Others, on the other hand, argue that detecting AI-generated content says nothing about its truthfulness: since human-generated content can also contain truths, falsehoods, and misunderstandings, additional measures are needed to determine the accuracy of AI-generated content. In addition, current detection mechanisms are imprecise, failing to detect most AI-generated content and misclassifying some human-generated content as AI-generated; as AI becomes more sophisticated, detection will become even more difficult.

Now, let’s open the discussion. We may not have enough time to discuss the specifics of implementation, but please share your views on whether or not regulations on authenticity are needed. Please let us know why you agree or disagree with these proposals, or if you have other ideas or suggestions. It is important for us to hear from you.

Japanese Explanation

AkioHoshi If you are interested in digital democracy, check it out: a demonstration of a Deliberative Poll (DP), “TRUTH, TRUST AND HOPE,” at the Nobel Prize Summit (May 24-26, Washington, DC). A large-group deliberative exercise will be conducted on the topic of “Misinformation and Polarization on the Net and What to Do About It.”

AkioHoshi Deliberative Polling (DP) is a method of deliberative democracy proposed by Professor Fishkin of Stanford University in 1988. It has been implemented in many countries.

In Japan, a “Deliberative Poll on Energy and Environmental Options” concerning nuclear power was conducted in 2012, and the results were communicated to the then Democratic Party of Japan (DPJ) administration. Past Deliberative Polls in Japan | KeioDP, Keio University DP Research Center

AkioHoshi In recent years, DP has gained AI support through the Stanford Online Deliberation Platform, an online support tool with anti-abuse and real-time analysis capabilities.

AkioHoshi Stanford Online Deliberation Platform functional description. You can participate from Japan.

AkioHoshi Based on the paper on the Stanford Online Deliberation Platform, an online deliberation tool, I would like to introduce some of its features. The most important is the “moderation automation” function. (continued)

  • Deliberative Democracy with the Online Deliberation Platform (PDF)
  • Abstract

  • We introduce the Stanford Online Deliberation Platform, a web-based platform that facilitates constructive discussions on civic issues with the use of an automated moderator. The automated moderator performs this function by stimulating participants to consider arguments from both sides of all proposals, maintaining civility in the discussion, encouraging equitable participation by all participants, and providing a structured collaboration phase for participants to come up with a small set of questions or action items. We will demo the functionality of this platform in the context of its primary intended application, that of online Deliberative Polling.

    • (DeepL) We present the Stanford Online Deliberation Platform, a web-based platform that uses automated moderators to facilitate constructive discussion of civic issues. Automated moderators perform this function by encouraging participants to consider arguments from both sides of any proposal, maintaining civility in the discussion, encouraging fair participation by all participants, and providing a structured collaboration phase for participants to come up with small sets of questions and action items. We will demonstrate the functionality of this platform in the context of its primary use: online Deliberative Polling.

AkioHoshi - Speakers wait their turn in the queue and speak within the time limit

  • Participants encouraged to follow agenda by nudging
  • Automatic transcription of remarks. If objectionable content is detected or the agenda conversation appears to be stalled, the bot solicits feedback from participants and decides whether to block the user or advance the agenda (a rough sketch of this loop follows below)
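A minimal sketch of that moderation loop, assuming a rotating speaking queue with a per-turn time limit and majority votes before the bot blocks a user or advances the agenda. The time limit, class, and method names are illustrative assumptions, not the platform's actual implementation.

```python
from collections import deque

SPEAKING_LIMIT_SEC = 45  # assumed per-turn limit

class AutoModerator:
    def __init__(self, participants):
        self.queue = deque(participants)

    def next_speaker(self):
        speaker = self.queue.popleft()
        self.queue.append(speaker)  # rotate so everyone gets equal turns
        return speaker

    def participants_approve(self, votes):
        """E.g. 'block this user?' or 'advance the agenda?' -> majority rule."""
        return sum(votes) > len(votes) / 2

mod = AutoModerator(["A", "B", "C"])
print(mod.next_speaker())                             # "A" speaks, up to 45 s
print(mod.participants_approve([True, True, False]))  # True: take the action
```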

AkioHoshi - In the future, there are plans to add automatic natural language processing (NLP) for agenda management, automatic flagging of new content, relevance scoring of claims, and more.

Impressions: the possibility of automating moderation is being explored. Relevance scoring is reminiscent of pol.is. “Automated moderation of deliberations” is an interesting initiative and an interesting use of AI. If it works well, I think it will enable large-scale Deliberative Polling and will be a major step forward for the possibilities of digital democracy.

Deliberative


This page is auto-translated from [/nishio/TRUTH, TRUST AND HOPE](https://scrapbox.io/nishio/TRUTH, TRUST AND HOPE) using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I’m very happy to spread my thoughts to non-Japanese readers.