Perspective API: Using machine learning to reduce toxicity online
API to judge [harmful comments]
[Conversation with Audrey Tang]
[Conversation with Yasmin Green and Örkesh Dölet]
[Ministry of Digital Affairs https://moda.gov.tw/en/press/background-information/8655]
The Perspective API is a machine learning model developed by Jigsaw, a subsidiary of Google, with the following features:
- It is intended to detect “toxicity” in online comments and messages. Toxicity here means offensive content that insults or slanders others.
- It is an AI model trained on large amounts of online comment data. When text is submitted, it returns a score indicating how toxic that text is (see the sketch after this list).
- It aims to promote healthy online discussion and constructive conversation. It can be embedded in social media platforms, news sites, blogs, etc.
- In addition to toxicity detection, the model is capable of detecting various types of inappropriate content, including discriminatory, threatening, and obscene language.
- On the other hand, because it judges text mechanically without understanding context, sarcasm and jokes can be misclassified as offensive. It has also been pointed out that the model can be biased because its training data reflects particular values.
- Jigsaw is also working on a model that learns the elements of “constructive conversation,” aiming to evolve the API so that it not only removes offensive content but also encourages productive discussion.
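Below is a minimal sketch of how a toxicity score could be requested in Python, assuming the `commentanalyzer` `comments:analyze` endpoint and a valid API key (`API_KEY` is a placeholder); check the official documentation for the current request format and quota rules.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; obtain a key from Google Cloud and enable the API
URL = f"https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key={API_KEY}"


def score_toxicity(text: str) -> float:
    """Send text to the Perspective API and return its TOXICITY summary score."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


if __name__ == "__main__":
    print(score_toxicity("Thank you for the thoughtful comment."))  # expected: low score
    print(score_toxicity("Shut up, you idiot."))                    # expected: high score
```

The returned score is between 0 and 1; higher values mean the text is more likely to be perceived as toxic, and other attributes (e.g. INSULT, THREAT) can be requested in the same call.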
While it is attracting attention as a tool for healthier online discussion, it is important to keep the limitations of AI's mechanical judgments in mind. It is best used in combination with human moderators.
This page is auto-translated from [/nishio/Perspective API](https://scrapbox.io/nishio/Perspective API) using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I’m very happy to spread my thoughts to non-Japanese readers.