Recently, large language models have made huge advances in generating coherent, creative text. While much research focuses on how users can interact with language models, less work considers the social-technical gap that this technology poses. What are the social nuances that underlie receiving support from a generative AI? In this work we ask when and why a creative writer might turn to a computer versus a peer or mentor for support. We interview 20 creative writers about their writing practice and their attitudes towards both human and computer support. We discover three elements that govern a writer’s interaction with support actors: 1) what writers desire help with, 2) how writers perceive potential support actors, and 3) the values writers hold. We align our results with existing frameworks of writing cognition and creativity support, uncovering the social dynamics which modulate user responses to generative technologies.
- https://dl.acm.org/doi/10.1145/3544548.3580782
Discussion 5.1.2
- Researchers tend to think of pre-trained language models as general-purpose, when in fact they embody one specific perspective
- In short, that of white Western men.
- Providing models trained to offer different perspectives helps users understand that "models have specific perspectives."
- But such a model does not speak for that viewpoint as a whole.
- Having a queer friend read the novel ensures that it is read from a queer perspective.
- But that reader's impressions do not speak for all queer people.
- Nor is the author asking for feedback of that kind.
- I see.
- To put /omoikane into this context: the author does not want a "Japanese Culture AI" that acts as a representative of Japanese people, but a "Hanako-san AI", a personal character who happens to be familiar with Japanese culture?
This page is auto-translated from [/nishio/Social Dynamics of AI Support in Creative Writing](https://scrapbox.io/nishio/Social Dynamics of AI Support in Creative Writing) using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.