An AI Ethics Case Study
Every day, millions of people input prompts (whether questions or instructions) into AI tools such as ChatGPT, Perplexity, Claude, DALL-E, or Meta AI. Recently, media coverage highlighted what seemed to be a gap in awareness among many users of the last of these: people could read the “conversations” that strangers were having with Meta’s chatbot—including both the prompts and the replies—some of which were “threads about medical topics, and other… delicate and private issues.”
Meta’s AI app includes a visible “Discover” feed, intended to make AI interactions “social” (Meta has argued that users must take several deliberate steps in order to share those chats). In contrast, other, less “social” chatbots might seem more privacy protective, but they still use people’s prompts as training material to be incorporated into the models powering those tools. OpenAI, for its part, states that “ChatGPT, for instance, improves by further training on the conversations people have with it, unless you opt out” (adding that its models do not train “on any inputs or outputs from… products for business users”).
A related issue is that of data leakage from models. A primer on AI privacy, published by IBM, offers one example: “consider a healthcare company that builds an in-house, AI-powered diagnostic app based on its customers’ data. That app might unintentionally leak customers’ private information to other customers who happen to use a particular prompt.”
In 2023, Google researchers were able to “extract over 10,000 unique verbatim memorized training examples” from ChatGPT, including “personal information from dozens of real individuals.” Since then, the number of AI chatbots with which people can interact has expanded greatly, but many users remain unaware of the privacy implications of their prompts.
Discussion Questions:
Before answering these questions, please review the Markkula Center for Applied Ethics’ Framework for Ethical Decision-Making, which details the ethical lenses referenced below.
- Who are the stakeholders involved in this case—the individuals, groups, and organizations who are directly or indirectly impacted by prompt-related privacy issues?
- Consider the case through the lenses of rights, justice, utilitarianism, the common good, virtue, and care ethics; which aspects of AI prompts and privacy does each of them highlight?
- Which stakeholders are in the best position to educate chatbot users about the privacy implications of prompting?
- Given the risks of data leakage (and of intentional exfiltration by attackers), are there contexts in which chatbot usage should be restricted, or in which chatbot developers should be required not to retain user prompts or use them to train and improve their models? If so, what are those contexts?
Image: Jamillah Knowles & Reset.Tech Australia – cropped / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/