An Ethics Case Study
A tech company plans to introduce a new product: a virtual assistant aimed specifically at children, tentatively called “Poppins.” Poppins will be located in a child’s bedroom. It will look like a teddy bear or other stuffed animal (parents will be able to choose the soft shell). It will initially serve as a baby monitor, with audio and video capabilities; it will also allow parents to sing to the baby from remote locations, or to play other music. As the child grows and begins to speak, Poppins will use AI to interact with him/her, becoming a personalized chatbot. It will be programmed to teach the child words and concepts (colors, animals, numbers, etc.); it will also respond to the child’s questions and read stories.
Over time, it will become a homework helper, dispense advice about human development, and eventually listen and respond to the needs of a teenager who might be embarrassed to ask his/her parents certain questions. (As the young person grows, should the stuffed animal “shell” seem outdated, the child will have the option to select a different decorative exterior for the inner core.)
Parents will be able to pre-teach Poppins the names of extended family members and their positions in the family tree; they will also be able to pre-load Poppins with family tales and anecdotes. In this sense, Poppins will also become a repository of family memories. There will be no need for parents or others to try to record the child’s cute sayings or perceptive questions—those will all be saved, since all of Poppins’ interactions with the child will be recorded and uploaded to the cloud, where sophisticated language analysis algorithms will improve as they process more communications.
Having learned from the publicly raised concerns about the internet-connected “Hello Barbie,” the creators of Poppins have gone to great lengths to encrypt all of the communications between Poppins, the child, and the servers of the toy company. They have also determined that they will not share those communications with any third parties (except AmOzone, which is their cloud services provider).
But they are aware that other ethical concerns remain. One of them involves privacy. The developers of Hello Barbie had built in a functionality that allowed all of a child’s conversations with Barbie to be sent to a parent’s phone via an app. Many of the designers and engineers involved in the creation of Poppins believe that such a feature is a violation of the child’s privacy. They are also particularly concerned because Poppins is designed to “grow” along with the child; they believe that a teenager has greater privacy rights than younger children do, and want to protect those rights. On the other hand, they are wondering whether there should be some exceptions: should Poppins, for example, report to a parent any suicidal ideation that a child might express? Should it report to the authorities any statements by a child that might suggest abuse by a family member or caretaker? What ethical duties does a toy company take on once its products listen in on a child’s home?
Another consideration entails the very nature of child-rearing and caretaking. The creators of Poppins want to build a rich and responsive environment for children, but they also worry that by taking on some of the tasks mentioned above, they might undermine the bonds between children and their human caretakers. What virtues might be lost or depleted if parents didn’t have to sing (perhaps badly) to soothe their own children, or didn’t have to navigate the awkwardness of certain conversations with their kids?
What other ethical issues do you see raised by the scenario above, in terms of both benefits and harms? Since this is a product that involves AI, consider, also, what data it might be trained on; who would create the initial set of responses and resources that Poppins might offer; what kind of data portability might be provided; whether Poppins would have a gender (and what kind of voice would it have, given its audience); what would happen to the data collected if the toy company were to go out of business, etc.
For opportunities to compare and contrast this with related scenarios, see articles about two related products: the above-mentioned Hello Barbie, and a product called “Aristotle” that was announced but then cancelled (pre-distribution) in 2017 (an Aristotle described as having “the female voice of a perky, 25-year-old kindergarten teacher”).
Additional discussion questions:
- Who are the stakeholders involved? Who should be consulted about the project’s goals and development?
- What additional facts might be required? What practical steps might you need to take in order to access the information/perspectives needed to manage the ethical landscape of this project?
- What are some other ethical issues that any designers/developers of such a device would need to address?
- How might this project be evaluated through the various ethical “lenses” described in the “Conceptual Frameworks” document?
- In this project, what moral values are potentially conflicting with each other? Is there any way to reconcile them? Even if conflict is unavoidable, are there ways to respect all relevant interests/values? How?
- As a project team, how might you go about sorting through these ethical issues and addressing them? Which of the ethical issues you have identified would you prioritize, and why?
- Who would be the appropriate persons on a team to take those steps? At what level, and by what methods, should decisions be made within the company about how to manage the ethical issues raised by this project?