Black Mirror / Light Mirror
Famously, Jurassic Park mathematician Ian Malcolm said:
"Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should." - Michael Crichton (1990)
We've been aware for some time there's a disconnect between technology as an art, and technology as a science. The episodic production Black Mirror takes technological artifacts and projects their interpersonal and societal effects to extreme, oftentimes dystopic ends. A common theme among the creative narratives in Black Mirror is the notion that all technology has some amount of intrinsic danger, with varying degrees of dramatic uncertainty around its uses and side-effects. Disappointingly, the ill-effects of technology on humanity are not always fiction. With every new scientific breakthrough, we move towards a real-life sci-fi existence not implausibly different from the cautionary tales told by past and present storytellers.
In recent years, the landscape of consumer technology has dealt with difficult issues, including the collection and misappropriation of personally identifiable biometric & anthropometric data, a breakdown of the policy-making process surrounding the tech industry, an abundance of black-box machine learning algorithms trained on heavily biased data, and much more. Crichton's challenge gives a sense of what we should be thinking about, but it's only a whimsical prompt.
We propose a simple tool for critical analysis of new and emerging technologies, targeted towards the decision-making process of new systems, services, and products. Think of it as a structured method to account for, and correct, the unforeseen and unintended consequences of a technology.
We outline a two-dimensional matrix that helps enumerate possible outcomes, situating the assignment as an equally creative and logical exercise through a mixture of speculative design and deductive reasoning. This tool might be used as part of an applied ethics workshop or discussion, helping scope risk when facing deliberately bad actors or outlining intended positive behaviors. In a pedagogical context, this exercise is designed to engage with the work of Dr. Casey Fiesler and the technology ethics curricula presented by Burton et al.
The dimensions of the matrix are impact and influence. Impact describes what the technology affects, whether it's the weight of a wearable on the wrist or clearing land to build a data center. Influence is inspired by Bronfenbrenner's Ecological Systems Theory (although situated in a different context); it describes the magnitude of the impact, from a single person to everyone globally. Use influence as a guide to take perspectives apart from your own, with a specific emphasis on ways technology has helped or hurt marginalized and underrepresented groups. The impact dimension comprises:
- Somatic: Effects on the body (including interoceptive & proprioceptive senses).
- Affective / Psychological: Effects on feeling, mood, the sense of wellbeing.
- Cognitive: Effects on judgement, ability to remember, learn, or reason.
- Behavioral: Effects on our actions and habits.
- External: Effects on the environment, inclusive of other people and things.
The influence dimension, ordered from narrowest to broadest reach:
- Individual: Affects the self.
- Microsystem: Affects family and significant others.
- Mesosystem: Affects a group of related individuals (e.g. schools and neighborhoods).
- Exosystem: Affects the society in which an individual or tribe is embedded (e.g. government, mass media).
- Macrosystem: Affects everyone, irrespective of society or culture (e.g. the internet).
- Chronosystem: Time-delayed effects (e.g. behavioral development in children).
Arranged in a matrix, the evaluative chart takes the following shape, with each cell left open for speculated outcomes:

|  | Individual | Microsystem | Mesosystem | Exosystem | Macrosystem | Chronosystem |
| --- | --- | --- | --- | --- | --- | --- |
| Somatic |  |  |  |  |  |  |
| Affective / Psychological |  |  |  |  |  |  |
| Cognitive |  |  |  |  |  |  |
| Behavioral |  |  |  |  |  |  |
| External |  |  |  |  |  |  |
This matrix may be filled out as completely as necessary to understand a new technology, decide whether it should be undertaken, and enumerate the risks. Conversely, the table can also be used as a Light Mirror to brainstorm, qualify, and amplify positive aspects. We designed this as a 'bring your own technology' exercise, but a list of example prompts follows:
- Brain Computer Interfaces
- Eye Tracking & Facial Recognition
- Behavioral Biometrics
- Algorithmic Communication (e.g. automated translation & news summarization)
- Avatars & Visual Identity/Representation
- Self-Driving Vehicles
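For a workshop that collects many speculated outcomes per prompt, the matrix above can also be kept as a simple data structure. The sketch below is illustrative only; the class and method names are our assumptions, not part of the original exercise.

```python
# A minimal sketch of the Black Mirror / Light Mirror matrix as a data
# structure. Names (SpeculationMatrix, add_outcome, summary) are
# illustrative assumptions.

IMPACTS = ["Somatic", "Affective/Psychological", "Cognitive",
           "Behavioral", "External"]
INFLUENCES = ["Individual", "Microsystem", "Mesosystem",
              "Exosystem", "Macrosystem", "Chronosystem"]


class SpeculationMatrix:
    """Collects speculated outcomes for one technology, keyed by (impact, influence)."""

    def __init__(self, technology: str):
        self.technology = technology
        # Each cell holds a list of speculated outcomes (dark or light).
        self.cells = {(i, f): [] for i in IMPACTS for f in INFLUENCES}

    def add_outcome(self, impact: str, influence: str, outcome: str) -> None:
        if (impact, influence) not in self.cells:
            raise KeyError(f"Unknown cell: {impact} x {influence}")
        self.cells[(impact, influence)].append(outcome)

    def summary(self) -> str:
        filled = {k: v for k, v in self.cells.items() if v}
        lines = [f"{self.technology}: {len(filled)}/{len(self.cells)} cells filled"]
        for (impact, influence), outcomes in filled.items():
            for o in outcomes:
                lines.append(f"  [{impact} x {influence}] {o}")
        return "\n".join(lines)


# Example: speculating on one of the prompts above.
m = SpeculationMatrix("Eye Tracking & Facial Recognition")
m.add_outcome("Cognitive", "Individual",
              "Attention data used to profile reading habits")
m.add_outcome("External", "Exosystem",
              "Mass surveillance chills public assembly")
print(m.summary())
```

Tracking how many of the 30 cells are filled gives a rough sense of how thoroughly a technology has been considered, though an empty cell may simply mean no plausible effect exists at that scale.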
Many controversial innovations are justified by practitioners through various arguments; we argue that while some of these justifications are contraindications, they impose helpful constraints on the matrix's ideation process. Some of these rationales include:
- There are already killer robots, so it's fine if I make killer robots.
- If I don't make killer robots, someone else will anyway.
- Denial of Agency
  - Someone else told me to build killer robots.
- Denial of Responsibility
  - Killer robots don't kill people; the AI in the killer robots kills people.
- Killer robots pay big money. That money will be allocated regardless, and I deserve it as much as anyone.
Speculative narratives like Crichton's body of stories, Black Mirror, and many other works give us both intimidating and insightful tales from the future. In an ideal world, ethically-centric design would be incorporated into the daily practice of technology creators and be a fundamental component of technology education. This exercise — partially motivated by the practice of design fiction — documents one way to think critically about emerging technology. We stress that a complete story isn't necessary to engage the mindset of a creative writer when thinking about new inventions, especially when major elements of classic science fiction are edging closer to the realm of possibility. This matrix was envisioned to encourage designers and technologists to rein in potentially negative consequences and amplify positive ones, finding a balance between artful speculation and more logical, reasoned outcomes.
References:
- Shoshana Zuboff on Surveillance Capitalism’s Threat to Democracy
- How to Teach Computer Ethics through Science Fiction
- Black Mirror, Light Mirror: Teaching Technology Ethics Through Speculation
- Nested or Networked? Future Directions for Ecological Systems Theory
- What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis