Ethical Innovation Means Giving Society a Say
Headlines from Silicon Valley sometimes read like mythology, fantasy, or science fiction. “First human-pig ‘chimera’ created in milestone study” revives images of the Greek mythological beast (part lion, part goat, part serpent). “No Longer a Dream: Silicon Valley Takes on the Flying Car” describes the Kitty Hawk, whose name pays homage to the Wright brothers with a subtle nod to Chitty Chitty Bang Bang. And “Oh Great, Now Alexa Will Judge Your Outfits, Too” tells of an innovation that combines a photoshoot and a fashion critique in the privacy of your closet. Novelties like these make last year’s driverless cars and personal shopper bots look quaint. But how should society think about the decisions linking innovation to the consumer and to society more broadly?
I’m a staunch advocate for innovation, even if much of the technology and science exceeds my understanding. I argue that, done right, ethical decision-making spurs positive innovation and forestalls the kind of regulatory barriers that stifle it.
Technology is approaching the man-machine and man-animal boundaries. With this, society may be leaping into humanity-defining innovation without the equivalent of a constitutional convention to decide who should have the authority to determine whether, when, and how these innovations are released into society. What are the ethical ramifications? What checks and balances might be important?
Who gets to control innovation is a central question of our time. Should society let technological prowess or scientific brilliance determine who makes decisions that may affect all humanity in profound ways?
In academic institutions, guidelines govern how scientists conduct research; an institutional review board, for example, oversees experiments on human subjects. In institutions with multiple stakeholders (such as the GAVI vaccine alliance, which aims to unite the private and public sectors to provide immunizations to children), diverse voices, from governments to private foundations to NGOs, chime in to make policy.
Increasingly, the people and companies with the technological or scientific ability to create new products or innovations are de facto making policy decisions that affect human safety and society. But these decisions are often based on the creator’s intent for the product, and they don’t always take into account its potential risks and unforeseen uses. What if gene-editing is diverted for terrorist ends? What if human-pig chimeras mate? What if citizens prefer to see birds rather than flying cars when they look out a window? (Apparently, this is a real risk. Uber plans to offer flight-hailing apps by 2020.) What if Echo Look leads to mental health issues for teenagers? Who bears responsibility for the consequences?
Jennifer Doudna and Emmanuelle Charpentier’s landmark 2014 article in Science, “The new frontier of genome engineering with CRISPR-Cas9,” called for a broader discussion among “scientists and society at large” about the technology’s responsible use. Other leading scientists have joined the call for caution before the technique is intentionally used to alter the human germ line. The National Academies of Sciences, Engineering, and Medicine recently issued a report recommending that the ethical framework applied to gene therapy also be used when considering Crispr applications. In effect, the experts ask whether their scientific brilliance should legitimize them as decision-makers for all of us.
Crispr might prevent Huntington’s disease and cure cancer. But should errors occur, it’s hard to predict the outcome, whether they arise from benign use (by thoughtful and competent people) or from misuse (by ill-intentioned actors).
Who should decide how Crispr should be used: Scientists? Regulators? Something in between, such as an academic institution, medical research establishment, or professional/industry association? The public? Which public, given the global impact of the decisions? Are ordinary citizens equipped to make such technologically complex ethical decisions? Who will inform the decision-makers about possible risks and benefits?
Elon Musk of Tesla and Regina Dugan of Facebook are each working on an interface between the human brain and computers. I’m inclined to listen to nearly anything Elon Musk and Regina Dugan propose. But the public should discuss how to embed ethical judgment in the brain enhancers, and how to bring tools for ethical decision-making along on the ride to Mars.
Technologists tell us that driverless cars still have no judgment. They can’t choose between two terrible options, like hitting an elderly woman or hitting three children. But the human beings programming them can. Presumably even the unlicensed pilots of flying cars have judgment. But who will police the skies for safety or environmental protection?
Some extraordinary innovators do talk about the ethics of their emerging technologies. Twitter co-founder Biz Stone describes efforts to create an ethical culture inside the company in his book Things a Little Bird Told Me. And the Partnership on AI, a consortium created by some of tech’s biggest names, aims to create a “place for open critique and reflection” and to bring together “all with an interest in” their exploration of AI. But these are not yet society-wide conversations.
The solutions should involve integrating ethics earlier and more rigorously into decision-making, improving disclosure, and embedding ethics in the technology itself.
First, more real-time, creative, and thorough analysis of the actual and potential consequences of an innovation in the short, medium, and long term should accompany the innovation process. I agree with Jennifer Doudna and Emmanuelle Charpentier that a broader societal conversation must precede certain innovations. It can include consumers, experts, and regulators, whether hosted through Facebook, at town hall meetings, or through other means.
Second, consumers should better understand how technologies operate and what their risks are. Take the Echo Look: Amazon does not disclose the basis of the algorithm that rates our fashion sense, which means customers are using the product without a chance to consent to its potential risks. Is the algorithm trained to reflect the opinions of Vogue editors or of 2,000 diverse teenagers? Amazon could disclose more about the algorithm without revealing proprietary secrets, improve the privacy protections, and accept more responsibility.
Social media companies could include executive summaries in their often-impenetrable Terms of Service, describing an innovation’s unique risks in succinct terms and plain English.
Third, ethics must be embedded in technology. Many companies are trying, from Facebook’s battle against livestreamed crimes to DeepMind, which has an internal ethics board but hasn’t revealed publicly who is on it. The ethics question about innovation should not be a binary yes or no, or even now or later. The questions should be to what extent and under what conditions, and who decides?
This article was first published in Wired on June 13, 2017.