An epidemic of risk and reputational hazards

This article was first published in TechCrunch on August 21, 2017

Going viral is central to technology and business today, but one of its most pernicious consequences should get more attention than it does: the virally contagious nature of unethical behavior.

Ethics contagion can infect organizations from tiny startups to multinational corporations, along with nonprofit, governmental and academic institutions, and even individuals, nations and machines. And the results can be catastrophic — from damaged brands and competitive positions to CEO and senior-management dismissals, tainted reputations, lawsuits and even jail.

Given the risk of viral ethics, a key question for decision-makers is how to minimize the potential for — and mitigate the effects of — the contagion of unethical behavior in their organizations.

Unethical behavior spreads unpredictably, and recent examples show how high the stakes can be. Uber CEO Travis Kalanick stepped down after misconduct at the company proved so widespread that the board hired former U.S. Attorney General Eric Holder to investigate and the company fired about 20 employees. And it wasn’t long after Volkswagen was accused of manipulating its emissions-testing software that automakers from Mitsubishi to Mercedes-Benz became embroiled in investigations of their own.

Though ethics contagion is impossible to prevent, the ability to predict, diagnose and cauterize the forces that drive it is critical to any effective corporate ethics strategy. When we misdiagnose the problem or reach for oversimplified remedies (such as “overhauling corporate culture” or “tone from the top”), a whack-a-mole dynamic emerges in which old behaviors resurface and mutate into new, more dangerous ones. Often this vicious and viral cycle results from excessive focus on the behaviors themselves rather than on the contagion that drives them.

I group these drivers roughly into three categories: classic, insidious and new. Technology plays a disproportionate role — both catalyzing the contagion and scattering its impact — making it harder to remedy the harm and to prevent further spreading and mutation.

Classic drivers include greed, fear, social or economic pressure, skewed performance incentives, organizational silos, inadequate reporting mechanisms and failure to integrate ethics into recruiting. Insidious forces include a drive for perfectionism, arbitrary enforcement of standards, a sense of (and sometimes actual) impunity and arrogance. New drivers include social media, hacking, the dark web, civilian space travel and the rise of super-voting shares for tech founders.

There is no single ideal corporate process or structure, or one-stop inoculation. But there are steps that can significantly reduce risk, increase awareness throughout the company and help protect reputation when the inevitable ethics infractions occur.

Ethics and reputation risk management should be positive-spirited, efficient, innovation-friendly and tailored to the company’s strategy and financial model. I offer a series of suggestions that can help accomplish these goals. While by no means exhaustive, these suggestions support both legal compliance and ethics above and beyond the law. Most can be integrated into existing management and governance processes for efficiency.

First, establish a contagion baseline. Map out the forces driving or tolerating contagious unethical behavior that are, or could be, present in the company. The more global and complex the business, the more work this initial step requires. But it’s a one-time investment that pays off with efficient annual monitoring.

This mapping includes identifying factors — an inexperienced CEO, founder super-voting rights, new consumer tech products or unique sales pressures — and considering how to eliminate or mitigate each of them. In some cases, elimination is critical: for example, eliminating impunity in the face of sexual harassment at a venture capital firm. In other cases, mitigation suffices, such as implementing a whistleblower reporting system that routes every report to two recipients (e.g. the chair of the audit committee and a senior member of the management team) to protect the whistleblower and prompt action.
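To make the dual-routing idea concrete, here is a minimal sketch, assuming a plain SMTP relay and hypothetical intake and recipient addresses; a real deployment would sit behind a dedicated, anonymized reporting tool.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical addresses -- substitute your own. Both recipients receive
# every report independently, so no single person can quietly bury it.
RECIPIENTS = [
    "audit-committee-chair@example.com",  # board-level recipient
    "senior-manager@example.com",         # management-level recipient
]

def route_report(subject: str, body: str, smtp_host: str = "localhost") -> None:
    """Deliver a report to both recipients in a single send.

    The message carries a fixed intake address rather than the reporter's
    identity, which is one simple way to protect the whistleblower.
    """
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "whistleblower-intake@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content(body)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```

The key design choice is that neither recipient depends on the other to be informed; suppressing a report would require both the board-level and management-level recipients to act in concert.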

Second, integrate ethics into every step of the recruiting process. Explicitly state in job advertisements that a track record of the highest standards of ethical behavior is a hiring criterion; integrate ethics into the interview; specify in the employment contract (with legal advice) that serious ethics transgressions can result in sanction or termination; and include ethics in performance and compensation reviews. A well-run job interview or performance review can ferret out risks, such as a candidate’s inability to stand firm under pressure or a lack of understanding of how technology affects ethics risk and reputation management.

Third, integrate ethics into product and service development at the earliest stage — conception and design, not delivery — particularly with consumer-facing technology. Even in “beta testing,” probing potential ethics challenges is possible — for example, the potential mental health consequences of Amazon Echo Look.

Fourth, run hypothetical ethics risk scenarios. For example, even if you limit liability through legal measures, ask what additional risk management steps you would take if that limit were removed. Similarly, companies can run regulatory response scenarios, asking how they could proactively buttress ethics efforts to minimize the chances of burdensome regulation and to lead (rather than react to) the relationship with regulators. Uber, for example, could have proposed to the European authorities a blended model treating drivers as part employee, part independent contractor, rather than leaving the European Court of Justice to force a binary choice between the two.

Fifth, establish an ethics risk register that lets management track execution of mitigation steps and that the board reviews annually. The register should incorporate the baseline contagion map — not just examples of past transgressions.
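As an illustration, here is a minimal sketch of such a register, assuming a simple in-memory model with hypothetical field names; in practice this would live in a governance, risk and compliance (GRC) tool or a shared database.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RiskEntry:
    driver: str                      # contagion driver from the baseline map
    category: str                    # "classic", "insidious" or "new"
    mitigation: str                  # planned elimination or mitigation step
    owner: str                       # accountable executive
    completed: bool = False
    last_reviewed: Optional[date] = None

@dataclass
class EthicsRiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def open_items(self) -> list[RiskEntry]:
        """Mitigation steps management has yet to execute."""
        return [e for e in self.entries if not e.completed]

    def board_review(self, today: date) -> None:
        """Annual board pass: stamp every entry as reviewed."""
        for e in self.entries:
            e.last_reviewed = today

# Seed the register from the baseline contagion map, not just from past
# incidents -- so it tracks drivers before they produce transgressions.
register = EthicsRiskRegister()
register.add(RiskEntry(
    driver="founder super-voting rights",
    category="new",
    mitigation="adopt a sunset clause on super-voting shares",
    owner="General Counsel",
))
print(len(register.open_items()))  # -> 1
```

Seeding entries from the map rather than from incidents is what makes the register forward-looking: it flags drivers of contagion before they surface as transgressions.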

Sixth, in the tech world, beware the common ethics barometer known as the “New York Times test,” which asks decision-makers to consider what the headlines might be if certain behaviors occurred. Boards should instead consider potential headlines with 20/20 foresight. This is critical in part because ethics challenges that make headlines rarely look as forgivably cutting-edge (the kind the company “couldn’t and shouldn’t have known” about) when regulators and the public look back with 20/20 hindsight. In addition, younger, tech-minded employees are often less sensitive than boards might like to the reputational exposure their decisions create.

Finally, one classic pillar of risk management still holds even amid innovation and technology: disclosure. Proper disclosure fairly shifts some risk from the company to the user — provided, that is, the disclosure is clear and understandable. Think, for example, of “Smoking Kills” on cigarette packages versus the 13 or so pages of fine-print legalese in some social media companies’ Terms of Use.

Unfortunately, too many companies (somewhat understandably) limit disclosure to protect proprietary information. An alternative: at least tell users that you are not disclosing certain potential risks; for example, how the algorithm behind a product like Amazon Echo Look decides how to rank your outfit.