COVID-19 is accelerating digital transformation. Mitigating bias in AI has never been more important.

The world is facing massive disruption as COVID-19 continues to severely impact people, communities, and economies in every corner of the world. This is a before-and-after moment in the history of the economy and its digital transformation. Big technology platforms producing AI systems, long targets of politicians and regulators, are now cast as the good guys at the center of efforts to fight the coronavirus. We’re also seeing greater reliance on digital technologies and the acceleration of previously slow-moving technology trends, particularly machine learning systems, that are likely to shape the future as we know it.

AI has transformative potential, but it can also replicate, solidify, and amplify biases, deepening social inequities and posing intractable risks to businesses and society more broadly. What do you need to know, and what actions can equity-fluent leaders take?

Why AI systems are biased

AI models make it possible to automate judgments that were previously made by individuals or teams of people. Using machine learning, AI systems make inferences from data about people and are increasingly used to make decisions that affect many aspects of our lives. This includes who receives an interview for a job, whether someone will be offered credit, and which products are advertised to whom. Governments lean on these systems to plan services and allocate resources, such as which schools children will attend, which neighborhoods will be labeled “high risk” for crime, who gets approved for financial assistance, and more. They are also helping to fight COVID-19, from tracking and predicting the spread of the virus to directing prevention and clinical resources to people at risk for mental health or substance use problems.

While exciting, AI systems can embed bias. Large-scale AI systems are developed in a handful of tech companies and a small set of elite university laboratories, spaces in the West that tend to be white, affluent, technically oriented, and male. AI is not neutral, and those who design, develop, and maintain AI systems shape how such systems understand the world. At a more granular level, bias can be present in the generation, collection, and labeling and management of data, as well as in the design and operation of algorithms. Our forthcoming bias in AI framework breaks this down.
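To make the data side of this concrete, here is a minimal sketch in Python of the kind of descriptive check a team might run before training. The dataset, the gender column, and the interviewed label are all hypothetical, invented for illustration; the point is simply that skewed representation and skewed historical labels are visible before any model is built.

```python
# Minimal sketch of a pre-training dataset check. The hiring data,
# column names, and values below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender":      ["F", "F", "F", "M", "M", "M", "M", "M"],
    "interviewed": [0,   0,   1,   1,   1,   0,   1,   1],
})

# How well is each group represented in the training data?
representation = df["gender"].value_counts(normalize=True)

# How often does each group carry the positive label? Historical
# inequities in who got interviews are baked into these rates, and a
# model trained on this data will tend to reproduce them.
positive_rates = df.groupby("gender")["interviewed"].mean()

print(representation)
print(positive_rates)
```

Checks like this surface bias in the data; bias introduced in the design and operation of algorithms requires separate scrutiny.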

Adoption of AI systems is accelerating

COVID-19 is accelerating digital transformation. In our day-to-day lives, this is evident in obvious ways, from increased teleconferencing and virtual events to essential goods delivery. It is also occurring in less visible ways. Companies large and small are increasingly using chatbots and robots in the face of social distancing. From February to April, IBM saw a 40% increase in traffic to Watson Assistant, an AI-powered chatbot. Beyond chatbots swooping in at call centers, factories are turning to robots, and companies are deploying them for other mundane tasks (e.g., robots are scrubbing Walmart floors). Even as businesses reopen, further rapid adoption of chatbots and robotics is expected, with implications for the individuals who previously held those jobs and for economic inequality more broadly. Other economic crises have shown similar trends: in each of the three recessions of the past 30 years, the pace of automation increased. Business leaders at IBM anticipate that adoption of AI in the corporate world will surge to as much as 90% in the next 18–24 months.

What keeps us up at night are the AI systems informing decisions around allocating resources, information, and opportunities. These are the systems where bias affects who gets the job interview or who gets financial assistance. In light of the pandemic, governments are facing significant challenges due to increased demand for services, and as highlighted in a McKinsey Center for Government Insight series, they are considering increased automation of internal government processes to free up resources.

The trends are unmistakable — and probably irreversible.

What’s next?

COVID-19 may be killing the big-tech backlash. Governments are turning to AI systems for emergency response, and initiatives for new AI-based technologies are poised to speed up to mitigate the crippling impacts COVID-19 is having on the economy and government budgets. These technologies might help offset the low GDP growth and productivity expected in the coming years and speed recovery. This would mark a key pivot in the tech policy of the European Union and other Western nations. But at what cost?

It is important to pause, particularly around AI systems that make and inform decisions, because these systems are riddled with biases. AI systems’ outputs are not objective truths but human creations, embedded with human values and decisions and built on historical and current data from an inequitable world. A thorough understanding of the different forms of bias is largely missing, and what counts as “fair” is often unclear. Without appropriate action now, we are headed toward a future that remains stuck in the past, exacerbating and legitimizing societal inequities.

AI has undeniable, transformative potential — but understanding and addressing bias is imperative.

What actions can public and private leaders who are developing, managing, and/or using AI systems take? For starters, as robots and chatbots take on more jobs, reskilling and upskilling workers is key to ensuring that individuals can access new jobs. Specifically related to mitigating bias in certain AI systems, we highlight four key actions (our forthcoming playbook for leaders on mitigating bias in AI goes into greater detail):

  1. Insist on diverse, multi-disciplinary teams in the design, development, and management of AI systems. This means diverse individuals (in terms of race, ethnicity, gender, socio-economic status, ability, etc.) and team compositions that include social scientists, philosophers, domain experts, and community leaders alongside data scientists and engineers.
  2. Be curious and insist on transparency and explainability. This includes getting clarity on what is in the datasets used to train algorithms, and what historical or current inequities could be embedded in that data. Bias doesn’t manifest in the data alone; it can also enter through the design, development, and management of algorithms.
  3. Don’t take the outputs of AI systems as “truths.” Humans have a bias toward trusting technical systems, but these systems need to be held in check. This includes conducting audits for technical and non-technical forms of bias (a minimal audit sketch follows this list) and developing human-in-the-loop systems.
  4. Prioritize equity to unlock value responsibly. Equity needs to be a priority, not just growth and efficiency. If we have learned one thing from recent history in Silicon Valley, it’s that moving fast will break things. And in this case, the “who” and “what” that gets broken will likely be lives of already marginalized populations, further exacerbating inequalities.
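To illustrate the technical side of such an audit, here is a minimal sketch in Python. It assumes a trained model’s binary predictions and a protected-group column are already in hand; the group labels, the variable names, and the 0.8 threshold (the common “four-fifths rule” heuristic from US employment guidance) are assumptions for illustration, not a prescribed method.

```python
# Minimal sketch of a post-hoc bias audit on model predictions.
# Groups, predictions, and the review workflow are hypothetical.
import pandas as pd

audit = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Rate of favorable outcomes per group (a demographic-parity check).
rates = audit.groupby("group")["prediction"].mean()

# Disparate impact ratio: worst-off group versus best-off group.
di_ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {di_ratio:.2f}")

# The four-fifths rule flags ratios below 0.8, a simple hook for the
# human-in-the-loop step described in action 3 above.
if di_ratio < 0.8:
    print("Potential adverse impact: route to human review.")
```

An audit like this catches only one statistical form of bias; non-technical audits of problem framing, data provenance, and downstream use remain essential.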

Interested in how to unlock value responsibly and equitably in AI? Sign up for our playbook on mitigating bias in AI, launching next month.

At the heart of UC Berkeley's Business School, the Center for Equity, Gender, and Leadership educates equity-fluent leaders to ignite and accelerate change.
