Responsible Artificial Intelligence (AI) for entrepreneurs: What does it mean and where to begin?
By Genevieve Smith & Ishita Rustagi
In mid-November, Sam Altman, CEO of OpenAI, the company behind ChatGPT, was briefly ousted by its board over concerns that commercialization was being prioritized over safety and responsibility. What followed was a dramatic unfolding involving a job offer at Microsoft to lead a new research lab, his co-founder quitting in solidarity, and a letter from nearly 700 OpenAI employees threatening to follow him if he wasn’t reinstated. Soon after, Altman was reinstated, amid negotiations to give the board more oversight of his actions and the departure of the board members who had removed him over ethical concerns. The chaotic week put one topic squarely top of mind: responsible AI.
ChatGPT, an AI-powered natural language processing tool, took the world by storm in November 2022 and currently serves over 100 million weekly active users. Tech companies around the world raced to launch their own generative AI tools, while others hungrily sought to integrate ChatGPT and similar tools into their products. Now, generative AI startups are proliferating rapidly. Venture capital investment in generative AI firms across the globe reached $15.2 billion in the first half of 2023, a fivefold increase over the same period the previous year. While exciting, concerns remain (as reflected by the recent OpenAI drama): in July 2023, the US Federal Trade Commission opened an investigation into OpenAI over its handling of personal data, the potential for its tools to give users inaccurate information, and risks of harm to consumers.
Entrepreneurs are increasingly developing and leveraging AI technologies, particularly generative AI, to support access to healthcare, enable access to finance for the underbanked, adapt learning content for rural students, and more. AI tools can create immense opportunities for society, but that does not make them immune from deepening inequalities or causing economic harm.
For entrepreneurs utilizing AI, we ask: What does responsible AI look like in practice and what do leaders need to know? This article draws from our award-nominated 2022 California Management Review case study on Responsible AI: Tackling Tech’s Largest Global Governance Challenges to shed light on responsible AI for entrepreneurs.
Defining responsible AI & why businesses should care
While there isn’t one universal definition of responsible AI, definitions tend to align around responsible AI being the practice of designing, developing, and deploying AI tools that are safe, fair, and trustworthy.
Responsible AI is important not only to ensure that entrepreneurs fulfill promises of social impact, but also to secure business benefits. In the Economist Intelligence Unit’s 2020 executive survey, 90% of respondents agreed that the initial costs of responsible AI were far outweighed by potential long-term benefits and cost savings, and 97% considered ethical AI critical for innovation. More specifically, responsible AI approaches can reduce risk, engender trust, enhance adoption, and accrue financial benefits.
From a risk perspective, when ethical issues or biased outcomes in AI are discovered and publicized, organizations face negative media coverage and damaged consumer and employee trust that can impact the bottom line. Relatedly, responsible AI approaches are important for staying ahead of forthcoming legislation. The OECD currently records over 800 AI policy initiatives from across the globe. In particular, the EU’s AI Act, a proposed law that would regulate AI systems across industries according to their level of risk, has the potential to become a global standard. Startups that embed good responsible AI practices from the outset can avoid the larger costs of retroactively assessing their algorithms once regulations take effect.
For entrepreneurs, responsible AI approaches can be an important differentiator for investment and funding opportunities. Beyond reassuring investors, venture capital funds are emerging with a focus on funding innovative and ethical AI startups, such as the generative AI fund Salesforce Ventures launched in 2023. Multilateral organizations and donors are also focusing on advancing “AI for good” responsibly, such as the USAID Equitable AI Grant Challenge or the Gates Foundation’s equitable AI and large language model initiative.
Ultimately, startups have the opportunity to leverage what is currently known around responsible AI from the get-go to better fulfill their missions and gain important business benefits.
Responsible AI principles inform action
In response to a clear need for responsible AI, many large tech companies, governments, NGOs, and other organizations are developing responsible and ethical AI principles, particularly related to fairness and justice, transparency, and privacy. AI principles, often the first step to institutionalizing responsibility, can inform new strategies and initiatives, impact employee behavior, result in the adoption of new internal governance approaches such as review processes, and provide assurances to external stakeholders.
While startups don’t have the same resources as large tech firms to invest in responsible AI teams, they can still learn from those firms’ practices for institutionalizing and embedding responsibility. For example, Google launched a set of seven principles for responsible AI in 2018 and subsequently established a Responsible Innovation team to lead their implementation, conducting review processes to assess new and existing products against the AI principles, running trainings, and sharing tools that support responsible AI approaches. Microsoft’s Office of Responsible AI, established in 2019, sets policies and governance processes while coordinating efforts across the company. It includes a research arm to advance responsible AI principles, helps engineering teams implement responsible AI practices, and coordinates employees called “Champs” who are tasked with bringing ethics to different teams.
What are the challenges to operationalizing responsible AI?
Important challenges and tensions complicate ethical decision-making around algorithms. At a high level, there is a lack of diversity among those designing ethical AI principles, as well as in the teams developing AI tools. This matters because AI tools reflect the priorities and perspectives of those who develop and manage them.
At a more granular level, challenges include:
- Short-term corporate priorities can be at odds with ethics. The tech industry is fast-paced, prizing innovation and disruption. An ethical approach can require slowing down and can therefore be seen as at odds with being first to market; it may also sometimes block features or entire products.
- Formal guidance to operationalize principles is lacking. Many companies with public commitments to responsible AI have not walked the walk. Uncertainty around roles, responsibilities, and processes for responsible innovation remains, alongside a lack of incentives and connection to performance goals. As a result, principles have often not been translated into day-to-day practice, and it is hard to know when principles are violated. Companies also contend with the fact that even when a violation is known, accountability can remain lacking. At Microsoft, for example, teams still struggle with the open-endedness of ethics and want more concrete practices and tools; the company is working to close this gap by helping engineers address and solve such problems themselves.
- Prioritizing technical solutions can miss the big picture of how AI can deepen inequality. Most principles and associated guidelines suggest that technical solutions are required to solve ethical problems that arise in AI tool development and, in cases like harmful bias, focus on technical forms of bias (e.g., IBM’s AI Fairness 360 toolkit). This focus perpetuates the mistaken notion that ethical challenges are “design flaws” to be tackled largely by engineers, as opposed to social scientists with an understanding of how AI can perpetuate existing discrimination and inequality.
- A focus on individual behavior without broader culture change is ineffective. There are often gaps between ethical intentions and ethical behavior in organizations. Ethics education and training programs to help people operationalize AI principles are important, but have limited impact without broader organizational, operational, and cultural change.
- There is a lack of clarity around concepts such as “fairness”. “Fairness” can mean different things in different contexts; indeed, definitions of “fairness” vary across social science, law, quantitative fields, and philosophy. Within AI, researchers and practitioners often default to mathematical definitions of fairness (e.g., satisfying some criterion, such as equal or equitable allocation, representation, or error rates, for a particular task or problem; see the sketch after this list). This can divert attention from underlying societal tensions by ignoring how some groups tend to experience advantages while others experience disadvantages. As a result, discrimination, oppression, and power dynamics may go unaccounted for. Product teams must define what fairness means and how to measure it in their specific product contexts.
- For startups, an additional challenge relates to resource constraints. Startups do not have the same level of resources as large tech firms to build comprehensive approaches to operationalizing responsible AI and, relatedly, may not prioritize positions or staff time dedicated to responsibility if it is seen as a “nice to have”. There can also be a misguided sense that responsibility is taken care of if the organization has a social impact mission. Still, there are strategies startups can and should use to begin building a responsible AI approach from the get-go.
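To make the fairness point concrete, here is a minimal sketch of two of the mathematical criteria mentioned above, selection-rate parity and error-rate parity, for a hypothetical binary classifier. The variable names (y_true, y_pred, group) and the toy data are illustrative assumptions, not a prescribed method; a real audit would use a team’s own data and its chosen definition of fairness.

```python
# Minimal sketch: two common mathematical fairness checks for a binary
# classifier. All names and data below are illustrative placeholders.
import numpy as np

def selection_rates(y_pred, group):
    """Share of positive predictions per group (a demographic-parity view)."""
    return {g: y_pred[group == g].mean() for g in np.unique(group)}

def error_rates(y_true, y_pred, group):
    """Misclassification rate per group (an error-rate-parity view)."""
    return {g: (y_pred[group == g] != y_true[group == g]).mean()
            for g in np.unique(group)}

# Toy data: predictions for two demographic groups, A and B.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(selection_rates(y_pred, group))      # {'A': 0.5, 'B': 0.25}
print(error_rates(y_true, y_pred, group))  # {'A': 0.25, 'B': 0.5}
```

Note that such criteria can conflict with one another: equalizing selection rates can unbalance error rates, and vice versa. Choosing which metric to optimize or report is itself a value judgment that product teams must make explicitly.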
Strategies to advance responsible AI in startups
While startups lack the resources of large tech firms, they can still learn from how those firms grapple with and advance responsible AI.
Start with the following:
- Recognize that AI can reflect and reinforce social inequalities and power structures (even if it is “for good”). At a high level, entrepreneurs must come to terms with the fact that data and AI are not objective; rather, power plays an integral role and is embedded in the design, development, and management of AI technologies. Be transparent about limitations and challenges, then prioritize responsible AI as core to a sustainable organization.
- Develop responsible/ethical AI principles for the organization that inform how the organization uses and manages AI. Identify which principles are important for the organization, define them clearly, and communicate them to staff and investors alongside examples of how to operationalize them.
- Bring the principles to life through access to training opportunities, and encourage teams to utilize existing tools and resources. Regular training and learning development opportunities, such as a speaker series, can cover topics related to responsible AI for the organization, including what bias in AI is and how to mitigate it, what fairness means, and technical and non-technical approaches for operationalizing fairness. If hosting speakers or trainings in house isn’t an option, support staff to attend trainings, events, or conferences that focus on such topics. Complement learning opportunities with actionable tools and make transparency the norm, for example by using Dataset Nutrition Labels and Model Cards (see the sketch after this list). If possible, create a dedicated role in the organization that supports responsible AI: ensuring the responsible AI principles are incorporated, advising teams on which tools to utilize, overseeing training and development opportunities, and staying up to date on key trends.
- Ensure responsible AI is communicated as an expectation by updating performance reviews to include a reflection on how the individual has upheld the responsible AI principles. Relatedly, encourage and support teams to think about how to operationalize them. Check out this tool to update performance review processes and OKRs (objectives & key results). Also, make sure that the organizational culture does not lead to retribution when people report ethical issues internally, and that there are clear pathways for escalation.
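As a concrete starting point for the transparency tools mentioned above, the sketch below represents a model card as structured data, loosely following the fields proposed in “Model Cards for Model Reporting” (Mitchell et al., 2019). The model name and all field values are hypothetical placeholders, not a prescribed schema.

```python
# Minimal sketch of a model card as structured data. Fields loosely follow
# Mitchell et al. (2019); every value is an illustrative placeholder.
import json

model_card = {
    "model_details": {
        "name": "loan-approval-classifier",  # hypothetical model
        "version": "0.3.0",
        "owners": ["ml-team@example.org"],
    },
    "intended_use": {
        "primary_uses": "Pre-screening of small-business loan applications.",
        "out_of_scope": "Consumer credit; fully automated final decisions.",
    },
    "training_data": "2019-2023 applications; see dataset nutrition label.",
    "evaluation": {
        "metrics": ["accuracy", "per-group false negative rate"],
        "disaggregated_by": ["gender", "region"],
    },
    "ethical_considerations": "Historical approval data may encode past bias.",
    "limitations": "Not validated for applicants outside covered regions.",
}

# Publish the card alongside the model so users and auditors can review it.
print(json.dumps(model_card, indent=2))
```

Keeping the card in version control next to the model code helps the documentation evolve with the model rather than going stale.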
AI startups are exploding, particularly those using generative AI. While exciting, this growth cuts both ways: just as AI tools can open immense opportunities for addressing social and environmental concerns, so too can they embed and exacerbate inequalities and injustices. We are still working to understand the full range of ethical considerations and issues related to AI, and particularly generative AI. Meanwhile, AI policy is knocking at the door. Entrepreneurs have an opportunity to learn from the lessons of larger organizations and do the extra, but essential, work to ensure that AI tools are trustworthy and sustainable. By doing so, they may reap the longer-term business and social benefits.