Politicians and nonprofits will struggle to keep AI in check—but corporate boards can’t afford to fail




In a rapidly evolving landscape, organizations of all kinds—public, private, commercial, nonprofit—are embarking on an exhilarating journey into the world of generative artificial intelligence (AI). While some enterprises had already developed traditional, pre-generative AI systems, principally relying on machine-learning predictions fueled by structured data, many are now navigating uncharted AI territory, and the possibilities seem endless.

We’ve been mesmerized by emerging technology before, from the blockchain revolution and the allure of the metaverse to the frenzy surrounding NFTs, but generative AI is a new pinnacle of technological innovation. The vast spectrum of applications is truly breathtaking, ranging from automating business processes to unleashing AI for profound societal benefits.

However, amid this great promise lurks great peril. Much has been written about the risks of AI, from the possibility of discriminatory outcomes to more existential threats to humanity. The question that looms large is: Who will assume the pivotal responsibility of safeguarding humanity in this age of AI?

A failure to act

While several NGOs and civil society organizations diligently analyze AI risks, the technology's vast commercial potential makes it challenging for nonprofits to restrain AI development. Policymakers, too, are taking steps to address this AI tidal wave, as evidenced by the European Parliament's recent adoption of the EU AI Act.

However, global AI regulation today is immature, and the prospect of stringent, coordinated rules remains uncertain.

Sometimes, government leaders choose not to regulate innovative technology. In 1997, the Bill Clinton administration issued a seminal report, A Framework for Global Electronic Commerce, advocating industry self-regulation rather than undue government restrictions on electronic commerce. A year later, President Clinton signed the Internet Tax Freedom Act, promoting innovation by allowing the internet to flourish with limited taxation.

Similarly, the Chinese government strategically chose not to heavily regulate FinTech innovation and the rise of the super-apps during a significant period of those technologies' development.

If NGOs and governments aren’t assuming the role of stringent AI watchdogs, then we must look to corporations and their board leadership to seize the moment and assume the role of guardians of AI.

Boards serve a pivotal governance role, overseeing a broad spectrum of organizational risks, including the intricate landscape of AI risks. The landmark 1996 Caremark decision by the Delaware Court of Chancery established a foundational (albeit minimal) legal standard for board oversight. But can we rely on the legal system alone to ensure robust governance of corporate AI programs? Corporate boards are afforded considerable latitude in exercising their governance responsibilities: they need only establish an oversight system, monitor it, and act on any red flags that arise.

Establishing boundaries of ethical tolerance

Considering the distinctive ethical challenges posed by AI, board responsibility should transcend minimum legal requirements and embrace a profound ethical duty by pioneering AI governance. Boards can adopt ethical AI practices that safeguard stakeholder interests and humanity at large while also meeting their fiduciary duty to shareholders. Not only can ethical behavior enhance a company's reputation and trustworthiness, but it also lays the foundation for enduring success. I'm encouraged that many boards are going even further, promoting ethical AI actions that benefit society even without direct evidence of a positive connection to their business.

The board must establish the boundaries of ethical tolerance within the enterprise, setting the limits for what is morally acceptable across the company’s AI initiatives. Ethics should serve as the guiding light illuminating every facet of an enterprise’s AI strategy. Here are five actions that boards should consider to promote ethical AI governance:

Advance the board’s technology expertise: Fill an open board seat with a technologist or AI expert. Embrace AI firsthand: experiment with generative AI tools to enhance your own board work, and invite outside AI experts to provide fresh perspectives.

Elevate beyond legal compliance: Remember that Caremark represents only the minimum oversight standard. Go further and elevate AI governance beyond legal compliance, whether as a critical component of corporate social responsibility, an expression of ESG principles, or another manifestation of a duty to society. Your company’s AI applications must not only conform to positive law but also uphold fundamental human rights, recognizing our duty to natural law.

Form an ethics council: Establish an ethics council composed of experts versed in ethics, anthropology, technology, data, law, and human rights. Leverage this multidisciplinary council to rigorously evaluate enterprise AI applications, providing a fresh perspective on ethical considerations. Diversity of perspective is key to a proper AI ethics analysis.

Establish a board technology committee or advisory board: A preview of EY research reveals that 13% of S&P 500 companies have instituted some form of board-level technology committee. These committees have proven invaluable in effectively managing technology risks and steering the innovation and growth agenda fueled by technology.

Foster collaboration within the AI ecosystem: Boards should ensure the enterprise is effectively collaborating within the AI ecosystem. Engage with industry actors, policymakers, and ethicists to collectively establish ethical AI standards that reflect societal values.

Corporate boards stand at the vanguard of ethical AI governance. They’re uniquely positioned to safeguard humanity by ensuring AI is designed, developed, and deployed ethically and responsibly. Boards hold the compass that can guide us toward a future where AI is harnessed not only for its vast potential but also for its integrity. They bear the heavy responsibility of ensuring that AI is pursued not merely for economic gain but applied ethically, responsibly, and with unwavering dedication to our shared values.

As we navigate the complex waters of the AI revolution, let us not merely meet our legal obligations but transcend them. Let us infuse ethics into every line of software code, every decision, and every action. Let us cultivate a legacy of responsible innovation that serves as a beacon for generations to come and safeguards the dignity and rights of all. Let us embark on this noble journey together, for the future of AI, and indeed humanity, hinges on the choices we make today.

Jeffrey Saviano is an AI ethicist. He holds an appointment in the Edmond & Lily Safra Center for Ethics at Harvard University, where he is a member of the GETTING Plurality Research Network and collaborates with the Harvard community to study AI ethics. Jeffrey is also a Senior Fellow and Research Affiliate at MIT Connection Science, a Lecturer at Boston University School of Law, the EY Emerging Technology Strategy & Governance Leader, and the AI Leader within the EY Center for Board Matters. The views reflected in this article are the views of the author and do not necessarily reflect the views of Ernst & Young LLP or other members of the global EY organization.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.


