It has been a week of big AI events and announcements. We've seen:
- An Executive Order from the White House, outlining new initiatives to establish standards for AI safety and security.
- The AI Safety Summit, hosted in the United Kingdom, gathering global leaders (including from the United States, China, and the European Union) and executives from AI and technology giants to build solidarity around the need to address the risks introduced by AI development.
- A speech from Vice President Kamala Harris, announcing more AI-related initiatives from the United States, including the establishment of the United States AI Safety Institute.
The whole world is talking about AI, and for good reason. Recent groundbreaking advances in AI—particularly in generative AI—have opened the door to unprecedented possibilities for businesses and individuals. However, the adoption of AI technologies brings societal implications, the risk of misuse, data privacy concerns, and other challenges. As the use of AI gains momentum, ensuring the trustworthy and responsible development and usage of AI is paramount.
As a global technology incubation leader, Outshift is here to help shape the conversation around trustworthy and responsible AI. In this article, we’ll discuss what trustworthy and responsible AI is, and then we’ll talk about what we’re doing at Outshift to move AI usage and governance in the right direction.
What is trustworthy and responsible AI?
Responsible AI ensures that the development and use of AI are pursued ethically, with transparency and accountability. As AI/ML permeates our world, the societal impact and implications will only increase. If not approached responsibly, AI can have adverse effects, such as exacerbating inequalities or enabling disinformation.
However, with the recent surge of generative AI, responsible AI is not enough. AI must also be trustworthy. In a technology landscape now punctuated by deepfakes and doxxing, trustworthy AI addresses concerns around privacy, authenticity, and attribution head-on. When AI is trustworthy, users can leverage it with confidence in its security.
As we’re seeing in these recent announcements and events, the world has yet to arrive at a consensus regarding the prudent approach to AI development and usage. Promoting a clear understanding of trustworthy and responsible AI is vital to steering the direction of its future, and that’s what Outshift is focused on.
Pioneering the discussion
The AI Governance Alliance is an initiative of the World Economic Forum's Centre for the Fourth Industrial Revolution (C4IR), uniting “industry leaders, governments, academic institutions, and civil society organizations” to ensure “that the potential of AI is harnessed for the betterment of society while upholding ethical considerations and inclusivity at every step.” Stemming from the AI Governance Alliance was the Responsible AI Leadership Summit in April 2023. Outshift’s Stephen Augustus (Head of Open Source at Cisco) contributed to the summit’s final summary of recommendations: The Presidio Recommendations on Responsible Generative AI.
A few highlights from this important document include:
- Employ diverse red teams: What does this mean? Red teaming involves critical testing of AI systems to identify weaknesses and vulnerabilities. This concept was also mentioned in the recent Executive Order. The Presidio recommendation emphasizes diversity in team composition, calling for the incorporation of “members from varied genders, backgrounds, experiences and perspectives for a more comprehensive critique.”
- Build a common registry of models, tools, benchmarks, and best practices: To facilitate collaboration between researchers and producers in the field, this recommendation pushes for a shared set of tools and guidelines for developing AI systems. A common registry would also contribute to accountability and transparency.
- Ensure content traceability: The process of producing results in GenAI systems is often considered opaque. This recommendation encourages developers to better trace how content is generated and document its provenance. This will increase transparency and “help users discern the difference between human-generated and AI-generated content.” One way a provenance record might look in practice is sketched below.
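To make content traceability concrete, here is a minimal sketch of how a generator might attach a verifiable provenance record to its output. This is illustrative only: the function names, the HMAC-based signing, and the example-model-v1 identifier are our own assumptions rather than anything prescribed by the Presidio Recommendations, and a production system would more likely adopt an established provenance standard (such as C2PA) along with a managed signing key.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key; in practice this would live in a key-management service.
SIGNING_KEY = b"replace-with-a-managed-secret"

def build_provenance_record(content: str, model_id: str) -> dict:
    """Attach a verifiable provenance record to a piece of generated content."""
    record = {
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "generator": "ai",  # distinguishes AI-generated from human-generated content
    }
    # Sign the canonical (sorted-keys) form of the record so tampering is detectable.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance_record(content: str, record: dict) -> bool:
    """Check that the content matches the record and the record is unmodified."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and unsigned["content_sha256"]
        == hashlib.sha256(content.encode("utf-8")).hexdigest()
    )

if __name__ == "__main__":
    text = "A paragraph produced by a generative model."
    rec = build_provenance_record(text, model_id="example-model-v1")
    print(verify_provenance_record(text, rec))        # True
    print(verify_provenance_record(text + "!", rec))  # False: content was altered
```

Because the record carries both a content hash and a signature, any downstream consumer holding the key can confirm that the content is unmodified and that it was machine-generated, which is exactly the human-versus-AI distinction this recommendation is after.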
As global leaders—in industry, government, and every sector of society—wrestle with the imperative for trustworthy and responsible AI, Outshift is at the table and shaping this discussion.
Moving forward
The recent Executive Order from the U.S. government puts forth the need for standard processes to “ensure that AI systems function as intended, are resilient against misuse or dangerous modifications, are ethically developed and operated in a secure manner.” This is a great first step, and we see its value too: our product teams at Outshift are working on initiatives to help enterprises secure their AI systems and infrastructure from malicious attacks.
One recurring theme from the United Kingdom’s AI Safety Summit was the need to defend against the use of generative AI in large-scale disinformation campaigns. This speaks both to responsible AI, which includes promoting the equitable, bias-free use of GenAI, and to trustworthy AI, which deals with concerns around authenticity and attribution.
The development and use of artificial intelligence will only accelerate in the days ahead. That’s why a posture that prioritizes trustworthy and responsible AI is urgently needed. Through its roots in Cisco, Outshift has shared in pioneering responsible AI policy for the past several years. We aim to use this foundation to build better AI solutions, empowering organizations to use AI with integrity and confidence.
What’s next for us? In the days ahead, be on the lookout for more conversations on trustworthy and responsible AI between Vijoy Pandey (SVP and Head of Outshift) and Navrina Singh (Founder and CEO of the AI governance SaaS platform Credo AI and a National AI Advisory Committee member), in collaboration with VentureBeat, a media company covering disruptive technology with a focus on AI. We’re also covering this topic from the perspective of enterprises that need to scale AI responsibly, starting with the tough question: why are they hesitating?