All AI eyes are supposed to be on Bletchley Park in the UK this week, but no-one seems to have told that to the US or the European Commission.
In Washington, President Joe Biden – who’s not attending the UK-hosted AI Safety Summit – has signed off on a tough Executive Order compelling AI developers to share safety test results with the US Government when training an AI model that poses a potential “serious risk to national security, national economic security, or national public health and safety.”
The National Institute of Standards and Technology will create standards to ensure AI tools are safe and secure before public release, while the Commerce Department will issue guidance on labeling and watermarking AI-generated content to help users differentiate between authentic interactions and those generated by software.
As per the official announcement, the EO:
- Requires that developers of the most powerful AI systems share their safety test results and other critical information with the US government.
- Develops standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
- Protects against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening.
- Protects Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
- Establishes an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Orders the development of a National Security Memorandum that directs further actions on AI and security.
Privacy considerations are also built into the Order, which calls for a number of actions:
- Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques—including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.
- Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development.
- Evaluate how agencies collect and use commercially available information—including information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks.
- Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
The Administration has been quick to position this latest move as being complementary to other national and international efforts underway around the world:
The Administration has already consulted widely on AI governance frameworks over the past several months—engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The actions taken today support and complement Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.
The EO attracted enterprise tech approval on social media. Salesforce CEO Marc Benioff commented:
Today’s AI Executive Order is a giant leap towards ethical AI, government integration, & attracting great AI talent to the US! We pledge to ONLY use AI datasets that respect trust, privacy, & copyright. Your data will NEVER be our product. Trust is our NORTH STAR as we navigate the AI landscape.
Box founder and CEO Aaron Levie said:
Biden’s AI Executive Order is the gold standard for how governments should be regulating AI right now. Thoughtful but scoped oversight focused on practical risks, emphasis on privacy and security, focus on R&D across the ecosystem, and encouraging use of AI in the government.
Meanwhile, the G7 group of industrialized nations signed off on a set of international guiding principles and a code of conduct aimed at guiding companies in the development of AI technologies.
The 11-point code “aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems”.
The G7 Hiroshima Artificial Intelligence Process was established at the G7 Summit on 19 May 2023 to promote guardrails for advanced AI systems on a global level. The initiative is part of a wider range of international discussions on guardrails for AI, including at the OECD, the Global Partnership on Artificial Intelligence (GPAI) and in the context of the EU-U.S. Trade and Technology Council and the EU’s Digital Partnerships.
The Guiding Principles are:
- Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.
- Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market.
- Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increased accountability.
- Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society, and academia.
- Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures, in particular for organizations developing advanced AI systems.
- Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.
- Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.
- Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.
- Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.
- Advance the development of and, where appropriate, adoption of international technical standards.
- Implement appropriate data input measures and protections for personal data and intellectual property.
European Commission President Ursula von der Leyen was quick to emphasize the role of Europe in regulating AI:
The potential benefits of Artificial Intelligence for citizens and the economy are huge. However, the acceleration in the capacity of AI also brings new challenges. Already a regulatory frontrunner with the AI Act, the EU is also contributing to AI guardrails and governance at global level. I am pleased to welcome the G7 international Guiding Principles and the voluntary Code of Conduct, reflecting EU values to promote trustworthy AI. I call on AI developers to sign and implement this Code of Conduct as soon as possible.
It’s going to be a very busy week when it comes to jostling for AI leadership positions. But then the great thing about a standard is that there are so many to choose from. Still, we shouldn’t complain – at least there is a recognition of the need for urgent action in this field.