Artificial intelligence (AI) is typically portrayed in two diametrically opposed ways: either it is the great hope for business technology, or it is an existential threat to the way we live. As is often the case, the middle way is closer to the truth. Apart from some mooted regulations from the European Union, AI, just like social media before it, is growing so rapidly that the technology industry itself will need to pioneer the ethical standards that enable AI while also protecting civil liberties.
With its research and development history, Intel has been one of the first big tech firms to define the foundations of ethical AI. Launched last month, the Intel Responsible AI charter sets out the chip giant’s perspective on responsible AI and the four pillars the firm says the technology industry can use to ensure AI succeeds not only for technologists but also for wider society. Lama Nachman, Intel Fellow and Director of Intelligent Systems Research Lab at Intel Labs, led the development of the Responsible AI charter and says:
We see the unbelievable potential of AI. AI is really transforming what we are able to do in drug discovery and climate change, for example. However, if you don’t address the risks of AI, then you are not enabling the potential.
Within the Responsible AI charter, Intel says the reasons behind the project are:
Despite AI’s many real-life benefits, Hollywood loves to tell alarming stories of AI taking on a mind of its own and menacing people. These science fiction scenarios can distract us from the very real but more banal ways in which poorly designed AI systems can harm people.
It is critical that we continuously strive to responsibly develop AI technologies so that our efforts do not marginalize people, use data in unethical ways or discriminate against different populations – especially individuals in traditionally underrepresented groups. These are problems that we as developers of AI systems are aware of and are working to prevent.
The charter is clearly a call to arms to the entire industry, and as a firm best known for its chips, Intel is well positioned to bring a broad technology community together. Nachman says:
We take our ethical reputation very seriously at Intel. We all must lean into this issue and address it together; after all, it takes a village to raise a child.
Work began in 2017, when Nachman and peers at Intel Labs saw major developments in deep learning but also rising levels of concern. She says:
We wanted to understand what this meant for AI and for Intel. We did a lot of ethnographic research (the scientific description of peoples and cultures with their customs, habits, and mutual differences) and worked with third-party collaborators in different technology companies, and academia and funded other areas of AI ethics research.
With the full support of the senior leadership team, Nachman and her team were given the mandate to develop the Responsible AI charter. Nachman says this charter is benefiting Intel:
The depth of understanding we have helps us make nuanced decisions about where you put your resources in terms of AI ethics.
Nachman describes this as an obvious development for Intel, which has had Global Human Rights Principles in place since 2009. She says:
We had to look at how we moved the whole ecosystem in terms of AI so that it was consistent with our policy on human rights.
This research led to the creation of the Responsible AI framework. Nachman says of the framework:
It is a first step in setting the tone of what the North Star for responsible AI should be and defines the principles of where we want to get to.
The framework has four pillars: Internal and External Governance; Research and Collaboration; Products and Solutions; and Inclusive AI.
Internal and External Governance led to the creation of an Advisory Council to ensure that AI developments at Intel meet Intel's principles on human rights, human oversight, explainable AI, security, safety, reliability, privacy and inclusion.
Research and Collaboration commits Intel to continued research into privacy, security, human-AI collaboration, trust in the media, AI sustainability, explainability and transparency. In effect, this ensures that the Responsible AI charter and AI ethics across Intel do not become a 'done once' project, and that Intel remains aware that AI ethics continually evolve. Nachman says of the third pillar, Products and Solutions:
We wanted to make sure that if we put something out into the market and then find it is used for human rights violations, for example, then we will stop shipping that technology to that customer.
The final pillar, Inclusive AI, deals with the already prevalent fear that there is bias in AI. Nachman says:
Intention and consequence are two very different things, and there are often unintended consequences.
This was most clearly seen in the AI-driven recruitment technology built by online retailer Amazon, which had to be withdrawn in 2018 after it was found to discriminate against female candidates.
In creating the charter, Intel has also set up the Responsible AI Advisory Council, whose role it is to:
Review product and project development with our ethical impact assessment through the lens of six key areas: human rights; human oversight; explainable use of AI; security, safety and reliability; personal privacy; and equity and inclusion. The goal is to assess potential ethical risks within AI projects and mitigate those risks as early as possible. Council members also provide training, feedback and support to the development teams to ensure consistency and compliance across Intel.
Nachman says of the process:

We learned that we needed an Advisory Council, which spans the company and sits at the corporate level.
We took an incremental approach to setting up the Advisory Council so that we deployed and learned as we went along.
Nachman expects a similarly iterative approach as Intel learns to live with the Advisory Council and its demands on how the organization delivers ethical AI. She says:
There will be a push to make this a checklist, but actually, it is a really thorough process, and the engagement will lead you to elevate the capabilities of the technology and responsibility. So the second time a team goes to the Advisory Council, their reports or research will be much more thought through. That is only possible when you are constantly thinking these challenges through.
Nachman joined Intel for the second time in her career in 2003 and became Director of Human & AI Systems Research Lab in 2017. She says of the role:
What we are trying to do is amplify human potential with AI.
History is littered with unintended consequences, especially when it comes to technological developments. Perhaps if Karl Benz had considered where the CO2 and particulates pouring forth from his automotive invention would go, and how desirable the product would become, our environment would be in a less perilous state.
Technology always outpaces regulators, and it therefore falls to technology creators to shoulder some of the burden and responsibility. CIOs who become customers of Intel AI have a responsibility to their teams, but all too often, technology suppliers fail to spot where they, too, have a responsibility.
With this framework, Intel is demonstrating both responsibility and good business sense.
Intel is also protecting itself. Nachman revealed to me that a partner in the framework's development, Article One, ran a 'bad headline' workshop, which helped her team and Intel begin to understand how a rogue AI technology or unethical usage could damage the business.
As Nachman says, AI has huge potential to accelerate the development of solutions to the major problems of our age, such as climate change, but only if the technology is trusted. By publishing the Responsible AI charter online, Intel is both demonstrating its own trustworthiness and providing business technology leaders with an initial framework that they, too, can use to build trust.