With the rapid rise of Large Language Models (LLMs), such as OpenAI’s ChatGPT, and with concerns being raised about the potential of Artificial General Intelligence (AGI), we at diginomica have been asking the question: how do governments regulate AI technologies when the outcomes of these tools are still unknown?
Apprehension around the pace of AI has even prompted some in the industry to call for a six-month pause on future development, whilst legislators, citizens and companies figure out what its likely impact will be. Of course, some of those calling for the pause have a vested interest in playing catch-up with the likes of OpenAI, but the sentiment of the discussion resonated with many observers who are fearful about where the technology is headed.
The reality is, as I’ve noted previously, that it’s incredibly hard to walk the line between AI regulation that encourages investment, curiosity and change, and AI regulation that protects people and organizations from unknown harms. Particularly when we don’t know a great deal about where development is headed, how the technology will be used and what the outcomes will be. Some nations and governments will go hard and fast, stifling the technology’s potential (and likely encouraging illegal uses), whilst others will be too laid back, assuming that ‘all will be well in the end’.
I don’t have the answers to those questions, but what I do propose is that, alongside the regulation debate, we should be actively encouraging governments, private companies and research institutions to embark on education and awareness campaigns right now. Whilst we don’t know the harms and unintended consequences that will come down the line, we are able to raise public interest and awareness of some of the pitfalls of AI that we are already seeing.
And this is possible if we learn from our previous experiences (and mistakes) during the rise of the Internet and social media. Users are far more adept now than they were five or ten years ago when it comes to spotting fake profiles, sharing information online, recognizing the risks of disinformation and seeking out trusted sources.
But it took a long time for governments and organizations to take this education process seriously. In the UK, for instance, a large internet awareness campaign, in collaboration with the private sector, started in 2014 – arguably a decade too late, if you think about the fact that Facebook launched in 2004.
And even with those campaigns taking place, and better education in schools, people were still unprepared for how false information was spread online during the Brexit referendum and during recent US elections.
AI – particularly generative AI – is going to make these ‘safety checks’ far harder, as AI becomes far more convincing. Are people prepared to engage with online content that can successfully mimic human-like conversation? Will they be able to tell whether a video or image is real, given the capabilities of AI image generation? “Send me a picture of yourself right now to prove it’s you” could become much easier to fake in the near future than it is currently – and people may not yet understand that that’s coming.
We need to educate ourselves
The challenge, of course, is that, much like with regulation, we don’t yet know all the ways AI could be used over the coming years – and we don’t know all of the potential negative consequences. However, that doesn’t mean an education and awareness process cannot start now.
The risks associated with implementing education practices and campaigns right now are far fewer than the risks associated with taking the wrong approach to regulation.
Government, the private sector and research institutions should be working together to think through some national campaign messages that could help citizens and users better prepare for how AI might present itself online, as well as some best practice approaches for how it could be used.
Some of the things that we should be collectively thinking about as we approach the wider use of AI include:
- What can we learn from recent history that could guide our thinking moving forward? For example, we have already been warned by multiple parties that generative AI is likely to make the spread of misinformation much easier – and more convincing, thanks to the ability to fake photos, videos and other content. What are the signs of an AI-driven disinformation campaign, and how can users become wiser to coordinated messaging?
- Closely linked to the point above, generative AI can create very convincing images, will likely enable convincing fake videos, and will make voice replication technology more easily accessible. Verifying sources is no longer just something journalists need to do – it matters for anyone engaging with digital services. The risk here isn’t just fraud; there are far-reaching implications for democracy, as well as for our personal lives. I don’t have an answer for how we tackle this, but I do believe that at least making people aware of how this technology could present itself would be helpful.
- We should also be educating people on the risks associated with using AI technology, particularly generative AI, in certain situations. For instance, we know students will use the likes of ChatGPT as part of their school or university education – but do students understand where the line is between helpful support from AI and falsifying work? Equally, do people at work know where the line is between using AI-generated content and passing it off as their own? And do people know the harms and risks of sharing fake AI-generated content online? There are real legal and personal consequences to some of these things, and we should be starting to think about educating people to be cautious and thoughtful in their approach.
My take
I have likely missed many examples of how awareness could be raised across populations, and many of the risks we should be thinking about – but the underlying message is unchanged. Whilst we place huge emphasis on AI development and AI regulation, we also need to prioritize awareness and education. We’ve seen the negative consequences for society, economies and democracy when we simply assume people understand how these technologies work – let’s not make the same mistake again.