Last year was the first full year of the UK Government’s AI Strategy being in play, a document that set out to grow the sector, boost GDP and investment, spread the benefits to everyone, protect national security, and encourage greater diversity in what is still a white-male-dominated business.
So, what are its key achievements to date? And is the strategy delivering on its aims?
According to the Office for AI, a number of internal enhancements were made to the UK market last year, in terms of policy and governance at least.
For example, in July the government published its first AI Action Plan, outlining what it saw as its key national priorities: investing in the long-term needs of the AI ecosystem, ensuring that AI benefits all sectors and regions, and establishing effective governance of the technology.
It also published The Future of Compute Review, and an AI regulation policy paper, which drew feedback from a wide range of stakeholders.
Meanwhile, the UK’s new AI rulebook proposed a “pro-innovation framework” for regulating the technology, echoing other Whitehall announcements about the preferred role of regulators in the post-Brexit world.
For this UK administration, the stated focus is always on enabling growth and innovation – as the country enters what many believe will be a protracted recession – rather than protecting citizens from private-sector overreach. The latter will be more the remit of the new Digital Markets Unit (DMU) at the Competition and Markets Authority (CMA), though its focus seems limited to reining in the handful of Big Tech behemoths, such as Alphabet, Microsoft, Meta, Apple, and Amazon.
However, with a technology such as AI, which demands public confidence and consumer buy-in, that cross-governmental shift of focus may run counter to the critical need to build trust among both users and data subjects. While the public seems happy to embrace some aspects of AI – many enjoy playing with tools like ChatGPT or AI image-generators like Stable Diffusion and Midjourney – few commentators would argue that their trust in institutions or in the safe, ethical use of data is high.
On that point, the UK’s new Centre for Data Ethics and Innovation (CDEI) – the first organisation of its type in the world – published its ‘Industry Temperature Check: Barriers and Enablers to AI Assurance’ report.
Speaking in 2022, CDEI Deputy Director Louise Sheridan explained that the ethical dimension of AI is more complex than isolated cases of poor data meeting bad algorithms in the service of tactical gains:
“New forms of decision-making have surfaced numerous examples where algorithms have either entrenched or amplified historic biases, or indeed created new forms of bias or unfairness. Action-steps to anticipate risks and measure outcomes are needed to avoid this.
“But organisations often lack the information and ability to innovate with AI in ways that will win and retain public trust. This means that organisations often forego innovations that would otherwise be economically or socially beneficial.”
So, the message is that users of the technology seem aware of the need to build trust among the public.
The investment side does too. In June, UKRI (UK Research and Innovation, the body that includes the EPSRC, Innovate UK, and seven other organisations) launched its own research programme on AI ethics and regulation in partnership with the Ada Lovelace Institute. The £8.5 million initiative aims to build public trust in AI and data-driven technologies “where it is merited” – UKRI’s words, not mine.
Also in 2022, the Alan Turing Institute launched the AI Standards Hub in collaboration with the BSI (the British Standards Institution) and the National Physical Laboratory (NPL, the UK’s national metrology institute). It was a worthy if naive move, given the might of Big Tech, and one supported by the Department for Digital, Culture, Media and Sport (DCMS).
The latter is already onto its seventh Secretary of State since 2016, and its third since the AI Strategy was announced in September 2021. This endless game of Whitehall musical chairs, which included three Prime Ministers last year, does the UK no favours. Policy and emphasis have changed from leader to leader, making real progress more challenging to achieve.
So, what about the all-important area of AI and data analysis skills? As Tabitha Goldstaub, Chair of the UK Artificial Intelligence Council, put it last Spring:
“We don’t believe that the AI Strategy will work until we have a nation of conscious, confident consumers of the technology, as well as a diverse workforce capable of building AI that benefits not just the company, but also the economy and society as a whole. This means we need to have a population that can do three things: build AI, work with and alongside AI, and live and play with AI.”
The Office for AI lists among the government’s achievements last year the £117 million it put towards 1,000 new PhDs in AI, and the £17 million invested in 2,000 new AI and data science conversion course scholarships. These are all well and good, but the focus on academia rather than upskilling the average data worker may not be the best approach.
A number of reports last year pinpointed the dearth of AI and data analysis skills in UK business. Meanwhile, an IBM research study found that only one-third of UK companies have accelerated their use of AI in the past two years, compared with a European average of 49%. More than one-third (36%) of UK companies stalled their AI investments during the survey period.
Another strand of the Strategy is defence of the realm and national security, where a number of recent announcements have been made.
For example, the Ministry of Defence (MoD) has launched the Defence AI Centre to accelerate the technology’s adoption across the armed forces, in accordance with the dedicated Defence AI Strategy. It has also published a policy statement, ‘Ambitious, Safe, Responsible: Our approach to the delivery of AI-enabled capability in Defence’.
Confusingly, the Defence Science and Technology Laboratory (DSTL) opened the similarly named AI Research Centre for Defence last year, in partnership with the Alan Turing Institute.
Meanwhile, DSTL and the US Air Force Research Laboratory carried out the first deployment of their jointly developed AI toolbox in two military exercises that took place in the run-up to Christmas. The aim is to allow armed forces to rapidly select the best available AI tools for specific mission needs.
Elsewhere, in the environmental sector, there has been a £1.5 million contribution from the Department for Business, Energy and Industrial Strategy (BEIS) towards the AI for Decarbonisation programme, to help speed the development of new technologies to reduce emissions, plus £1.2 million towards the Net Zero Data Space for AI Applications.
Finally, in healthcare, stroke patients are among those benefitting from quicker treatment and improved outcomes with AI that aids diagnosis and helps determine the best treatment, according to an announcement from the Government, which stated:
“Early-stage analysis of the technology, which received funding from the first round of the Government’s AI in Health and Care Awards, shows it can reduce the time between presenting with a stroke and treatment by more than 60 minutes, and is associated with a tripling in the number of stroke patients recovering with no or only slight disability – defined as achieving functional independence – from 16% to 48%.”
Cancer care is also receiving a boost from the technology, which can speed up the process of looking at scans to detect growths and lesions.
A mixed bag of progress in the Strategy’s first year: some worthy initiatives and bold aims, but many achievements are in administration and high-end institutional awareness rather than speeding deployment to solve real-world problems in business. Feathers in caps and window dressing? Perhaps. So, the real test will be in GDP and economic growth: things that can be measured rather than merely talked about.