As regulators and legislators around the world assess their options for responding to the impact of Artificial Intelligence (AI), and try to think through the myriad unforeseen circumstances that could emerge from a rapidly developing technology, one reassurance commonly offered by the ‘powers that be’ is that existing laws and protections should do a pretty good job of protecting individuals in most scenarios. Anti-discrimination laws, equal rights legislation, labor market rules and data protection regulations are cited as robust mechanisms for protecting against poorly designed AI systems.
This argument is so widely made that I’ve previously made it myself on diginomica. But the Ada Lovelace Institute in the UK, a research organization focused on ensuring the benefits of data and AI are justly and equitably distributed, has made a good case for why that’s not true. It’s important to recognize the gaps in current legal frameworks, so that when new legislation is created to protect against AI harms, we know what we are correcting for.
As part of an Ada Lovelace Institute online event, Alex Lawrence-Archer, a solicitor at AWO, walked through a range of UK-specific examples where AI could realistically be in use today and where individuals would likely have limited protection against negative outcomes. This is particularly pertinent given that the UK’s current approach to AI regulation is not to regulate at all, but rather to provide guidance and frameworks about what ‘good practice’ looks like. Lawrence-Archer said:
We’ve been looking at the extent to which current regulation provides effective protection against some theoretical, but quite realistic, near-term harms. It’s certainly not the case that the government’s not going to do anything, but it is very much the case that the government’s not planning to introduce any new legislation.
And a key part of the rationale there is that AI harms are effectively covered by existing regulations and regulators. So we wanted to test that by thinking through what it would actually look like if individuals were subjected to AI harms. Is that true? Are those harms covered by existing regulation?
Simply put, the answer is no.
Testing the rationale
Lawrence-Archer outlined three scenarios for us to consider, scenarios that feel realistic in the context of AI’s current capabilities. These included:
- AI scoring of productive activity and availability for workers on zero-hours contracts in a warehouse. This could mean workers being terminated, having their number of shifts reduced, or having their pay cut after periods of absence, based on algorithmic decisions. Equally, inferences could be made about prospective workers based on the extent to which they resemble current workers. This scenario would also likely mean poorer working conditions, due to the oppressive nature of having your productive availability constantly monitored and analyzed.
- A mortgage lender using an AI tool that biometrically classifies applicants for credit on the basis of their speech patterns. A tool of this nature might discriminate against people with certain accents, whether linked to ethnicity, region, or even speech differences resulting from a disability.
- The Department for Work and Pensions (DWP) introduces an advisory chatbot that tells people about their eligibility for welfare benefits (such as Universal Credit), but the chatbot gives incorrect advice and updates people’s records incorrectly.
Lawrence-Archer said that it’s true that a lot of existing regulation would apply to these scenarios, even when considering AI harms; they aren’t entirely unregulated. For example, the DWP chatbot giving incorrect advice would certainly be in breach of Article 5(1)(d) of the GDPR (the accuracy principle), and would also likely raise issues around assessments of high-risk processing.
Equally, there may also be a legal basis for protection against automated decision-making. For instance, if we assume that in the mortgage scenario there is no rational relationship between someone’s accent and their actual creditworthiness, the practice is likely to be unlawful and will fall under the Equality Act as well as FCA regulations. However, Lawrence-Archer added:
On the surface, you can quite quickly see that yes, it’s true that in many ways these harms are regulated to an extent, in the sense that what the decision makers in these scenarios are doing is unlawful. However, something merely being technically unlawful does not provide effective protection to individuals from AI harms.
And that’s because it’s not just the law itself that comes into play when seeking redress against these harms. He said:
You need a bunch of things to be in place for people to be effectively protected from these kinds of harms, particularly as they multiply and change. You need regulatory requirements about what controllers and decision makers are allowed to do. Those requirements need to be enforced by strong regulators who have powers and are willing to use them.
You need rights of redress, and importantly, the avenues for enforcing those rights have to be realistic for ordinary people. And you need mandated, meaningful, in-context transparency. You need to know, and you need to have a chance to find out, that you’ve been affected by a harm. You don’t always know if you’ve been discriminated against.
And you need to not only know, but you need to be able to evidence the way in which you’ve been harmed, if you’ve got any chance of exercising the right to redress. We saw some gaps on all of these issues.
Significant gaps
One area identified as a cause for concern, for instance, was the enforcement of regulatory requirements. Take, for example, the zero-hours productivity scoring scenario: there may be regulatory requirements in relation to using technology in that way, but in the UK there’s no dedicated regulator for employment or recruitment. There are cross-cutting regulators, such as the Equality and Human Rights Commission and the Information Commissioner’s Office (ICO), but they face real limitations in relation to their resources. Lawrence-Archer said:
They’re effectively required to regulate decision making across the whole of society and the economy. They face constraints in terms of the sources of information they have about controllers’ and decision makers’ compliance. And they’ve got limitations on their powers as well.
And there’s some evidence as well that the ICO, in particular, uses its powers less frequently than similar regulators in other European jurisdictions. So there’s a gap there in terms of whether those unlawful uses of technology are actually going to be picked up and prevented in advance.
Equally, it may not be realistic for an individual to enforce those rights themselves. Rights to redress arising from the GDPR, for instance, have to be enforced in the civil courts, and that’s not something that’s easy for an ordinary person to do. Lawrence-Archer said:
Often you might need advice, and representation is costly. You might need to fund that through a damages-based agreement, for example. There’s a significant risk that you might be made to pay the other party’s costs if you lose. And it’s really slow.
So the fact that you can, in theory, enforce your GDPR rights in the civil courts isn’t necessarily helpful for an ordinary person in every circumstance, even in cases where you don’t have to go to the civil courts.
In the warehouse example, you might be taking a discrimination claim to the employment tribunal. Even in that scenario, you’re likely to need representation. And it’s really important to bear in mind that you might have a right to complain to the ICO in relation to GDPR breaches, but that does not give you a right to redress. The ICO can’t order a controller to pay you compensation, and actually the substantive requirements on the ICO in response to a complaint are very limited.
A lack of transparency
It’s also important to highlight that not all of the harms are protected against. Lawrence-Archer pointed to the biometric credit scoring tool assessing speech patterns as an example, noting that you’re likely to have much better protection if you’ve got an accent that’s linked to your ethnicity, as that’s likely to constitute indirect discrimination for the purposes of the Equality Act, given racial origin is a protected characteristic. However, you wouldn’t have the same protection if you were getting a low score because you had a regional accent. He added:
Similarly, if we look at the warehouse productivity example, if you’re an employee and you have two years’ continuous service, that brings with it a bunch of employment rights, which gives you much greater protection in the case of a termination by the algorithm, protection which many other people affected by that harm wouldn’t benefit from. So you can see some quite stark differences depending on individual circumstances.
Most importantly, however, and uniformly across all of the scenarios, Lawrence-Archer argues there are real limitations on the extent to which the law requires meaningful, in-context transparency when these harms manifest. He explains:
The GDPR does require transparency, and it’s often thought that this means you have a right to have an AI decision explained to you. But the GDPR does not, unfortunately, give you a right to an explanation of AI decisions.
And there are significant limitations to even the basic transparency requirements, where they involve disproportionate effort, for example, or where they might affect the rights of others, which controllers have interpreted as allowing them to limit transparency if it might threaten their commercial property.
And so across the three scenarios, even in the one with the best level of protection because the Financial Conduct Authority is involved, we really saw the lack of transparency being a real barrier to the likelihood of people finding out that something’s gone wrong and doing something about it. That was a major issue that we found.
This is a particular problem because, where a decision has been made using an AI tool, an individual might be even less likely to question it. So there’s a double issue there: you’re maybe less likely to question the decision, and you don’t necessarily have the rights you need to find out when something’s gone wrong.
My take
This is very useful context – with helpful examples – to refute claims that we as individuals are protected from AI harms by current legislation. It’s clear that we are not. And that’s particularly true when there’s a lack of transparency around how AI decisions are made. For AI to be successful and useful for all, we need protections – and we need them quickly.