Throughout the recent political turmoil in Downing Street, the civil service has continued to develop policy on artificial intelligence, including by building on its 2021 National AI Strategy with a more detailed policy paper (titled ‘Establishing a pro-innovation approach to regulating AI’) that gives further guidance on the intended approach. This policy paper is of domestic and international importance, as the new Prime Minister Rishi Sunak looks to capitalise on the UK’s AI potential at a time when governments worldwide grapple with how to regulate the emerging technology.
Notably, the tone of the paper, as introduced by then-ministers Nadine Dorries and Kwasi Kwarteng, emphasises a ‘proportionate, light-touch and forward-looking’ attitude that looks to ‘unleash growth and innovation’ in the field. The framing of an AI policy as light touch should raise immediate concerns, as it potentially signals an underestimation of the severity and the scale of the risks posed by the technology. This matters both to ensuring a desirable future, and because algorithms are already affecting people’s everyday lives. Ironically, the UK government should be more than aware of these effects, given the public outcry about the biased algorithm used to award A-Levels in 2020 and its backtrack over the Home Office’s use of an allegedly racist algorithm to process visa applications.
Substantively, the policy paper is also interesting because of its sectoral approach. Rather than setting up one central regulator, the UK would address the issues raised by AI within their particular industry, for example in finance or education. If implemented properly, this approach could provide a useful example of how to deal with some of the wide-ranging challenges raised by technology.
Why is this policy paper important?
As Rishi Sunak begins another administration in Downing Street, journalists’ tendency to focus on his immediate pressures can crowd out any discussion of longer-term concerns. Yet, his government will preside over a crucial moment for UK technology policy, as the country attempts to define a post-Brexit approach distinct from the European Union. The UK’s steps should be of international interest, too, given the country’s global importance in AI development. Its universities still attract talent from around the world, and its membership in intergovernmental organisations allows the UK to have an important influence beyond its borders.
Given his actions as Chancellor, there is also reason to think that Sunak may play an active part in technology regulation. Earlier this year, he promoted a ‘forward-looking approach’ to the blockchain as part of a broader ‘ambition to make the UK a global hub for crypto asset technology’. In April, he even asked the Royal Mint to pilot support for a (since shelved) non-fungible token. A focus on emerging technologies may also allow Sunak to revive some of the pro-growth impetus of his predecessor, serving as a politically convenient counterbalance to the return of austerity rhetoric. A taste of this can be found in last year’s Conservative party conference speech, where an announcement about AI scholarships was said to ensure the country’s status as a ‘world leader’ in AI and to stand as ‘a sign of our ambition for the future’. Though this ambition is admirable, Sunak’s government should also recognise that the unique threats posed by AI make its proper regulation particularly important to bringing any desirable future to fruition.
Is a light touch approach necessarily pro-innovation?
The primary danger of a light touch approach fixated on the short-term economic potential of AI is that it may be reluctant to give regulators enough power to deal effectively with its harms. Algorithmic decision-making plays an increasingly important role in modern life, with prominent troubling examples including photo recognition software seemingly discriminating against people of colour and hotel recommendation engines showing Mac users higher prices. At best, the policy paper’s framing might stem from naivety about these issues. At worst, the government’s rhetoric could be seen as deliberately incentivising unethical development, implicitly framing the UK as a safe space to try out new techniques and approaches free from the regulatory oversight present in the EU.
The government’s presumed response to this point would be to highlight the downsides of burdening businesses with more bureaucratic regulations. The policy paper warns of the dangers of ‘placing unnecessary barriers in the way’ and claims that its approach ‘drives new investment’. However, authors in the ethical AI space have recently begun to challenge the assumption that thinking about ethics is necessarily anti-business. There are cases in which an AI acting unethically causes economic harm, such as when a screening AI that analyses job interviews for signs of ‘employability’ is biased against certain applicants. In such cases, there is nothing innovative about choosing the wrong candidate, where a better choice is available on merit and where their hiring would be of overall benefit to the economy.
Furthermore, at a societal level, unregulated AI can exacerbate larger negative trends that harm business. In his 2019 book Human Compatible, Stuart Russell highlighted that social media algorithms often end up looking to maximise user engagement (rather than intellectual curiosity or user satisfaction), leading to the spread of extreme political views. Platforms like YouTube and TikTok have since been linked to cases of violence and extremism, and, longer-term, could sow division and anger that threatens the UK’s democracy and political stability. In addition, the amount of energy used by large AI models can harm the environment. Even if light touch regulation might seem initially appealing, trends like growing division and accelerated climate change could end up making the UK less attractive for investment.
How can future regulation promote ethical AI innovation?
Part of the reason the policy paper remains interesting, however, is that its sectoral approach has the potential to allow regulation to be more agile in responding to problems in individual areas. There are unique challenges raised by AI’s use in healthcare (such as the implications of greater automation for the doctor-patient relationship) that domain experts could tackle most effectively, for example.
However, the potential upsides of utilising the expertise of multiple regulators can only be realised if they can work together effectively. As the policy paper moves towards implementation, it is important that existing bodies, like the Information Commissioner’s Office and the Financial Conduct Authority, are empowered to address the problems AI poses. The fast pace of innovation in the field may require the government to develop its horizon-scanning abilities and explore a more fluid relationship with regulators. In addition, sectoral regulation needs to cohere without leaving gaps, and oversight bodies should be encouraged to share best practices, such as through the Digital Regulation Cooperation Forum. Though these more technical and less stirring issues are unlikely to be anchoring party conference speeches anytime soon, they are the ones that Sunak’s government should look to get right, to help develop AI policy that truly promotes innovation for all.