The need for legal clarity on the use of artificial intelligence
Tuesday 16th February 2021
The rise of artificial intelligence continues to pose questions about protection and privacy, from facial recognition to concerns over how AI-fuelled algorithms handle personal data. Our digital and technology expert Ryan Gracey examines why the law needs to develop in line with the technology.
Artificial intelligence (AI) is big business. From increasing productivity in industry and improving customer service via chatbots to making our streets safer, the possibilities seem endless. The technology is moving at a rapid pace: one estimate put the global market value at $62.4 billion in 2019 and expects it to grow by around 42 per cent a year to a staggering $733.7 billion by 2027.
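To put those figures in perspective, they imply the market multiplying almost twelvefold over the period (733.7 ÷ 62.4 ≈ 11.8), which is broadly what compounding at 42 per cent a year for around seven years produces (1.42⁷ ≈ 11.6).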
This growth is clearly delivering significant benefits to businesses, governments and societies worldwide, but it is not without its concerns. AI can be extremely invasive: some systems can predict and build a detailed profile of an individual’s present and likely future behaviour, often without the individual even knowing it is happening.
A number of legal challenges have already been raised over the use of AI technologies, and there are strong calls to introduce new laws as adoption increases. At the time of writing, however, there remains a disparity between the potential uses of AI and the legal protection afforded to consumers and businesses to ensure compliant and ethical use.
Recapping privacy and copyright challenges
One of the most significant cases in the UK came in August 2020, when the Court of Appeal ruled that South Wales Police had not met its obligations in respect of privacy, data protection and anti-discrimination law when using automated facial recognition (AFR) technology in public places.
There have also been copyright concerns over the use of deepfake technology, which can make celebrities appear to sing songs or say things they never did. Elsewhere, Facebook reportedly abandoned an experiment in 2017 after two AI chatbots appeared to be conversing in a language that only they understood.
It is certainly not my intention to scaremonger or to ridicule the use of AI, which has unquestionable benefits. In December 2020, machine learning was deployed in the defence of a suspect in a murder trial at the Old Bailey, when lawyers used the technology to analyse more than 10,000 documents far more quickly than human review would allow, searching for patterns and connections that might otherwise have gone unnoticed. It was the first known use of the technology at the Old Bailey and another indication of its potential to enhance productivity and accuracy.
However, as with any developing technology, dedicated legislation is needed to ensure that widespread implementation is carried out compliantly and consistently. Despite its growing adoption, AI – and facial recognition technology in particular – remains an anomaly in this respect, with no single piece of legislation regulating its use. Depending on who is using AI and why, its use may engage the Human Rights Act, data protection, privacy and surveillance laws, as well as other legislation, resulting in a complex web of considerations for any organisation looking to deploy it.
The need for legislation
Facial recognition has become one of the most talked-about technologies, not least because of its potential reach across society compared with, for example, machine learning tools that monitor an employer’s workforce.
Around the time of the South Wales Police case, Elizabeth Denham, the UK Information Commissioner, expressed deep concerns over the use of facial recognition technology in public spaces.
The judgment found that there was insufficient guidance on the technology’s use, that the force’s data protection impact assessment was deficient, and that the force had not taken reasonable steps to find out whether the software had a racial or gender bias. Mrs Denham recommended that the Government introduce a statutory and binding code on the deployment of facial recognition, and said that work was needed from a range of organisations – including the police, the Government and the technology’s developers – to address these issues and set parameters for how the technology should be used.
The ruling was a useful step towards that objective. The UK Government has also formed the Office for Artificial Intelligence and recently announced a £20m investment in AI research, but there is still much work to do in this area. Ultimately, a collaborative approach to usage guidance is required across all AI technologies if we are to maximise their potential safely, reliably and transparently, without discriminating against any group or compromising privacy. Without it, there remains a risk that nervous employers will stall the deployment of AI technologies that might otherwise deliver significant benefits to industry.
To learn more about our expertise in the digital & technology sector, please visit the pages below.