
ICO launches guidance on AI and data protection

The Information Commissioner’s Office (ICO) has published guidance aimed at making the application of machine learning to data compliant with data protection principles

By Brian McKenna, Business Applications Editor

Published: 31 Jul 2020 16:00

The Information Commissioner’s Office (ICO) has published an 80-page guidance document for companies and other organisations about using artificial intelligence (AI) in line with data protection principles.

The guidance is the culmination of two years of research and consultation by Reuben Binns, an associate professor in the Department of Computer Science at the University of Oxford, together with the ICO’s AI team.

The guidance covers what the ICO considers “best practice for data protection-compliant AI, as well as how we interpret data protection law as it applies to AI systems that process personal data. The guidance is not a statutory code. It contains advice on how to interpret relevant law as it applies to AI, and recommendations on good practice for organisational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate”.

It seeks to provide a framework for “auditing AI, focusing on best practices for data protection compliance – whether you design your own AI system, or implement one from a third party”.

It comprises, it says, “auditing tools and procedures that we will use in audits and investigations; detailed guidance on AI and data protection; and a toolkit designed to give further practical support to organisations auditing the compliance of their own AI systems”.

It is also an interactive document that invites further communication with the ICO.

The guidance is said to be aimed at two audiences: “those with a compliance focus, such as data protection officers (DPOs), general counsel, risk managers, senior management, and the ICO’s own auditors; and technology specialists, including machine learning experts, data scientists, software developers and engineers, and cyber security and IT risk managers”.

It singles out two security risks that can be exacerbated by AI, namely the “loss or misuse of the large amounts of personal data often required to train AI systems; and software vulnerabilities to be introduced as a result of the introduction of new AI-related code and infrastructure”.

For, as the guidance document points out, the standard practices for developing and deploying AI involve, by necessity, processing large amounts of data. There is therefore an inherent risk that this fails to comply with the data minimisation principle.

This principle, under the GDPR [the EU General Data Protection Regulation] as glossed by former Computer Weekly journalist Warwick Ashford, “requires organisations not to hold data for any longer than is absolutely necessary, and not to change the use of the data from the purpose for which it was originally collected, while – at the same time – they must delete any data at the request of the data subject”.

While the guidance document notes that data protection and “AI ethics” overlap, it does not seek to “provide generic ethical or design principles for the use of AI”.

AI for the ICO

What is AI, in the eyes of the ICO? “We use the umbrella term ‘AI’ because it has become a standard industry term for a range of technologies. One prominent area of AI is machine learning (ML), which is the use of computational techniques to create (often complex) statistical models using (typically) large quantities of data. Those models can be used to make classifications or predictions about new data points. While not all AI involves ML, most of the recent interest in AI is driven by ML in some way, whether in image recognition or other applications.

“The guidance therefore focuses on the data protection challenges that ML-based AI may present, while acknowledging that other kinds of AI can give rise to other data protection challenges.”
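The ML pattern the ICO describes (a statistical model fitted to training data, then used to classify new data points) can be illustrated with a deliberately minimal sketch in pure Python. The data, labels and nearest-centroid method here are hypothetical and purely illustrative; a real system would use an ML library and far more data.

```python
# Minimal illustration of the pattern the ICO describes:
# fit a statistical model on training data, then use it to
# classify previously unseen data points.
# (Hypothetical toy data; not drawn from the ICO guidance.)

def fit_centroids(samples, labels):
    """Compute the mean feature vector (centroid) per class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda y: dist2(centroids[y], x))

# Train on a few labelled points, then classify new ones.
train_x = [[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [4.8, 5.2]]
train_y = ["low_risk", "low_risk", "high_risk", "high_risk"]
model = fit_centroids(train_x, train_y)
print(predict(model, [1.1, 0.9]))   # → low_risk
print(predict(model, [5.1, 4.9]))   # → high_risk
```

Even at this scale, the data protection questions the guidance raises are visible: the model is only as representative as the personal data used to train it.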

Of special interest to the ICO is the concept of “explainability” in AI. The guidance continues: “In collaboration with the Alan Turing Institute we have produced guidance on how organisations can best explain their use of AI to individuals. This resulted in the Explaining decisions made with AI guidance, which was published in May 2020”.

The guidance includes commentary on the distinction between a “controller” and a “processor”. It says “organisations that determine the purposes and means of processing will be controllers regardless of how they are described in any contract about processing services”.

This could be relevant to the controversy surrounding the involvement of US data analytics company Palantir in the NHS Data Store project, where it has been repeatedly stressed that the supplier is merely a processor and not a controller; the controller in that contractual relationship is the NHS.

Biased data

The guidance also discusses such matters as bias in data sets leading to AIs making biased decisions, and offers this advice, among other pointers: “In cases of imbalanced training data, it may be possible to balance it out by adding or removing data about under/overrepresented subsets of the population (eg adding more data points on loan applications from women).

“In cases where the training data reflects past discrimination, you could either modify the data, change the learning process, or modify the model after training”.
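The first mitigation quoted above, adding data points for an underrepresented subset, is commonly implemented as random oversampling. The following is a minimal sketch in pure Python with hypothetical loan-application records; real pipelines typically use a dedicated library such as imbalanced-learn.

```python
import random

def oversample(records, group_key, seed=0):
    """Balance a dataset by resampling (with replacement) every
    group up to the size of the largest group."""
    rng = random.Random(seed)
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec)
    target = max(len(g) for g in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Top up underrepresented groups with random duplicates.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Hypothetical loan applications, with women underrepresented 1:3.
applications = (
    [{"sex": "male", "approved": True}] * 60
    + [{"sex": "female", "approved": True}] * 20
)
balanced = oversample(applications, "sex")
counts = {}
for rec in balanced:
    counts[rec["sex"]] = counts.get(rec["sex"], 0) + 1
print(counts)   # → {'male': 60, 'female': 60}
```

Note that duplicating records multiplies the processing of personal data, so rebalancing of this kind itself has to be weighed against the data minimisation principle discussed above.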

Simon McDougall, deputy commissioner of regulatory innovation and technology at the ICO, said of the guidance: “Understanding how to assess compliance with data protection principles can be challenging in the context of AI, from the exacerbated, and novel, security risks that come with it, to the potential for bias and discrimination in the data. It is hard for compliance experts and technology specialists to navigate their way to AI systems that are compliant and workable.

“The guidance contains recommendations on best practice and technical measures that organisations can use to mitigate the risks caused or exacerbated by the use of this technology. It is reflective of current AI practices and is practically applicable.”

