ICO’s guidance on the use of artificial intelligence

Olivia Satchel

Lawyer

Artificial intelligence (AI) is gaining momentum in both private and professional life. ChatGPT, in particular, is on everyone’s lips. The tool can be used to do homework, write professional articles, or even blog posts like this one. Spoiler: this text does not come from ChatGPT but from us, although it is quite conceivable that ChatGPT could have written it.

The Information Commissioner’s Office (ICO) has responded to this development by updating its Artificial Intelligence Guidance and has announced that it will provide further updates in line with technical developments in the field. The guidance is intended to explain to companies the data protection aspects of, and the steps involved in, both the development and use of AI (see this link).

Why guidance on AI?

The ICO published the guidance to help organisations mitigate data protection risks and to explain how data protection principles apply to the use of AI in an organisation or business environment. The guidance not only draws on the United Kingdom General Data Protection Regulation (UK GDPR) but also on broader data protection principles embedded in additional UK data protection law.

The guidance focuses on AI decision-making, which often raises the issue of the so-called black box, i.e., decisions reached by AI systems whose inner workings are inaccessible to ordinary human understanding. This is often the case when machine learning algorithms are used to analyse large data sets to identify patterns and correlations, as in the example of ChatGPT. The developer may not know in advance which data will be interpreted and how variables will be analysed.

This triggers several conflicts with data protection principles. Transparency of the decision-making process is called into question: if it is unclear what the AI system is doing and how it uses an individual’s data, data subjects cannot effectively exercise their rights. Other concerns include the quality and security of the data entered into an AI system, as well as the accountability of the designer or operator of the AI system.

Data protection principles in the AI context

The accountability principle makes data controllers and processors responsible for complying with data protection law and for demonstrating that compliance. In the context of an organisation using AI systems, you are required to

  • be responsible for the compliance of the AI system,
  • assess and mitigate its risks, and
  • document and demonstrate how the system is compliant and justify the choices you have made.

It is also highly important to consider your role in the use of an AI tool, meaning you need to assess and document whether you are a controller or a processor. If you would like to use a third party’s AI tool, you should carry out a documented review of it so that you can demonstrate how it implements privacy by design.

In most cases, data controllers will be legally required to conduct a data protection impact assessment (DPIA) for the use of an AI system if personal data is processed. Conducting a DPIA will ensure that the reasons for and use of an AI system to process personal data are considered as well as the potential risks and how they can be mitigated.

To ensure that data protection and other fundamental rights are respected in the context of AI systems, you need to identify and assess, at the design or development stage, which rights might be at stake. Then, determine how you can manage them in view of the purposes of your processing and the risks it poses to the rights and freedoms of individuals, and make sure that all data protection principles are complied with.

Most of the updates to the ICO guidance concern the chapters on lawfulness, fairness and transparency. As transparency is key to every processing of personal data, the ICO has issued separate guidance on this topic, called the Explaining decisions made with AI guidance (see this link).

When an organisation uses AI systems, it must always ensure that the principles of lawfulness, fairness, and transparency are fulfilled in the processing of personal data. The black box issue makes this particularly challenging for data controllers: even developers and designers often cannot predict how an AI system reaches its decisions, and without transparency, fairness cannot be guaranteed.

The new guidance can assist you in developing GDPR-compliant information for data subjects in accordance with Art. 13 and 14 UK GDPR when using AI, as it describes, in three parts, the basics of explaining AI in general, explaining AI in practice, and what those explanations mean for your company.

Further, a lawful basis must be established whenever personal data is processed. In many cases, consent that is freely given, specific, informed, unambiguous and given by a clear affirmative act on the part of the individual can be the appropriate legal basis, if you have a direct relationship with the data subjects. In AI systems, however, the particular purposes for which data will be used cannot always be fully predicted, as the processes are not entirely transparent. Furthermore, individuals must always be able to withdraw consent, which can become complicated once personal data has been fed into an AI system.
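
To make the withdrawal point concrete, here is a minimal sketch, in Python, of how a consent check might gate what goes into an AI system. The consent register, identifiers and field names are hypothetical assumptions for illustration, not part of the ICO guidance:

```python
from datetime import datetime, timezone

# Hypothetical consent register: data subject ID -> withdrawal timestamp (or None).
# In practice this would be backed by your organisation's consent-management system.
consent_withdrawn_at = {
    "subject-001": None,
    "subject-002": datetime(2023, 5, 1, tzinfo=timezone.utc),
}

def may_process(subject_id: str) -> bool:
    """Return True only if the subject gave consent and has not withdrawn it."""
    if subject_id not in consent_withdrawn_at:
        return False  # no recorded consent at all
    return consent_withdrawn_at[subject_id] is None

# Filter a batch of records before feeding them into an AI system.
records = [{"subject_id": "subject-001", "text": "..."},
           {"subject_id": "subject-002", "text": "..."}]
usable = [r for r in records if may_process(r["subject_id"])]
print(len(usable))  # -> 1: subject-002 withdrew consent and is excluded
```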

The legal bases of performance of a contract or compliance with a legal obligation can only be valid if the use of AI is strictly necessary. In general, the same outcome can be achieved through processing activities that do not involve AI. These legal bases can therefore be relied on only in very limited cases, if at all.

If you rely on the legal basis of overriding legitimate interests, you must be able to demonstrate those interests, which is best done by conducting a legitimate interests assessment (LIA). However, as you must balance the interests of the data subjects against your own, in the AI context the interests of the individuals will often outweigh those of the company using AI.

Whenever you process personal data, you always need to take fairness into account. Processing is fair when you use personal data only in ways that individuals can reasonably expect and when it does not have discriminatory effects, in particular none that infringe human rights. As AI is complex and can easily produce discriminatory outcomes without any intention on the developer’s part, privacy by design and by default are key. A DPIA is a great tool for developing the necessary measures in this regard and ensuring a safe and compliant AI system, as it allows you to identify potential risks and the harm they pose to individuals and to respond with appropriate, state-of-the-art measures.
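
As an illustration of the kind of fairness testing a DPIA might prompt, here is a minimal sketch that compares favourable-outcome rates across groups. The group labels, the data and the 20% threshold are assumptions for illustration; the ICO guidance does not prescribe a specific test:

```python
from collections import defaultdict

# Hypothetical decision log: (group label, favourable decision?) pairs.
# In practice these would come from your own fairness-testing data.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Count favourable decisions and totals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
for group, favourable in decisions:
    counts[group][1] += 1
    if favourable:
        counts[group][0] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}

# Flag a potential fairness problem if rates diverge beyond a chosen threshold.
if max(rates.values()) - min(rates.values()) > 0.2:  # threshold is an assumption
    print("Warning: outcome rates differ markedly between groups; investigate.")
```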

Regarding accuracy, it is necessary to highlight that accuracy in data protection differs from statistical accuracy in the AI context. Accuracy in data protection is one of the principles of Art. 5 UK GDPR, requiring you to ensure that personal data is accurate and, where necessary, kept up to date. In AI, accuracy describes how often the AI tool guesses the correct answer.

To ensure accuracy under the UK GDPR, you need to test your AI system’s statistical accuracy and implement the system in such a way that its outputs are treated as statistically informed guesses rather than facts. Only when the AI is accurate can you process personal data in a fair, lawful and accurate manner, so it is important to take the right measures. The guidance gives examples of such measures for AI processing, for instance in the detection of phishing emails.
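
To illustrate the difference between the two notions, here is a minimal sketch of statistical accuracy in the AI sense, using the phishing example mentioned in the guidance. The labels and the idea of attaching a measured error rate to each output are our illustrative assumptions:

```python
# Statistical accuracy in the AI sense: the share of predictions that match
# the known correct answers on a labelled test set.
predictions  = ["phishing", "legitimate", "phishing", "legitimate", "phishing"]
ground_truth = ["phishing", "legitimate", "legitimate", "legitimate", "phishing"]

correct = sum(p == t for p, t in zip(predictions, ground_truth))
accuracy = correct / len(ground_truth)
print(f"Statistical accuracy: {accuracy:.0%}")  # -> 80%

# Treat each output as a guess, not a fact: record it as a prediction with
# the measured error rate attached, rather than as an established attribute.
flagged = [{"prediction": p, "estimated_error_rate": 1 - accuracy}
           for p in predictions]
```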

Art. 22 UK GDPR, which applies to all automated individual decision-making and profiling, introduces additional rules to protect individuals from being subjected to a decision based solely on automated processing that has legal or similarly significant effects on them. Such a decision is only allowed if the processing is (1) necessary for entering into or performing a contract, (2) authorised by law, or (3) based on valid consent. Thus, to comply with Art. 22 UK GDPR, you must ensure that you

  • give individuals information about the processing,
  • introduce simple ways for them to request human intervention or challenge a decision, and
  • carry out regular checks to make sure your systems are working as intended.

Not every AI system automatically falls within the scope of Art. 22 UK GDPR. To evaluate whether your AI tool performs automated decision-making as defined there, you have to assess what kind of decision is made, at what point human involvement takes place, and the context of the decision. When AI falls within this scope is described in more detail in the ICO’s additional guidance on Art. 22 UK GDPR (see this link).
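
As a purely illustrative sketch of what this evaluation can look like in an implementation, the following hypothetical routing logic sends decisions that would be both solely automated and legally significant to a human reviewer. The class and field names are our assumptions, not terms from the guidance:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    legally_significant: bool   # e.g. credit refusal, job rejection
    solely_automated: bool      # no meaningful human involvement so far

def route(decision: Decision) -> str:
    """Illustrative routing: decisions that would fall under Art. 22 UK GDPR
    are sent to a human reviewer instead of being applied automatically."""
    if decision.solely_automated and decision.legally_significant:
        return "human_review_queue"
    return "automated_pipeline"

print(route(Decision("subject-007", "refuse_credit", True, True)))
# -> human_review_queue
```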

AI systems usually process large amounts of data. The ICO points to the risk of failing to comply with the principle of data minimisation. Accordingly, you may only process the personal data that you actually need for the AI system; that is, you must consider whether the data you process in an AI system is “adequate, relevant and limited” to what is necessary. Especially in the context of machine learning, where AI is trained by processing large amounts of data, complying with the data minimisation principle can be challenging.
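
A minimal sketch of data minimisation in practice might look like the following, where only an allow-list of necessary fields is passed to the AI system. The field names and the allow-list are illustrative assumptions:

```python
# Data minimisation: pass only the fields the AI system actually needs.
# The field names and the allow-list below are assumptions for illustration.
ALLOWED_FIELDS = {"age_band", "postcode_district"}  # "adequate, relevant and limited"

def minimise(record: dict) -> dict:
    """Drop every field that is not strictly necessary for the model."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {"name": "Jane Doe", "email": "jane@example.com",
       "age_band": "30-39", "postcode_district": "SW1A"}
print(minimise(raw))  # -> {'age_band': '30-39', 'postcode_district': 'SW1A'}
```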

Data security is also a major consideration when it comes to AI. AI systems expose personal data to several types of attack, which must be mitigated as far as possible. Possible measures include continuous review of the data, blocking of accounts, and rate limiting, where you cap the number of requests within a given time period.
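
As a sketch of the rate-limiting measure mentioned above, the following minimal sliding-window limiter caps the number of requests per account within a time window. The limit values chosen are arbitrary illustrations, not figures from the guidance:

```python
import time
from collections import deque

class SlidingWindowRateLimiter:
    """Allow at most `max_requests` per `window_seconds` for each account."""

    def __init__(self, max_requests: int = 10, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.history: dict[str, deque] = {}

    def allow(self, account_id: str) -> bool:
        now = time.monotonic()
        timestamps = self.history.setdefault(account_id, deque())
        # Discard requests that have fallen out of the time window.
        while timestamps and now - timestamps[0] > self.window_seconds:
            timestamps.popleft()
        if len(timestamps) >= self.max_requests:
            return False  # over the limit: reject, or escalate to a block
        timestamps.append(now)
        return True

limiter = SlidingWindowRateLimiter(max_requests=3, window_seconds=1.0)
print([limiter.allow("acct-1") for _ in range(5)])
# -> [True, True, True, False, False]
```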

Rights of data subjects in the context of AI

If you use data that falls under special category data, such as ethnic, racial or religious information, additional conditions may apply under the UK GDPR and the Data Protection Act 2018 (DPA 2018). This means that explicit consent may be required to process special category data in an AI system.

If you process personal data in an AI system to make decisions or predictions, you must ensure that suitable measures are in place to safeguard data subjects’ rights and freedoms. According to Art. 22(3) UK GDPR, you must ensure that a data subject can

  • obtain human intervention in decision-making,
  • express their point of view, and
  • contest decisions.

If the lawful basis for processing is consent, data subjects must be able to withdraw that consent at any time. If there is no other legal basis to justify the processing, controllers must stop it immediately. Furthermore, as envisaged in Recital 71 UK GDPR, you must ensure that inaccuracies are corrected, errors are prevented, and discriminatory or biased effects of the AI system are avoided.

To fulfil these obligations when processing personal data in AI systems, controllers should adhere to the following best practices to ensure compliance with data protection principles.

Best practices for data protection compliance in AI systems

The ICO focuses its recommendations on organisations that use AI systems, helping them make sure they comply with their data protection obligations. A key challenge is justifying the output produced by AI systems; this is essential for controllers to be able to fulfil their obligations towards data subjects and to comply with the principles of lawfulness, fairness, and transparency. Organisations are therefore advised to do the following to ensure that data protection duties are not neglected:

Identification

First and foremost, organisations should identify whether, and which, AI systems are used to make decisions or predictions about individuals. You should clarify whether personal data, and in particular special categories of personal data, is processed, what the purposes of the processing are, and how a lawful basis can be established.

Risk assessment

Where personal data is at stake in the AI system, organisations will have to conduct a DPIA, which focusses on the potential risks to the rights and freedoms of affected individuals. This will allow you to identify the risks and the means of mitigating them.

Documentation and implementation of an AI system

Organisations that either develop an AI system themselves or procure one from a third party are data controllers and are thus obliged to comply with data protection laws and principles.

You should document the process behind the design and the implementation of the AI system and its outcomes. This documentation should be understandable to people with different levels of technical knowledge and should be used as evidence to explain how decisions were made.

Measures include privacy by design as well as other technical and data security standards. If you procure the AI system from a third party, the contract with that party must ensure that relevant due diligence and data security obligations are incorporated.

In practice, this means that you must ensure that

  • the steps involved in the creation and deployment of an AI system are explained, helping to produce and develop company policies and procedures to address and mitigate specific risks,
  • each step that has been taken in the design and deployment phase is documented to explain potential outcomes, and
  • any third-party supplier of AI systems can explain how AI has produced its output.
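
To give a flavour of what such documentation could look like at the level of individual outcomes, here is a minimal, hypothetical sketch that records each AI decision together with its inputs, model version and a plain-language explanation. The file format, function and field names are our assumptions, not requirements from the guidance:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(subject_id: str, model_version: str,
                    inputs: dict, output: str, explanation: str) -> str:
    """Append one structured, human-readable record per AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,  # plain-language reason for the outcome
    }
    line = json.dumps(record)
    with open("ai_decision_log.jsonl", "a") as f:  # hypothetical log file
        f.write(line + "\n")
    return line

log_ai_decision("subject-001", "credit-model-v2",
                {"age_band": "30-39", "income_band": "B"},
                "refer", "Income band below automatic-approval threshold.")
```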

Monitoring

To ensure that updates or changes to AI systems will not negatively affect the fulfilment of data protection obligations, controllers should undertake ongoing monitoring, particularly concerning data accuracy and quality.
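
A minimal sketch of such monitoring, under the assumption that you periodically re-test the system on fresh labelled data, might compare live accuracy against a baseline recorded at deployment. The baseline and tolerance values below are illustrative assumptions:

```python
# Ongoing monitoring sketch: re-test the model periodically on fresh labelled
# data and alert if accuracy drops below the level recorded at deployment.
BASELINE_ACCURACY = 0.92   # measured when the system was approved (assumption)
TOLERANCE = 0.05           # acceptable drop before escalation (assumption)

def check_accuracy(predictions: list, ground_truth: list) -> None:
    accuracy = sum(p == t for p, t in zip(predictions, ground_truth)) / len(ground_truth)
    if accuracy < BASELINE_ACCURACY - TOLERANCE:
        # In practice: raise an alert, pause the system, trigger a review.
        print(f"ALERT: accuracy {accuracy:.0%} fell below baseline {BASELINE_ACCURACY:.0%}")
    else:
        print(f"OK: accuracy {accuracy:.0%}")

check_accuracy(["a", "b", "a", "a"], ["a", "b", "b", "b"])  # -> ALERT: 50% ...
```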

Concluding remarks

In its update to the guidance, the ICO places even greater emphasis on the point that risk mitigation at the design stage is essential to building a basis for compliance with data protection principles. It further highlights that the development and use of AI are growing and evolving and that the authority will accordingly continue to work on clarifying how compliance with data protection principles can be ensured.

On a final note, AI can become a great tool for making your organisation even more efficient and innovative and for adding variety to your output. However, you may use AI on personal data only if you can comply with the provisions of data protection laws. To keep the personal data you hold and process secure and safe, you may be able to use AI only in very limited circumstances, or only on data that does not relate to individuals.

Internal guidelines, handbooks or strict instructions on the use of AI can help your staff use AI in a way that puts you at the forefront of your industry while implementing data protection.