Employers increasingly rely on algorithms and automated decision-making to recruit new employees. While this is a time-saving and efficient way to identify the most suitable applicants, companies must comply with data protection rules and weigh other potential considerations. This article explains the ICO’s suggestions to companies for ensuring an effective, non-discriminatory and compliant use of algorithms for hiring purposes.
Principle of lawfulness, fairness and transparency

Decisions made in a hiring process are often more subjective than objective. While a company’s HR department is trained to pick the best candidate based on their qualifications, be aware that algorithms are human-made and not neutral. Thus, they often have embedded biases that can lead to discrimination. If you wish to use automated decision-making in your hiring processes, it should be part of your data protection impact assessment (DPIA) to make sure AI is a necessary and proportionate solution before you start processing personal data. Do not forget that the processing of personal data, especially the sensitive personal data often included in applications, must be in line with the principle of lawfulness, fairness and transparency. Thus, any processing of personal data must have a legal basis and be fair and transparent towards data subjects. Accordingly, all algorithms must be fair and transparent, which means that any processing of an applicant’s data must not have adverse, unjustified effects on the individual. In particular, sensitive personal data, such as religious affiliations or health data, must not lead to discrimination.
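One way to make a bias assessment concrete in a DPIA is to compare selection rates between applicant groups before a screening model goes live. The sketch below is purely illustrative and is not part of the ICO’s guidance: the group data and the 0.8 threshold (the "four-fifths rule" used in US employment practice, borrowed here only as an example benchmark) are assumptions.

```python
# Illustrative sketch (assumed data and threshold, not ICO guidance):
# compare shortlisting rates between two applicant groups to flag a
# potential disparate impact for investigation in the DPIA.

def selection_rate(decisions):
    """Fraction of applicants in a group who were shortlisted (1 = yes, 0 = no)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.

    Values well below 1.0 suggest the algorithm may disadvantage one
    group and that the effect must be justified or remedied.
    """
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical screening outcomes for two groups of eight applicants:
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 0.25

ratio = adverse_impact_ratio(group_a, group_b)
print(f"adverse impact ratio: {ratio:.2f}")  # adverse impact ratio: 0.40
if ratio < 0.8:  # example threshold only
    print("Potential disparate impact - review the model and document in the DPIA")
```

A check like this does not replace the DPIA itself; it only turns the "adverse, unjustified effects" question into something measurable and documentable.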
DPIA in the context of HR

Based on European Data Protection Board (EDPB) Guidelines, the ICO provides a list of processing operations for which you are required to complete a DPIA, as these are “likely to result in high risk”. The ICO’s list is non-exhaustive, which means that processing activities not listed may also require a DPIA. Regardless, the ICO considers it “best practice” to conduct a DPIA, whether or not the processing is likely to result in a high risk, in order to ensure that all processing activities are in line with the respective data protection principles. The table below does not include all processing operations that require a DPIA; however, it gives you an overview and demonstrates the requirement to carry out a DPIA when using algorithms for employment decisions:
| Type of processing | Description | Examples |
| --- | --- | --- |
| Innovative technology | Processing involving the use of new technologies, or the novel application of existing technologies (including AI). A DPIA is required for any intended processing operation(s) involving innovative use of technologies. | |
| Denial of service | Decisions about an individual’s access to a product, service, opportunity or benefit which are based to any extent on automated decision-making, or which involve processing of special-category data. | |
| Large-scale profiling | Any profiling of individuals on a large scale. | |
| Biometric data | Any processing of biometric data for the purpose of uniquely identifying an individual. | |
| Tracking | Processing which involves tracking an individual’s geolocation or behaviour, including but not limited to the online environment. | |
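A hiring team can turn the criteria above into a simple screening step in its compliance workflow. The sketch below is a hypothetical illustration: the criterion names and the shape of the `processing` dictionary are assumptions, not an ICO API.

```python
# Hypothetical DPIA screening step based on the high-risk criteria in the
# table above. Criterion keys and the input format are assumptions for
# illustration only.

HIGH_RISK_CRITERIA = {
    "innovative_technology": "Novel use of technologies, including AI",
    "denial_of_service": "Automated decisions affecting access to an opportunity",
    "large_scale_profiling": "Profiling of individuals on a large scale",
    "biometric_data": "Biometric data used to uniquely identify a person",
    "tracking": "Tracking an individual's geolocation or behaviour",
}

def dpia_required(processing: dict) -> bool:
    """Return True if any listed high-risk criterion applies.

    Note: the ICO's list is non-exhaustive, so a False result does not
    mean a DPIA is unnecessary - it remains best practice to do one.
    """
    return any(processing.get(criterion, False) for criterion in HIGH_RISK_CRITERIA)

# An AI-based CV screening tool typically meets at least two criteria:
cv_screening = {"innovative_technology": True, "denial_of_service": True}
print(dpia_required(cv_screening))  # True
```

As the docstring notes, this kind of checklist can only flag when a DPIA is clearly required; it cannot establish that one is unnecessary.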
Obligations of UK law

Although UK data protection law is straightforward in that regard, the UK Equality Act 2010 states that indirect discrimination can be justified if it is proportionate. Therefore, in your DPIA you must assess whether possible discriminatory effects could be justified as proportionate, and in any case provide appropriate safeguards and technical measures for the phase in which the algorithm is designed and implemented. In addition, the EU General Data Protection Regulation (GDPR) prohibits “solely automated decision-making that has a legal or similarly significant effect”, unless one of three exceptions applies:
- explicit consent,
- necessary to enter into a contract, or
- authorised by Union or Member State law.
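The three exceptions above can be enforced as a gate in a hiring pipeline: unless one applies, the application must be routed to human review rather than decided solely by the algorithm. The sketch below is an assumed illustration of that gate; the basis labels and the helper function are hypothetical, not taken from the GDPR text.

```python
# Illustrative gate (assumed helper names, not from the GDPR text):
# permit a solely automated hiring decision only when one of the three
# Article 22 exceptions applies; otherwise require human review.

ARTICLE_22_EXCEPTIONS = (
    "explicit_consent",       # the data subject's explicit consent
    "contract_necessity",     # necessary to enter into a contract
    "authorised_by_law",      # authorised by Union or Member State law
)

def solely_automated_decision_allowed(basis: str) -> bool:
    """True only if the stated legal basis matches an Article 22 exception."""
    return basis in ARTICLE_22_EXCEPTIONS

for basis in ("explicit_consent", "legitimate_interest"):
    if solely_automated_decision_allowed(basis):
        print(f"{basis}: solely automated decision permitted (with safeguards)")
    else:
        print(f"{basis}: not permitted - route application to human review")
```

Even where an exception applies, appropriate safeguards (such as the right to obtain human intervention) still need to be in place.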