Partner Susan Thompson and Associate Andrew Lloyd explore the employment law implications of the “tech revolution” in the workplace
Susan and Andrew’s article was published in Relocate Magazine on 17 May 2022, and can be found here.
The last few years in the workplace have seen a “tech revolution”, and the pandemic has accelerated changes that were already taking place. Offices are going paperless, meetings have moved from in person to online, and new technology has made working from home possible in many occupations. As communication and document management become increasingly computerised, companies are also looking for similar tech-related efficiencies elsewhere.
During the multiple lockdowns, there was a marked increase in the use of artificial intelligence (AI) for interviews. Candidates found themselves answering pre-recorded questions to blank screens, and their CVs were increasingly sifted by algorithms before being seen by any HR professional or hiring manager.
Virtual reality (VR) and facial recognition have taken this process even further. Algorithms can judge not only a candidate’s answers but also their tone and general demeanour. VR headsets have even been used to simulate virtual work environments.
However, despite the increasing reliance on technology in the workplace, AI has its limits. Employers still require human input, as relying on AI alone is not always possible. This is particularly true when judging subjective or technical skills, such as a makeup artist’s technique, as was the case for Estée Lauder.
More importantly, employers need to be able to justify their decisions. This is not merely good business practice; it is essential if employers are going to protect themselves from potential employment claims. Relying on AI in a redundancy situation is particularly high risk. An employee facing termination of employment by reason of redundancy is more likely to challenge a decision than, say, a candidate seeking employment.
Risks of AI
Estée Lauder has discovered the risks of relying on AI first-hand, and to its detriment. It recently partnered with HireVue, a company best known for using software to interview and screen candidates. HireVue has carried out more than 25 million interviews and claims that its software improves decision-making and efficiency.
HireVue has argued that its system encourages diversity by avoiding human bias. Unconscious bias (while still a controversial concept) is of increasing concern to companies, and efforts to tackle unconscious bias are becoming part of many companies’ ESG and diversity plans. For large companies, this is becoming more important with the move towards gender pay gap reporting and diversity targets (as well as growing public pressure for employers to demonstrate a commitment to equality).
However, while it is possible that human decision makers have unconscious bias, companies should be careful about assuming that AI-based algorithms are impartial. Ultimately, all software reflects the people that programmed it and the data that was inputted.
Amazon previously created an AI tool designed to use data submitted by successful job applicants over the previous 10 years to decide who would be invited for a face-to-face interview. It was expected that this system would be fairer and lead to a more diverse team. However, most of the data used to train the system came from successful male candidates, and Amazon’s algorithm taught itself that male candidates were preferable. Alleged unconscious bias in hiring managers was replaced by a system that actively favoured men. The system was scrapped in 2018.
A recent BBC documentary on this topic, “Computer Says No”, also addressed problems with automated recruitment. In some extreme cases, algorithms using facial recognition have (without clear explanation) negatively scored BAME candidates for demeanour. Other problems have included AI failing to understand regional accents and showing a preference for Received Pronunciation.
Estée Lauder’s HR team recently used its HireVue interview software in a redundancy process. It needed to make cuts to its team of makeup artists, and several team members were put through an AI interview process to see who would remain in place and who would be made redundant.
Notwithstanding the potential for inbuilt bias in AI, the main problem with Estée Lauder’s decision-making was that nobody knew how HireVue’s algorithm came to its decisions: neither the HR managers nor the employees at risk of redundancy knew why certain employees were selected.
Several of the women who were dismissed by Estée Lauder appealed the decision to make them redundant. When justifying the use of AI, Estée Lauder’s HR team were only able to say that over 15,000 data points had been used in the decision. That is not a satisfactory answer when nobody can explain what those data points were.
Estée Lauder’s redundancy process illustrates the problem with relying on very complicated algorithms to make decisions. Algorithms can be programmed to make decisions, but they are far worse at explaining how those decisions are reached. When a piece of software uses 15,000 data points, it is nigh on impossible to justify why the best score was given to one candidate rather than another.
The makeup artists at Estée Lauder had a strong sense of their technical skills and sales figures, which one would expect to be essential in a traditional redundancy process. It was not clear whether these were decisive, though. Other factors, such as demeanour and choice of words, were judged by the AI in addition to more measurable work skills, and it was not clear which factors were decisive to the AI.
When employees are placed at risk of redundancy, their employer needs to be able to explain why some of them are retained and others are dismissed. This can be based on objective measures (e.g. sales figures), subjective ones (e.g. flexibility, interpersonal skills) or a mixture of both. However, the system needs to be fair, reasonable and transparent.
Through the consultation process, employees must be able to challenge the decisions made about them and critique the scoring system if key skills are not accounted for. As Estée Lauder has discovered, that is very hard to do with advanced AI software that uses thousands of pre-programmed data points.
Interestingly, HireVue was not aware that its software was being used in a redundancy context. It appears that Estée Lauder simply used its normal interview software in a redundancy situation without alteration. While there are questions about the use of AI in recruitment generally, it is surprising that Estée Lauder chose to use, in a redundancy process, a system that is not able to give clear feedback. This is particularly the case for makeup artists, for whom technical and interpersonal skills are key.
When the employees dismissed by Estée Lauder brought claims, the company opted to settle. This is unsurprising, as it is difficult to imagine any tribunal ruling that a redundancy is fair when the decision makers (still officially the company management) could not explain how they reached their decisions. “The computer said so” would not be a satisfactory answer in any employment tribunal.
AI may be able to help streamline HR processes such as interviews and redundancies. Video interviews offer a cost-effective and flexible alternative for employers and for candidates who value being able to interview outside office hours. AI interviews are a growing business and are likely here to stay; the global recruitment tech market is expected to be worth £35 billion by 2028. However, companies need to remember that software is programmed by humans, and that algorithms learn from pre-existing human behaviour. Human judgement cannot be removed from something as final as termination of employment by reason of redundancy, and if a computer assists a human decision, the decision-maker needs to know what the computer has been asked to do. Furthermore, despite advances in technology, we are clearly not yet at the point where technology can eliminate human decision making or human bias entirely.