The algorithms are touted as being free from human bias; however, empirical evidence suggests otherwise
Technology has become an indispensable part of life. Organisations and individuals are exploring and adopting innovations that are bringing an unprecedented shift in our lifestyles. Have you ever thought about artificial intelligence performing the task of recruiting a suitable candidate? A job aspirant records a short introductory video, uploads a text resume and shares links to her social media profiles on a job application portal. A computer then applies artificial intelligence to analyse the text, videos and social media posts, and tells the recruiter whether the candidate is the right fit for the position. While such advancement might still sound like the distant future, some organisations are already using algorithms to screen the profiles of prospective employees. Giving further impetus to the automation drive, several suppliers offer a wide array of tools for screening resumes, vetting digital reputation and analysing video interviews, using supposedly advanced (and unbiased) machine learning algorithms.
However, the fundamental question is: how effective are these tools, and can they achieve the objective of shortlisting the right people for the organisation?
Social media and digital reputation: Organisations at times wish to evaluate the ‘digital reputation’ of candidates by analysing their online behaviour and social media activity. The primary objective of this exercise is to better understand the prospective employee’s personality, moral standing and socio-political viewpoints. There have been instances where personal opinions expressed on social media platforms have led to dismissals and lost opportunities for individuals. For example, the actor Hartley Sawyer was fired from the well-known TV series The Flash after his seemingly racist and misogynist tweets surfaced online. It may therefore help job seekers to self-evaluate their digital reputation and manage it by remaining thoughtful while expressing views on digital and social media platforms.
Algorithms & Recruitment: For organisations receiving a large number of applications, it is beneficial to have software that can quickly scan resumes and shortlist promising candidates. Algorithmic screening of profiles promises to remove the prejudices and subjectivity involved in the conventional recruitment process. The proliferation of machine learning systems in recruitment and staffing is so extensive that recruiters can now get automated suggestions on how well an applicant matches a specific job role. Using certain behavioural indicators, artificial intelligence systems can even attempt to predict the degree of compatibility between a job applicant and her future supervisor.
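To make the idea concrete, here is a minimal, purely illustrative sketch of how keyword-based resume screening works at its simplest. The function names and sample resumes are hypothetical; real applicant-tracking systems are far more sophisticated than this.

```python
# Minimal sketch of keyword-based resume screening (illustrative only;
# the names and data here are hypothetical, not from any real product).

def score_resume(resume_text, required_skills):
    """Count how many required skills appear in the resume text."""
    text = resume_text.lower()
    return sum(1 for skill in required_skills if skill.lower() in text)

def shortlist(resumes, required_skills, threshold=2):
    """Return applicants whose resumes mention enough required skills."""
    return [name for name, text in resumes.items()
            if score_resume(text, required_skills) >= threshold]

resumes = {
    "Applicant A": "Experienced in Python, SQL and data analysis.",
    "Applicant B": "Background in marketing and communications.",
}

print(shortlist(resumes, ["Python", "SQL", "data analysis"]))
# -> ['Applicant A']
```

Even this toy version shows the appeal: the rule is applied identically to every resume, with no human mood or fatigue involved. What it hides, as discussed below, is that the choice of keywords and thresholds can itself encode bias.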
Digital reputation and automation in recruitment: A word of caution
While vetting digital reputation and automating the screening process show immense potential, recruiters need to be careful about the pitfalls of extrapolating the results. Scraping social media data and posts raises the issue of privacy, and the question of whether applicants wish to be evaluated on their social media postings at all. Moreover, it is yet to be convincingly demonstrated that social media postings correlate with the good or bad performance of prospective employees. On the flip side, how would recruiters react if candidates chose to remove their presence from social media and professional networks precisely because they are being evaluated on those parameters?
The algorithms are touted as being free from human bias; however, empirical evidence suggests otherwise. For instance, Amazon realised that its machine learning recruitment tool was biased against women. In another instance, an algorithm that treated commute distance as a retention parameter rejected applicants located far from the office; it was then found that most applicants staying on the outskirts of the city (and hence facing longer commutes) belonged to poorer and historically disadvantaged social groups. Yet another anomaly was noted while vetting a resume screening algorithm that used two key parameters, playing ‘lacrosse’ and being named ‘Jared’, to predict applicants’ job success. Such instances arose primarily because of pre-existing biases in the training data used to build the deep learning models, where inputs were drawn from earlier (biased) hiring practices and the attributes of best-performing employees.
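The commute-distance example above can be sketched in a few lines of code. The data below is entirely hypothetical and the "model" is deliberately naive, but it shows the mechanism: when past decisions already disfavoured a group, a system trained on those decisions reproduces the bias under the guise of an objective rule.

```python
# Illustrative sketch with hypothetical data: a naive model trained on
# historically biased hiring outcomes "learns" commute distance as a
# proxy for suitability, penalising applicants from the city outskirts.

historical_data = [
    # (commute_km, was_hired) -- past decisions already disfavoured
    # applicants who lived far from the office
    (5, True), (8, True), (6, True),
    (30, False), (35, False), (28, False),
]

def learn_cutoff(data):
    """'Learn' a commute cutoff: the midpoint between the average commute
    of hired and rejected applicants in the (biased) training data."""
    hired = [km for km, flag in data if flag]
    rejected = [km for km, flag in data if not flag]
    return (sum(hired) / len(hired) + sum(rejected) / len(rejected)) / 2

cutoff = learn_cutoff(historical_data)

def predict(commute_km):
    """Replicates the historical pattern: reject anyone beyond the
    cutoff, regardless of actual qualifications."""
    return commute_km <= cutoff

print(predict(4))   # shortlisted
print(predict(32))  # rejected purely on commute distance
```

Nothing in this code mentions social group or income, yet because commute distance correlates with where disadvantaged groups can afford to live, the rule discriminates all the same. This is why removing protected attributes from the data is not, by itself, a fix.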
While automation and digital reputation checks have emerged as widely adopted tools, the larger question to bear in mind is: to what extent can recruitment and staffing processes be impersonalised and still be acceptable? A model which promises to lend objectivity to the process by producing highly convincing predictive factors may, in reality, have little ability to predict desirable performance in a foolproof manner. The minimum that recruiters can do is remain aware of the interplay of external environmental factors and the potential discriminatory biases embedded in the training data sets used for evaluating resumes and digital reputation.
The authors are Randhir Kumar, Assistant Professor, Indian Institute of Management Calcutta, and Prakriti Dasgupta, Consultant, People in Business (India) LLP, Bengaluru.
DISCLAIMER: Views expressed are the authors' own, and Outlook Money does not necessarily subscribe to them. Outlook Money shall not be responsible for any damage caused to any person/organisation directly or indirectly.