An AI Ethical Issue When Selecting People

An open-office environment with young employees working around several tables.

Last week I attended the Teaching and Learning with AI conference in Orlando, Florida. One session that got me thinking was Chrissann Ruehle’s talk about ethics in AI. She covered several areas of ethical concern, including the biases that AI can introduce. This kind of bias creates an ethical issue when selecting people for jobs because it can result in poor decisions that disadvantage people based on irrelevant factors. This means we have to be extremely careful in using AI to help us decide whom to hire, and not leave hiring decisions to the AI alone.

Job Relevance Is the Key to Sound Hiring

The best hiring practices base decisions on characteristics of people that are relevant to the job, that is, they have job relevance. This begins by identifying the KSAOs (Knowledge, Skill, Ability, and Other characteristics) necessary for job success, typically by conducting a job analysis. Assessment techniques are then chosen to determine applicant levels of the key KSAOs, allowing a match between the talents of the applicant and the requirements of the job. This traditional approach to selection purposely connects people to jobs. It is the foundation of legally sound hiring, and it results in hiring the best talent because those hired have the right stuff for the job. It is the approach that graduate students in industrial-organizational psychology learn as they take coursework in how best to achieve the match between applicants and jobs.

Bias Undermines Sound Hiring

Bias occurs when irrelevant information enters the decision process, undermining the match between applicant and job. Biases can be based on a wide variety of characteristics, such as age, gender, ethnicity/race, physical appearance, or aspects of background that are irrelevant. We know that there have been historic biases against certain groups that have led to laws in many countries making discrimination illegal. But there can be other kinds of bias that have little to do with group membership. One hiring manager might be biased in favor of people who are tall, whereas another might prefer people who are sports enthusiasts. These biases are not always conscious; sometimes a hiring manager might get a gut feeling that someone is right for the job without being fully aware of the reasons. One of my graduate school mentors, Herb Meyer, related a story from his industry career of asking a group of hiring managers if height was an important characteristic for being an engineer. They laughed and thought he was joking until he told them that they were only hiring tall engineers. They were unaware of that bias.

The image at the top of this blog article illustrates AI bias (the idea comes from Kevin Yee’s preconference talk). I asked DALL-E to generate an image of eight people at work. Note that they are all young and fit, the men have beards and are better dressed than the women, and it is a white-collar setting in an open-office environment. This is typical of the pictures it gives me whenever I ask for a picture of people working. Maybe this looks like the offices at OpenAI, the company that developed DALL-E, but it is hardly a representative picture of the working population.

An AI Ethical Issue When Selecting People

To use AI for selection, we “train” the AI to identify the characteristics of successful performers on the job. The characteristics that the AI learns are generally not easy to identify. Whereas the traditional approach intentionally matches people’s characteristics to job requirements, the AI relies on something more like prototype matching: it has a snapshot of what a good performer looks like and attempts to match individuals to that snapshot. This is not unlike hiring managers choosing whoever gives them the best gut feeling. What those managers are looking for is also unclear, and this is where bias can creep in. Hiring biases that exist in the world can be amplified if the AI learns to model them. For example, when given examples of successful executives, the AI might learn that they are usually male, that they usually have MBA degrees, or that they are tall, simply because a sample of past executives is likely to consist of tall men with MBAs.
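To make the idea concrete, here is a toy sketch of prototype matching. The features and numbers are entirely hypothetical; the point is that if an irrelevant attribute (here, height) is present in the training examples, it becomes part of the prototype and can dominate the match:

```python
import math

# Hypothetical feature vectors for past "successful" employees:
# (years_experience, has_mba as 0/1, height_cm). Height is deliberately
# included to show how an irrelevant feature sneaks into the prototype.
successful = [(8, 1, 188), (10, 1, 190), (7, 1, 185)]

# The "prototype" is simply the average successful employee.
prototype = tuple(sum(col) / len(col) for col in zip(*successful))

def distance(applicant, proto):
    """Euclidean distance from an applicant to the prototype (lower = better match)."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(applicant, proto)))

applicants = {
    "tall, no MBA": (9, 0, 189),
    "short, with MBA": (9, 1, 160),
}
for name, feats in applicants.items():
    print(name, "-> distance", round(distance(feats, prototype), 1))
```

Because the features are left unscaled, the height column swamps the distance calculation, so the tall applicant without an MBA matches the prototype far better than the short applicant with one. That is exactly the kind of hidden, irrelevant pattern described above, baked into the model by the training examples rather than chosen by anyone.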

Supervise AI To Reduce Ethical Issues

It should be kept in mind that when it comes to important decisions, whether hiring people, choosing students for a school, or something else, AI is a tool for people to use, not a replacement for people. We need to keep a close eye on AI and scrutinize its recommendations. There are several actions that can be taken.

  • Pay attention to the training set. The old adage “garbage in, garbage out” applies to the training of AI. If you are hiring for a particular position, it is important that the characteristics the AI is learning about are relevant to the job. Avoid giving the AI irrelevant information, like demographics, that can result in bias.
  • Conduct validation studies. With traditional selection approaches, we conduct research to see how well our assessments predict job performance. We can do the same with AI. Ask the AI to classify a sample of job applicants as high versus low potential, and then follow up six months after hiring to see how well the AI did.
  • Check for adverse impact. In the U.S., adverse impact means that some demographic groups are less likely to be hired than others. In the traditional KSAO approach, we check for adverse impact, and when we find it, we can sometimes adjust our methods to minimize it. The same can be done with an AI.
  • Model the AI. As the AI makes recommendations about applicants, check the characteristics of those identified as high versus low potential. See if you can find patterns in the AI’s judgments. Does the AI appear to be using college degree, college major, years of experience, or other information? You can then check whether those characteristics relate to job performance.

Keep in mind that the AI is a tool intended to help managers make hiring decisions. Used properly it can reduce subjectivity, but there can be an ethical issue when selecting people because the AI can introduce bias. The use of AI for selection is still very new, so it should be applied with caution until we better understand how to overcome the ethical issues it can create.

Image generated by DALL-E 4.0. Prompt: “Image of eight people at work.”

