Artificial intelligence is increasingly people's interviewer, colleague and competition. As it burrows further into the workplace and into different job functions, it can take over certain tasks, learn over time and even hold conversations. Many of us may not even be aware that the "who" we're talking to is actually a "what."
In 2017, 61 percent of businesses said they implemented AI, compared to 38 percent in 2016, according to the "Outlook on Artificial Intelligence in the Enterprise 2018" report from Narrative Science, an artificial intelligence company, in collaboration with the National Business Research Institute. In the communication arena, 43 percent of these businesses said they send AI-powered communications to employees.
Many candidates don't even realize that they're not speaking to a human, according to Sahil Sahni, co-founder of computer software company AllyO, which uses an AI-enabled chatbot to speak to candidates and answer questions in the recruiting process.
Based on data from AllyO's applicants, he found that fewer than 30 percent of candidates think they're speaking to something that isn't human. The other 70 percent either did not disclose what they thought or believed there was a person behind the chatbot.
AllyO does not disclose up front that the candidate is not speaking to a human. However, if a candidate asks outright whether they are speaking to a person or an AI-enabled chatbot, the system discloses that information. "The goal is not to goof anyone here. The goal is to have the best candidate experience. Lying about it is not the best candidate experience," Sahni said.

Candidates don't behave differently when speaking to an AI as opposed to a human, Sahni added.
"When you're a job seeker, it's not like you're calling customer service to complain about something. You're at your best behavior," he said. "You tend to be a lot more tolerant, you tend to be a lot more respectful, no matter what the process might be."
Dennis R. Mortensen, CEO and founder of New York-based technology company X.ai, also has access to conversations between people and machine agents; his team spent the past four years assembling a data set of more than 10 million emails from these dialogues. They similarly found that people don't communicate differently just because they're speaking to a robot.
Citing X.ai's own personal assistants Amy and Andrew as an example, he said, "It would be very easy to imagine that I will treat them like machines and remove any level of emotion otherwise applied to a traditional conversation with a human, or that the system as a whole would not leave any room for empathy toward the machine. I am happy to say that it is not the case."
This is not to say that everyone treats a machine with respect. People who tend to be aggressive or rude with a real person carry that same communication style into their conversations with a machine. The same holds for people who are neutral or overly friendly in how they speak to others.
How potential employees actually speak to AI is a different conversation from how they should speak to AI, he added. That is, it's unclear whether how a person treats a machine says anything about how that person would treat other people, and it's unclear whether something like rudeness to a machine agent should affect a candidate's job prospects.
"We can certainly agree that we do care if it's a human recruiting coordinator," Mortensen said. But machines have no feelings or emotions and cannot be offended, so it would be easy to argue that employers shouldn't care. Ultimately, "I do think we should care even if it is a machine," Mortensen said. "I understand why we might care a little bit less, but I don't think we can just discard that as a signal."
He cited a report that found this technology could have implications for how kids learn to communicate, teaching them that speaking harshly or impolitely to people has no consequences.
"In real life there's a penalty to being an asshole," Mortensen said.
Limits and Capabilities of AI in the Hiring Process
Machine learning allows AI to gain knowledge over time and learn from its interactions, much as a person would. That said, even though it can mature in its own way and become more humanlike over time, it still isn't human, and certain questions, such as those about company culture, may require a person to answer, according to Sahni.
AI systems are capable of taking this into account. For example, AllyO can recognize when a candidate asks a question that a machine cannot answer and bring in a person who can, Sahni said. This way, the candidate has a positive experience and doesn't feel they've lost out by not speaking to a real person.
"If the process is objective, AI knocks it out of the park. If the process has any subjectivity to it, AI does really well looping in the hiring team," he said. "A good AI system typically has human support behind it."
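Sahni doesn't detail AllyO's internals, but the hand-off he describes is easy to sketch. The following minimal Python sketch shows one hypothetical way a chatbot could answer objective questions itself and loop a human in on subjective ones; the keywords, canned answers and notify_recruiter stub are illustrative assumptions, not AllyO's actual implementation.

```python
# Minimal, hypothetical sketch of machine-to-human escalation in a
# recruiting chatbot. Keywords, canned answers and notify_recruiter are
# illustrative stand-ins, not AllyO's actual system.

OBJECTIVE_ANSWERS = {
    "status": "Your application is under review; you'll hear back soon.",
    "schedule": "You can pick an interview slot from the link in our email.",
    "requirements": "The role's requirements are listed in the job posting.",
}

ESCALATION_REPLY = ("Great question! I've looped in a member of the hiring "
                    "team who will follow up with you shortly.")

def notify_recruiter(message: str) -> None:
    """Stand-in for paging a human (email, Slack, ticketing, etc.)."""
    print(f"[to hiring team] candidate asked: {message!r}")

def handle_candidate_message(message: str) -> str:
    """Answer objective questions directly; escalate subjective ones."""
    lowered = message.lower()
    for keyword, answer in OBJECTIVE_ANSWERS.items():
        if keyword in lowered:
            return answer
    # Subjective or unrecognized questions (e.g., company culture) go to
    # a human, so the candidate still gets a real answer.
    notify_recruiter(message)
    return ESCALATION_REPLY

print(handle_candidate_message("When can I schedule my interview?"))
print(handle_candidate_message("What is the company culture like?"))
```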
Much like people themselves, AI has the potential for bias, according to Eric Shangle, director of people operations at AI platform Figure Eight, based in San Francisco. For example, Wired reported in July 2018 that Amazon's facial recognition system Rekognition confused many black members of Congress with publicly available mugshots, and that facial recognition technology's trouble detecting darker skin tones is a well-established problem.
One reason a tool may be biased is training data bias, Shangle said. On the development side of machine learning, a tool's creator must supply a data set to train the algorithm, and if that data set is not diverse, an employer using the tool may run into bias blind spots.
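To make Shangle's point concrete, here is a minimal, hypothetical sketch of the kind of representation audit that can surface training data bias before a model is trained. The field names, toy rows and threshold are invented for illustration; a real audit would run against the tool's actual training data.

```python
# Hypothetical sketch of a training-data representation audit.
# Field names, rows and the min_share threshold are invented examples.

from collections import Counter

def audit_representation(rows: list[dict], group_key: str,
                         min_share: float = 0.3) -> None:
    """Print each group's share of the data and flag underrepresented ones."""
    counts = Counter(row[group_key] for row in rows)
    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- underrepresented" if share < min_share else ""
        print(f"{group_key}={group}: {n}/{total} ({share:.0%}){flag}")

# A toy face-image training set, skewed toward lighter skin tones.
training_rows = [
    {"skin_tone": "lighter"},
    {"skin_tone": "lighter"},
    {"skin_tone": "lighter"},
    {"skin_tone": "darker"},
]
audit_representation(training_rows, "skin_tone")
# skin_tone=lighter: 3/4 (75%)
# skin_tone=darker: 1/4 (25%)  <-- underrepresented
```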
"What are the biases of this tool?" is a legitimate question for employers who are looking to purchase a machine learning tool such as facial recognition software, Shangle said. A recruiting tool may, for example, have a bias toward college-educated job seekers.
David Dalka, founder of Chicago-based management consulting company Fearless Revival, agrees that AI has its limits. He holds a more traditional view of what recruiting should look like, arguing that companies should invest less in technology and more in human recruiters who stay with the company long-term, know its culture and know what kind of person would best fit the job, rather than hunting for trendy keywords or job titles in résumés.
"I'm not opposed to AI tools if someone built the full data library of all the factors and stopped focusing trivially on things like job titles," he said.
He suggested that companies should more carefully consider the attributes that matter in a candidate (Do they read any books? Are they naturally curious? What are their skills and degrees?) and consider how they would weigh these attributes in an AI system. Ultimately, AI is simply a tool that analyzes content.
"This idea that some wizard will magically create this black box that will hire the right people without you thinking of these things is a fallacy," Dalka said.
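Dalka's argument lends itself to a small illustration: if the attributes and their weights must come from human judgment, they can be made explicit rather than hidden in a black box. Below is a hypothetical Python sketch; the attribute names and weights are invented examples based on his questions.

```python
# Hypothetical sketch of Dalka's point: a person, not a black box, decides
# which candidate attributes matter and how much. Attributes and weights
# are invented examples drawn from his questions.

WEIGHTS = {
    "reads_books": 0.2,        # "Do they read any books?"
    "naturally_curious": 0.3,  # "Are they naturally curious?"
    "relevant_skills": 0.4,    # "What are their skills..."
    "relevant_degree": 0.1,    # "...and degrees?"
}

def score_candidate(attributes: dict[str, float]) -> float:
    """Weighted sum of 0-1 attribute scores; weights encode human judgment."""
    return sum(weight * attributes.get(name, 0.0)
               for name, weight in WEIGHTS.items())

candidate = {"reads_books": 1.0, "naturally_curious": 0.8,
             "relevant_skills": 0.6, "relevant_degree": 1.0}
print(f"score: {score_candidate(candidate):.2f}")  # score: 0.78
```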
This article originally appeared in Talent Economy.