
Workforce

Tag: artificial intelligence

Posted on April 14, 2020 (updated June 29, 2023)

Regulating recruiting amid constant technological innovations


As recruiters adopt advanced technologies in their quest to identify, court and hire candidates, attorneys are looking into the legal and regulatory issues those new tools may bring into play.

Lawyers, recruiting experts and technology vendors say legal teams are examining compliance concerns even as their colleagues in HR and IT evaluate products that leverage artificial intelligence, machine learning and other innovative approaches. Not only are they exploring the ramifications of privacy requirements such as Europe’s GDPR, they’re considering the possible impact of biases that may be inherent in a data set or unwittingly applied by algorithms.

“I think we’re at the beginning of sorting out what all this means, but I think it’s definitely something people are thinking about,” said Jeffrey Bosley, San Francisco-based partner in the labor and employment practice of law firm Davis Wright Tremaine. “It’s a new technology and it’s evolving. Whenever you have a new technology, you do have growing pains and you do have these issues that come up,” he said.

Advanced technologies have gotten much attention recently, particularly as people inside and outside the business world consider the impact AI may have on jobs and livelihoods. At the same time, some well-intentioned efforts have generated media coverage for results that were diametrically opposed to what their developers set out to do.

In 2018, for example, Amazon abandoned an effort to build a machine-learning tool for recruiters after the system proved to be favoring men over women. According to Reuters, the tool downgraded resumes that included the word “women’s” as well as the graduates of two all-women’s colleges.

Also read: Is there room for an ethics code for tech companies?

Sources inside Amazon said the system, which had been under development since 2014, was meant to review resumes so recruiters could spend more time building candidate relationships and actually hiring people. It worked by comparing applicants against patterns found among resumes the company had received over a 10-year period. However, it didn’t account for the dominance of men in the technology workforce. As a result, the system taught itself that male candidates were stronger than female ones.
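
That failure mode is easy to reproduce in miniature. The sketch below uses invented resumes and deliberately biased historical labels, not Amazon’s data or model; it simply shows how a classifier trained on skewed outcomes absorbs a proxy token (“women”) as a negative signal.

```python
# Hypothetical illustration: a screener trained on skewed historical
# hiring outcomes learns to penalize a proxy token ("women").
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented resumes and labels; the labels encode past bias, not ability.
resumes = [
    "captain men's chess club, software engineer",
    "software engineer, systems design",
    "captain women's chess club, software engineer",
    "women's college graduate, software engineer",
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on "women" comes out negative: the old bias now
# lives inside the model, with no explicit gender field anywhere.
print(model.coef_[0][vec.vocabulary_["women"]])
```

No gender field is needed; a correlated token is enough, which is why scrubbing obvious attributes rarely cures a biased training set.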

Advanced technology “is at an awkward stage where it’s not really intelligent,” said William Tincup, president of the industry website RecruitingDaily.com. While he sees great potential for AI and other tools to streamline the work of recruiters and even address bias in the hiring process, he believes systems are limited in how much they can accomplish.

Why? In a word, people. “What are machines learning from? They’re learning from humans,” Tincup said. Hiring managers can’t help but operate with a number of possible preconceptions in their minds, from unconscious bias about race or gender to a preference for the candidate they most recently interviewed or who seems the most like themselves. Such biases, Tincup observed, live on in the makeup of a company’s existing workforce. And that leads to the troubles Amazon faced, where the data set reflects decisions made in the past more than it positions a process to understand needs of the future.

Technology Races Ahead

The situation is complicated by the idea that technology has outpaced legal and business practices. While they believe that will eventually change, analysts and technology vendors don’t see it changing quickly. 

“Right now, technology’s moving super-fast,” said Ankit Somani, co-founder of the talent acquisition and management platform AllyO, headquartered in Palo Alto, California. “Generally, regulators and the folks who control compliance standards don’t move so quickly. But, honestly, we’re like three lawsuits away from somebody taking it very seriously.”

Also read: Artificial intelligence is a double-edged sword. Here’s how HR leaders can properly wield it

 “Therein lies a real big rub,” Tincup said of regulation’s lag behind talent acquisition and HR practices. Nearly all of the processes involved with turning candidates into employees touch some kind of employment law or EEOC-related issues, but “all of those rules are outdated,” he said. “We’ve been working outside of the rules for 15 or 20 years. I would argue that there isn’t a company in the United States that’s 100 percent compliant from sourcing to outplacement.”

Talent acquisition teams, and HR in general, understand that and are beginning to adapt, said Brian Delle Donne, president of Talent Tech Labs, an industry analyst and consulting firm based in New York. However, he believes determining exactly how and where compliance fits in with the use of new technologies has been complicated by the way “artificial intelligence” has been “grossly generalized” in industry conversations.

“Most of the time they’re talking about machine learning, or sometimes just automated workflow processing,” Delle Donne said. “When you get into true artificial intelligence, where the machine is making decisions, it’s a higher threshold that’s required for our concern about the accuracy of [its] recommendations and predictions.” The distinction between true AI and what might be called “advanced technology” is important, he believes, because people assume that the machine is prescient when it’s usually not. “In most cases, it will be quite a while until machines are actually making decisions on their own,” Delle Donne observed.

Even in its current state, the use of advanced technology has become widespread enough to raise concerns about whether it might, inadvertently, nudge an employer out of compliance. For example, AI-driven tools may use personal information in unplanned ways that a candidate hasn’t given permission for. That would raise privacy concerns. Or, tools might present results that, intentionally or not, run afoul of fair-employment legislation. “On both fronts, you’re talking about compliance statutory norms,” said Delle Donne.

AI’s Behavior

Such concerns, along with widespread speculation about AI’s impact, have made advanced technology “front of mind for many people,” said Bosley. In response, governments at all levels have begun generating “a patchwork” of laws that sometimes conflict with one another.

For example, Illinois’s Artificial Intelligence Video Interview Act went into effect Jan. 1, 2020. The law sets out transparency and consent requirements for video interviews, as well as limits on who can view the interviews and how long they can be stored. However, Bosley said, the law’s mandate to destroy videos within 30 days may conflict with the preservation requirements of other state and federal laws, including the Civil Rights Act of 1964 and the Americans with Disabilities Act.

Also read: How Will Staney continues to change the talent acquisition game

“It puts employers in a position where they’re really going to need to assess risk,” Bosley said. “They’re going to need to come up with creative solutions to try and work around some of this risk.” 

Not all employers may feel exposed in the near term, Tincup suggested. He estimates that each year only a handful of legal actions are taken because of a candidate’s unhappiness with the recruiting process. People practices, technology practices and civil and social discourse are “way ahead of employment law,” he explained. “So is this something that’s going to create an immense amount of risk? No.” Employers today, he believes, put themselves at more risk by hiring a salesperson with a history of sexual harassment. In that regard, “you could spend more money in risk mitigation … than in recruitment technology,” he said.

At the same time, an organization’s risk may be based on activities that aren’t related to recruiting or the workforce, Bosley points out. “This isn’t just a human resources issue anymore. It’s not only an employment law issue anymore. It’s much broader than that,” he said. “You have data protection, data compliance, privacy and the potential for disparate impact claims as opposed to disparate treatment claims.”

Bosley anticipates more claims will be filed that look into a database’s contents, what data’s being looked at, how it’s being processed and whether algorithms are static or refined over time. Essentially, these claims will examine how advanced technology is making its decisions. “It’s going to be something where human resources leaders are looking to involve others in the organization and make sure that they’re both issue-spotting and getting ahead of some of these compliance issues,” he said.

 Indeed, Somani believes this notion of “explainability” — laying out what a system does and how it’s doing it — will become more important in the realms of recruiting technology and compliance. “There should, in my mind, be more compliance standards around that,” he said.
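
What “explainability” might look like in its simplest form: report which inputs pushed a score up or down. The features, data and linear model below are invented for illustration, not any vendor’s method.

```python
# A minimal "explainability" sketch: per-feature contributions of a
# linear screening model. All features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["years_experience", "skills_match", "referral"]
X = np.array([[2, 0.4, 0], [7, 0.9, 1], [4, 0.7, 0], [1, 0.2, 0]])
y = np.array([0, 1, 1, 0])  # toy past screening decisions

model = LogisticRegression().fit(X, y)

candidate = np.array([3.0, 0.8, 0.0])
# For a linear model, coefficient * value is a per-feature contribution.
for name, contrib in zip(features, model.coef_[0] * candidate):
    print(f"{name}: {contrib:+.2f}")
print("score:", model.predict_proba([candidate])[0, 1])
```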

Evolving Standards

Even at a basic level, compliance standards for using technology in recruiting “don’t exist,” Somani said. For example, does texting about a job opportunity constitute a form of marketing? Is such a text permissible if it’s personalized? Because the answer’s not clear, he believes many companies are putting stricter guidelines in place.

Somani also said legal departments are becoming more involved in the purchase and implementation of recruiting technology. For tools handling communications, such as those that facilitate SMS messaging between recruiters and candidates, they’re trying to anticipate issues by creating policies that cover not only privacy, but data collection and permissions. “It’s an explicit ask in almost every deal we go into: ‘If a consumer doesn’t want to interact with your system, how do you follow that?’ ” he said. When it comes to issues related to AI’s under-the-hood work, vendors focus on transparency and disclosure by presenting disclaimers on their product or within their privacy policies.  
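
To make the opt-out requirement concrete, here is a hedged sketch of one way a recruiting-messaging tool might honor it. The keywords, replies and in-memory store are assumptions for illustration, not AllyO’s implementation.

```python
# Hypothetical opt-out handling for recruiting SMS: honor STOP-style
# keywords before any further messages are sent.
OPT_OUT_KEYWORDS = {"stop", "unsubscribe", "quit", "cancel", "end"}
opted_out: set[str] = set()  # in practice, a persistent store

def handle_inbound(phone: str, body: str) -> str:
    text = body.strip().lower()
    if text in OPT_OUT_KEYWORDS:
        opted_out.add(phone)
        return "You will receive no further messages. Reply START to opt back in."
    if text == "start":
        opted_out.discard(phone)
        return "You are re-subscribed to job updates."
    return "Thanks! A recruiter will follow up shortly."

def send_outbound(phone: str, body: str) -> bool:
    if phone in opted_out:
        return False  # suppressed: candidate opted out
    print(f"SMS to {phone}: {body}")
    return True
```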

For enterprises, compliance issues “can be a deal-breaker,” at least at the corporate level, said Megan Gimbar, the Holmdel, New Jersey-based product marketing manager for iCIMS Hiring Suite. While compliance and consistency are important components of her product, she said, talent acquisition teams often shy away from the topic.

In the past, employers tried to ensure compliance through training. Their approach, said Delle Donne, was to make hiring managers aware of interview questions that shouldn’t be asked (such as inquiring whether a woman intended to have children) or information that shouldn’t be considered (the candidate’s age or ZIP code). “That’s a fairly low bar,” he observed.

The bar began getting higher “once we started saying algorithms are going to make that determination for us,” Delle Donne continued. “Algorithms might actually do a better job, [or] may actually be set up in a way that they might do a better job, than humans do at avoiding compliance issues through bias.” However, he said, that requires planning and a focus on non-discrimination features when algorithms are designed.

Also read: The ethics of AI in the workplace

Compliance Further Afield

The compliance issues raised by using AI in recruiting aren’t limited to talent acquisition alone. For one thing, Somani notes, recruiters today leverage a variety of tools that were introduced into other functions. 

Think of how candidate management systems and customer management systems align. When using those technologies, compliance may involve adapting the standards used by marketing or sales so they can be applied to talent acquisition and HR.

That road goes both ways. Even solutions designed for recruiters raise issues that aren’t unique to hiring, Delle Donne said. “As HR tries to digitize, there are many, many places where technology can streamline processes and save time and perhaps be more beneficial to the employee or the party,” he said. Many, if not all, of those will lead to some kind of compliance question. For example, a bot used in benefits administration may build a profile of confidential medical information. Or, a learning program might enter performance scores into an employee record without informing the employee. That could be a problem if those scores impact a person’s future promotions or career path.

As it digitizes, the tools implemented by HR “will bring in these technologies and there’s going to have to be some focus or some attention given to not inadvertently creating bias or discrimination, or revealing private information,” Delle Donne said. “If you take a step back, it just could be like whack-a-mole. I mean, ‘Hey, we see it over here in talent acquisition. Let’s go chase that down and… Oh, wait. We just saw this going on over there.’”

Scheduling employees is one major HR task for which technology can help. Make more accurate, data-driven scheduling decisions in just a few clicks with Workforce.com’s comprehensive scheduling software.

Posted on March 9, 2020 (updated October 18, 2024)

The ethical use of AI on low-wage workers


The impact of technology has not been equal among different segments of employees. 

The introduction of automation and artificial intelligence-enabled labor management systems raises significant questions about workers’ rights and safety, according to the “AI Now 2019 Report,” which explores the social implications of AI technologies. AI Now is a nonprofit that works with stakeholders such as academic researchers, policymakers and impacted communities to understand and address issues raised by the introduction of AI.

While the use of these systems puts more power and control in the hands of the company, it also harms mainly low-wage workers, who are disproportionately people of color, according to the report. These systems don’t work for employees when they set unrealistic productivity goals that can lead to injury or psychological stress and when they impose “unpredictable algorithmic wage cuts” on gig workers that undermine their financial stability, for example. 

Also read: Should there be a code of ethics in technology? 

Hourly workers such as warehouse workers may be adversely impacted by AI-enabled workforce management systems.

Lower-wage workers stand to lose the most with the rise of automation while white-collar workers are generally unaffected, the report noted. It cited a McKinsey & Co. study that concluded “labor automation will further exacerbate the racial wealth gap in the U.S. absent any interventions.” 

Unions have been the traditional way for workers to contest harmful practices, but many employees don’t have access to union membership and many fear retaliation if they bring up their concerns. Meanwhile, the report noted, tech companies such as Amazon are using many tactics to prevent unions from forming in their workforces. For example, whistleblowers have disclosed that in a time of employee unrest, Google hired a consulting firm “known for its anti-union work.” 

It’s critical to get the perspective of hourly workers on how technology is playing into their lives, said Annelies M. Goger, a David M. Rubenstein Fellow at the Brookings Institution. Her research focuses on workforce development policy, the future of work and inclusive economic development. She was not talking about unions specifically in her interview with Workforce, but she did stress the importance of respecting and addressing employees’ concerns.

There are certain aspects of how technology is used in their jobs that hourly workers may appreciate, but they also have concerns or frustrations about issues like the influx of automated checkout lines and lack of consistency in scheduling, she said. 

“There’s a range of people who really want to embrace technology, but they want to make sure that workers have a voice at the table and that they have a way to provide feedback,” she said. 

These employees may also have concerns when management changes at their company, Goger said. 

As restructuring takes place, new management might not take into account the needs of hourly workers, and these employees end up having less input into the quality of their jobs. 

Also read: Ensuring equity in the digital age

“Food, retail and grocery workers have witnessed rapid change in recent years, especially in the front end of their stores. Most feel they lack voices in these changes and feel pessimistic about the future for humans in their stores,” according to “Worker Voices: Technology and the Future for Workers,” a November 2019 paper by Molly Kinder and Amanda Lenhart. Kinder is a David M. Rubenstein Fellow at the Brookings Institution’s Metropolitan Policy Program and a nonresident Senior Fellow at New America. Lenhart is the deputy director of the Better Life Lab at New America. 

“Worker Voices” also noted that low-wage workers’ low pay and economic insecurity are a barrier to preparing for jobs that aren’t as impacted by new technology. An excerpt:

“While technological change is not the direct cause of workers’ precarity, it can add insult to injury. Automation and the adoption of new workplace technologies can exacerbate financial insecurity when jobs change, wages or hours are suppressed, or when workers are displaced altogether. Economic insecurity also limits workers’ resilience to technology changes by undermining their ability to weather a job transition, pay for training or schooling, and move into better paying—and less automatable—work. If workers cannot afford to make ends meet today, they will be ill-equipped to prepare for tomorrow. Raising income, reducing inequality and improving the economic security of workers is key to enabling a better future of work for those at greatest risk of change.”

Skill development is on some people’s minds. Chris Havrilla, leader of the HR technology practice for Bersin, Deloitte Consulting LLP, said that one application of AI could be to go through data and find potential new roles for people, in terms of talent mobility. From there, organizations can think about what employees need to accomplish and possibly help them develop the skills they need to get there. 

“I’m seeing some interesting things around, ‘We don’t want to lose people who already know how to work within our organization. How do we help them find other roles that might be applicable to them?’” she said. 

Posted on February 3, 2020 (updated June 29, 2023)

The future of recruiting technology


There are more open jobs than talent to fill them and companies are willing to try anything to win this war. That’s great news for recruiting technology firms that promise companies innovative solutions to find, engage and hire quality candidates.

Venture capitalists continue to court the recruiting tech sector, delivering yet another record-breaking year of investment. By the end of the third quarter of 2019, VCs had invested more than $4 billion in recruiting technology firms, and the industry was expected to cross $5 billion by the year’s end. Many of those investments went to recruiting platforms, including Jobvite, which received $200 million in February to acquire three new recruiting platforms for its portfolio; SmartRecruiters, which raised $50 million in May; and Fountain, a platform to hire gig and hourly workers that landed $23 million in October.

“The war for talent is not going away,” said Denise Moulton, vice president of HR and talent research at Bersin, Deloitte Consulting in Boston. However, companies are getting smarter about how they select and validate the impact of their recruiting technology.

“There are new solutions coming to market all the time,” she said. That is putting pressure on vendors to demonstrate value if they want clients to stick around.

Some companies need tools that will help them more effectively uncover passive candidates, figure out how to mine the former applicant pool and identify internal talent who might be perfect for a current opening. Others are more focused on automation tools to help them engage with candidates, conduct video interviews or improve and track the candidate journey.

AI Is Finally Paying Off

Many of these tools now feature artificial intelligence to add to the value proposition. And that’s finally a good thing.

The industry has been talking about AI in recruiting for years, but the current generation of tools is actually making an impact, Moulton said. “AI is boosting productivity, helping to analyze candidate pools, and making it easier to keep track of people who you want to keep in your funnel,” she said.

The use of AI and automation is freeing recruiters to become advisers, focusing on building relationships and capturing data to track outcomes, said Jared Goralnick, group product manager for LinkedIn in San Francisco. “Analytics are helping them set realistic expectations about the size of the talent pool, and the ability to reach new talent.”

AI-driven analytics are also reducing the time to fill key roles and helping companies address diversity and inclusion goals. “These tools can be game changers,” Moulton said. Though, as always, they only work if you have the expertise to ask the right questions and enough data to generate meaningful, unbiased analysis. “The more data you can feed (a system) the smarter it gets over time,” she said.

Skills Versus Experience

The other big trend in recruiting tech is the rising use of assessments, as companies look for ways to vet candidates’ skills and attitude, along with their qualifications and experience. “Assessments are critical if you want to build a funnel of candidates that will be relevant today and for the long term,” Moulton said.

Several vendors have acquired assessments companies, including SHL’s November purchase of Aspiring Minds, an AI-driven talent assessment and interviewing platform; Hired’s February acquisition of Py, an app that assesses candidates’ coding skills; and Mercer’s 2018 acquisition of Mettl, an India-based talent assessment firm.

And other firms are building their own assessments. Most notably, late last year LinkedIn launched its Skills Assessment feature, which lets users complete dozens of free skills assessments that they can add to their profiles. The early assessments focus primarily on technical skills, but the company plans to introduce soft skills and personality assessments over time.

“It will make it easier for candidates to highlight their skills, and for recruiters to filter their searches,” said Goralnick. It will also make the search process more relevant for candidates and companies. He noted that LinkedIn research shows 69 percent of professionals think their skills are more important than their college education, and 76 percent would like to be able to verify their skills as a way to stand out in a candidate pool. The assessments will help them do that, he said.

Moulton urged companies to be thoughtful about the technology they choose and to be sure it will add measurable value to the talent acquisition process.

“You can’t pick up every shiny new penny,” she said. “You have to figure out what your team will really use and how it will integrate into the workflow.”

Posted on December 23, 2019 (updated June 29, 2023)

Artificial Intelligence Is a Double-Edged Sword. Here’s How HR Leaders Can Properly Wield It


Unemployment in the United States stands at a 50-year low. The quit rate of workers hovers near an all-time high. And the number of open jobs continues to outpace the number of unemployed individuals.

Workers have reaped the benefit of this employment boom, through more job options and bigger paychecks. But it has ramped up pressure on HR departments grappling with recruiting and retaining top talent.

To help overcome these challenges, many are eyeing a double-edged sword: artificial intelligence. AI holds immense promise.

AI, technology that mimics human thinking by making assumptions, learning, reasoning, problem-solving or predicting, helps humans figure out whom to hire and how to keep them. So far, its benefits for HR have outweighed any setbacks.

But if HR departments wield AI without a proper understanding of it, they risk playing with peoples’ lives and their company’s brand. Indeed, a flawed AI program, or one used without the proper safeguards, could lead to hiring the wrong person, missing a deserved promotion, or systemic bias in the hiring process.

Few HR departments fully grasp AI’s potential and limitations. And that’s understandable. After all, AI’s role in human resources is still relatively new. HR departments, being first and foremost people-focused, often trail behind other departments in learning the latest technological innovations. Furthermore, HR now stands to benefit from the combination of maturing AI technology and the high volume of accessible data that powers it. In recent years, the availability of data has increased exponentially.

So avoiding AI’s pitfalls and seizing its opportunities starts with HR leaders knowing what they’re dealing with.

Today, some HR departments experiment with fairly basic forms of AI, such as platforms that scour thousands of online résumés to uncover and rank candidates against specified job requirements.

But the more advanced forms of artificial intelligence — programs that become more autonomous and smarter over time — will require greater caution. Imagine a training platform akin to Netflix’s recommendation engine, suggesting customized development resources and shaping a tailored employee learning path. Or think of a compensation program that monitors employee performance against real-time market trends to suggest the timing and size of pay increases for maximum retention.

It’s a tantalizing vision but one rife with peril. Even tech-savvy companies have run into problems. One retailer took the well-intentioned step of using AI to enhance its recruiting, but when the software developed systemic bias, it had to pull the plug. Trained on the data of past hires — predominantly men — the AI quickly “learned” to penalize female candidates, downgrading attendees of women’s colleges, for example.

This does not mean that organizations should shy away from AI. The technologies now coming online may truly revolutionize HR’s ability to find, hire, engage and develop talent, but only as part of a coherent plan with vigilance from the top.

To realize the gains and avoid the dangers, organizational leaders should:

  • Identify HR processes that could capitalize on a combination of machine and human intelligence — with the former’s computational muscle augmenting the latter’s judgment. Machine intelligence can analyze more data, more rapidly, than humans can. It can also spot patterns or correlations between factors that a human analyst might miss (see the sketch after this list). For example, AI tools could recommend coaching topics that would accelerate time-to-productivity for new hires in a specific role.
  • Collaborate with other functions to determine how to best use AI in the company. As content experts, HR should lead the process, identifying areas that could be automated or where AI could be leveraged. IT should be an initial partner, but legal, risk management, data protection, data security, communications and even corporate social responsibility may also play roles.
  • Employ AI to “fix” AI — and humans. Tools like IBM’s Watson Recruitment suite already combine hiring data and natural language processing to detect unseen bias. According to IBM, it can spot whether past bias patterns are being reproduced — and fix them. AI scientists hope to increase transparency so that they, as well as skeptical auditors, will be able to see what’s going on inside. This will help them root out latent forms of bias and monitor the stability of their models over time.
  • Create new roles that facilitate the adoption of AI. AI-for-HR is likely to lead to new or expanded human resources roles. AI expert Tom Davenport suggests three clusters of such jobs: trainers, who will teach cognitive technologies about capabilities; explainers, who explicate the process and results; and sustainers, who ensure the systems are performing well from an HR perspective.
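
As referenced in the first item above, here is a minimal sketch of machine-assisted pattern spotting. The columns and values are invented; a real analysis would need far more data and controls before trusting any correlation.

```python
# Hypothetical scan of an HR dataset for correlated factors.
import pandas as pd

df = pd.DataFrame({
    "coaching_hours":      [2, 8, 5, 1, 9, 4, 7, 3],
    "prior_role_overlap":  [0.2, 0.8, 0.5, 0.1, 0.9, 0.4, 0.7, 0.3],
    "weeks_to_productive": [14, 6, 9, 15, 5, 10, 7, 12],
})

# Pairwise correlations; strong values against the outcome column are
# candidates for a closer look, not conclusions.
print(df.corr()["weeks_to_productive"].sort_values())
```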

In the coming years, the United States will continue to experience a tight labor market. Seismic demographic shifts will persist, including the exodus of baby boomers from the workforce and too few new entrants to replace them. Such trends will only perpetuate the challenges for those charged with hiring and retaining talent. HR professionals will need to employ AI — carefully and intentionally.

Amy Lui Abel is the vice president of The Conference Board’s Human Capital Center.

Posted on August 7, 2019 (updated June 29, 2023)

The Ethics of Artificial Intelligence in the Workplace


Artificial intelligence is a branch of computer science dealing with the simulation of intelligent behavior in computers or the capability of a machine to imitate intelligent human behavior.

Despite its nascent nature, the ubiquity of AI applications is already transforming everyday life for the better.

Whether discussing smart assistants like Apple’s Siri or Amazon’s Alexa, applications for better customer service or the ability to utilize big data insights to streamline and enhance operations, AI is quickly becoming an essential tool of modern life and business.

In fact, according to statistics from Adobe, only 15 percent of enterprises are using AI as of today, but 31 percent are expected to add it over the coming 12 months, and the share of jobs requiring AI has increased by 450 percent since 2013.

Leveraging clues from their environment, artificially intelligent systems are programmed by humans to solve problems, assess risks, make predictions and take actions based on input data.

Cementing the “intelligent” aspect of AI, advances in technology have led to the development of machine learning, which makes predictions or decisions without being explicitly programmed to perform the task. With machine learning, algorithms and statistical models allow systems to “learn” from data and make decisions by relying on patterns and inference instead of specific instructions.
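
The distinction from explicit programming fits in a few lines. In this toy sketch with invented data, the first function’s behavior is spelled out by a programmer, while the second model infers its rule from labeled examples.

```python
# Explicit rule vs. behavior learned from data (toy example).
from sklearn.tree import DecisionTreeClassifier

def rule_based(length: int, link_count: int) -> int:
    # The programmer states the rule directly.
    return 1 if link_count > 2 else 0

# The model is never told the rule; it infers one from examples.
X = [[120, 0], [40, 5], [300, 1], [25, 4], [200, 0], [30, 6]]
y = [0, 1, 0, 1, 0, 1]
learned = DecisionTreeClassifier().fit(X, y)

print(rule_based(80, 3), learned.predict([[80, 3]])[0])
```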

Unfortunately, the possibility of creating machines that can think raises myriad ethical issues. From pre-existing biases used to train AI to social manipulation via newsfeed algorithms and privacy invasions via facial recognition, ethical issues are cropping up as AI continues to expand in importance and utilization. This notion highlights the need for legitimate conversation surrounding how we can responsibly build and adopt these technologies.

How Do We Keep AI-Generated Data Safe, Private and Secure?

As an increasing number of AI-enabled devices are developed and utilized by consumers and enterprises around the globe, the need to keep those devices secure has never been more important. AI’s increasing capabilities and utilization dramatically increase the opportunity for nefarious uses. Consider the dangerous potential of autonomous vehicles and weapons like armed drones falling under the control of bad actors.

As a result of this peril, it has become crucial that IT departments, consumers, business leaders and the government fully understand cybercriminal strategies that could lead to an AI-driven threat environment. If they don’t, maintaining the security of these traditionally insecure devices and protecting an organization’s digital transformation becomes a nearly impossible endeavor.

How can we ensure safety for a technology that is designed to learn how to modify its own behavior? Developers can’t always determine how or why AI systems take various actions, and this will likely only grow more difficult as AI consumes more data and grows exponentially more complex.

For example, should law enforcement be able to access information recorded by AI devices like Amazon’s Alexa? In late 2018, a New Hampshire judge ordered the tech giant to turn over two days of Amazon Echo recordings in a double murder case. However, legal protections around this type of privacy-invading software remain unclear.

How Should Facial Recognition Technology Be Used?

The latest facial recognition applications can detect faces in a crowd with amazing accuracy. As such, applications for criminal identification and for determining the identity of missing people are growing in popularity. But these solutions also invoke a lot of criticism regarding legality and ethics.

People shouldn’t have to worry that law enforcement officials are going to improperly investigate or arrest them because a poorly designed computer system misidentified them. Unfortunately, this is becoming a reality, and the consequences of inaccurate facial recognition surveillance could turn deadly.

According to a 2017 blog post, Amazon recommended using its facial recognition system, Rekognition, with a confidence threshold of 85 percent, and it raised that recommendation to a 99 percent threshold not long after. But studies from the ACLU and MIT revealed that Rekognition had significantly higher error rates in determining the demographic traits of certain members of the population than Amazon purported.
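
The effect of that threshold is simple to demonstrate. The similarity scores below are invented rather than real API output; the point is only how the cutoff changes what counts as a “match.”

```python
# Filtering hypothetical face-match results at two confidence levels.
matches = [
    {"name": "person_a", "similarity": 99.2},
    {"name": "person_b", "similarity": 91.5},
    {"name": "person_c", "similarity": 86.0},
]

for threshold in (85, 99):
    kept = [m["name"] for m in matches if m["similarity"] >= threshold]
    print(f"threshold {threshold}: {kept}")

# At 85 all three count as matches; at 99 only one survives. Lower
# thresholds trade more hits for more potential misidentifications.
```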

Beyond accuracy (and the lack thereof in many cases), the other significant issue facing the technology is an abuse of its implementation — the “big brother” aspect.

In order to address privacy concerns, the U.S. Senate is reviewing the Commercial Facial Recognition Privacy Act, which seeks to implement legal changes that require companies to inform users before facial recognition data is acquired. This is in addition to the Biometric Information Privacy Act of Illinois, which is not specifically targeted at facial recognition but requires organizations to obtain consent before acquiring biometric information; that consent cannot be given by default but must result from an affirmative action.

As San Francisco works to ban use of the technology by local law enforcement, the divisive debate over the use — or potential misuse — of facial recognition rages on. The public needs to consider whether the use of facial recognition is about safety, surveillance and convenience or if it’s simply a way for advertisers or the government to track us. What is the government and private sector’s responsibility in using facial recognition and when is the line crossed?

How Should AI Be Used to Monitor the Public Activity of Citizens?

The future of personalized marketing and advertising is already here. AI can be combined with previous purchase behavior to tailor experiences for consumers and allow them to find what they are looking for faster. But don’t forget that AI systems are created by humans, who can be biased and judgmental. While more personalized and connected to an individual’s identity, this application of AI technology could feel like an invasion of privacy by surfacing information and preferences that a buyer would prefer to keep secret. Additionally, this solution would require storing an incredible amount of data, which may not be feasible or ethical.

Consider the notion that companies may be misleading you into giving away rights to your data. The upshot is that these organizations can now detect and target the most depressed, lonely or outraged people in society. Consider the instance when Target determined that a teenage girl was pregnant and started to send her coupons for baby items according to her “pregnancy score.” Her unsuspecting father was none too pleased about his high-schooler receiving ads that, in his mind, encouraged his daughter to get pregnant — and he let the retail giant know about it.

Unfortunately, not only are businesses gathering eye-opening amounts of information — many are being racially, economically and socially selective with the data being collected. And by allowing discriminatory ads to slip through the net, companies are opening a Pandora’s box of ethical issues.

How Far Will AI Go to Improve Customer Service?

Today, AI is often employed to complement the role of human employees, freeing them up to complete the most interesting and useful tasks. Rather than focusing on the time-consuming, arduous jobs, AI now allows employees to focus on how to harness the speed, reach and efficiency of AI to work even more intelligently. AI systems can remove a significant amount of friction borne from interactions between customers and employees.

Thinking back to the advent of Google’s advertising business model and then the launch of Amazon’s product recommendation engine and Netflix’s ubiquitous “suggested for you” algorithm, consumers face a dizzying number of targeted offers. Sometimes this can be really convenient, as when you notice that your favorite author has come out with a new book or the next season of a popular show has launched. Other times it comes across as incredibly invasive and seemingly in violation of basic privacy rights.

As AI becomes more prominent across the enterprise, its application is a new issue that society has never been forced to consider or manage before. While the application of AI delivers a lot of good, it can also be used to harm people in various ways, and the best way to combat ethical issues is to be very transparent. Consequently, we — as technology developers and manufacturers, marketers and people in the tech space — have a social and ethical responsibility to be open to scrutiny and consider the ethics of artificial intelligence, working to hinder the misuse and potential negative effects of these new AI technologies.

Rob Carpenter is the founder and CEO of Valyant AI, a Colorado-based artificial intelligence company focused on customer service in the quick-serve restaurant industry.

Posted on July 1, 2019 (updated June 27, 2019)

Could Video Interviewing Land You in Court?


Companies using artificial intelligence to assess video interviews should be aware of a new law on the books.

In May the Illinois Legislature unanimously passed the Artificial Intelligence Video Interview Act, which requires employers to notify candidates that AI will be used to assess their interviews, explain what elements it will look for, and secure candidates’ consent.

Those that don’t could face future litigation.

The legislation, which is expected to be signed by Gov. J.B. Pritzker this summer, addresses the risk of hidden biases, explained Mark Girouard, a labor and employment attorney for Nilan Johnson Lewis in Minneapolis. “As with any use of AI in recruiting, this law comes from concerns about how observations in the interview correlate to business value.”

Also read: Monitor Responsibly: How Employers Are Using Workplace Surveillance Devices

AI assessments of a video interview use machine-learning algorithms that are taught what to look for by studying existing data sets and finding correlations. For example, it might determine that candidates who use certain phrases, or speak at a certain speed, have the right attributes to do well in a role, based on data captured about previous high performers.

Replicating Bias

This is a valuable and efficient way to prescreen candidates, and it can potentially eliminate human bias from the process. However, if the data sets the algorithm learns from are inherently biased, the algorithm can adopt those biases, perpetuating the problem, Girouard says. For example, it might identify certain word choices, facial expressions or even skin tone as a consistent theme among high performers, even though those features don’t align with performance.

“If algorithms are trained correctly they shouldn’t replicate bias,” Girouard says. “But if they aren’t they can amplify disadvantage.”

Kevin Parker, CEO of HireVue, a video interviewing software company that offers AI-driven assessment services, couldn’t agree more.

“We are in full support of this bill,” said Parker, who was invited by lawmakers to provide feedback on its content. He sees it as another way to address privacy and fairness in the recruiting process, and to set quality standards for the entire industry.

HireVue addresses concerns about bias by including organizational psychologists on the teams that work with customers, first to identify interview questions that will uncover the right criteria for success (empathy, problem solving, sociability), then to test those questions against a broad set of data to ensure they have no adverse impact.

Sometimes a problem will emerge, he noted. For example, when companies train algorithms using performance data from a predominantly middle-aged white male employee population, certain factors can introduce bias.

The testing process used to vet the interview questions can identify these biases, then the team will either eliminate the question or reduce the value of factors associated with those measures. “In this way we can neutralize biases before a single candidate is interviewed.”
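
One standard screen for the adverse impact Parker describes is the EEOC’s four-fifths rule: a group whose selection rate falls below 80 percent of the highest group’s rate is flagged. Here is a minimal sketch with hypothetical pass counts; the vendor’s actual testing methodology is not detailed in this article.

```python
# EEOC four-fifths rule: flag any group whose selection rate falls
# below 80% of the highest group's rate. Counts are hypothetical.
def impact_ratios(selected: dict, applicants: dict) -> dict:
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

ratios = impact_ratios(
    selected={"group_a": 48, "group_b": 30},
    applicants={"group_a": 100, "group_b": 100},
)
for group, ratio in ratios.items():
    print(f"{group}: ratio {ratio:.2f} {'FLAG' if ratio < 0.8 else 'ok'}")
```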

A Flood of Legislation

While this law has so far been enacted only in Illinois, it is likely the first of many such laws as concerns about AI’s impact on recruiting bias grow, Girouard warned. “It is the first drip of what is likely to be a flood of legislation.”

Also watch: Armen Berjikly on Communication Advances in AI

To protect themselves against later litigation, employers should educate themselves on what the law requires and on how they are addressing the risk of AI-driven bias in their current operations. He noted that most employers today can’t explain how the AI assessment works, what criteria it looks for or how those criteria align with performance success.

That’s a problem, he said. The law doesn’t just require employers to inform candidates about the technology, they also must be able to describe how the AI tool will interpret the interview and how it will be used in the selection process. “If you can’t explain it, it will be very hard for you to defend it in court.”

Posted on April 30, 2019 (updated June 29, 2023)

Instant Messaging: The Future of Communication, With Caveats

The days of face-to-face meetings and group emails may soon be coming to an end. From texting job candidates and using Slack for project management, to building artificially intelligent chatbots that answer questions about human resources, communication technology in the workplace is evolving. All of this is a good thing, said Sharon O’Dea, a digital and social media consultant based in the U.K. These tools enable faster, more efficient communication, via the devices employees have in their hands all the time, she said. “We all use instant messaging in our personal lives. It is natural to see that shift into the workplace.”

Younger workers are far more likely to choose text or Slack over email or phone calls, which they view as cumbersome and outdated, said Adam Ochstein, CEO of StratEx, an HR technology and consulting firm based in Chicago. Email can also be tricky for contract workers and frontline staff, who may rarely check their emails but always have their phones. “They want to communicate in real time with their fingers, not their voices.”

The use of instant and automated technology to support communication isn’t going away, so managers need to get on board or risk creating information gaps in the workplace. A 2017 report from Dynamic Signal found that only 17 percent of companies had recently invested in technology for internal communication, even though 73 percent said communicating company information to employees was a “serious challenge.”

While chatbots won’t be taking over the way we engage any time soon, the tools we use are evolving, and skeptical managers need to get on board, Ochstein said. “If you want to be an employer of choice for this generation, you’ve got to adapt.”

Conversation Bots

Along with changing how employees communicate, new technologies are also changing what information they can share, said Rob High, chief technology officer for IBM Watson, IBM’s cognitive computing system. “Artificial intelligence tools, at their most basic, improve the likelihood that employees can find and share information as they communicate.” This enables faster problem-solving and ensures they can make decisions based on data, not gut instinct. High envisions a day where AI conversation agents will be the third party in a conversation, automatically searching for information and providing context.

Also read: Meet Your New Colleague: Artificial Intelligence

High’s team has also created the AI-driven IBM tone analyzer, which uses linguistic analysis to examine the emotion in text messages. The goal is to help employees vet the “tone” of texts and emails, just as you might spell-check before hitting send. “It’s an efficient way to reduce misunderstandings,” he said. High believes AI technology will change the way we communicate at work and at home.
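
IBM’s service itself isn’t reproduced here, but the send-time tone check is easy to approximate. This stand-in uses NLTK’s open-source VADER sentiment scorer with an invented review threshold.

```python
# A rough stand-in for tone-checking a draft before sending (not
# IBM's Tone Analyzer): flag strongly negative wording for review.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def tone_check(draft: str, floor: float = -0.3) -> str:
    score = sia.polarity_scores(draft)["compound"]
    return "REVIEW: consider softening" if score < floor else "ok"

print(tone_check("This report is a disaster and you clearly didn't try."))
print(tone_check("Thanks for the draft; a few sections still need work."))
```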

O’Dea agreed. “Chatbots offer huge potential for employee communication. They can take over the tasks that are needlessly complicated.” She believes early applications will focus on things like filling out employment forms, requesting days off and accessing personal data. “Chatbots can provide employees with instant access to this information through an app, which is where they spend more of their time anyway,” she said. For those who think chatbots are too inhuman for workplace communication, O’Dea believes it’s the opposite. Many employee communication platforms and corporate emails are “generic and impersonal, but chatbots can have human conversations,” she said.
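
As a rough sketch of the task-focused chatbot O’Dea describes, reduced to its core: match a message against known intents and hand everything else to a person. The intents and phrasings are invented for illustration.

```python
# Minimal intent routing for a hypothetical HR chatbot.
INTENTS = {
    "pto_request": ("day off", "vacation", "pto", "leave"),
    "payslip": ("payslip", "pay stub", "salary statement"),
    "benefits": ("benefits", "insurance", "401k"),
}

def route(message: str) -> str:
    text = message.lower()
    for intent, phrases in INTENTS.items():
        if any(p in text for p in phrases):
            return intent
    return "handoff_to_human"  # anything unrecognized goes to a person

print(route("How do I request a day off next week?"))  # pto_request
print(route("My badge photo looks weird"))             # handoff_to_human
```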


The Trouble With Text

The adoption of instant communication in the workplace isn’t all good news. In an era of social sharing, the casual nature of texts in the workplace can put companies at risk. We’ve all read the stories of managers cursing out employees for some minor infraction or flirting in a way that makes someone uncomfortable, only to have those conversations go viral and result in someone getting fired.

“There is a fine line between casual conversations and inappropriate content, and instant messaging makes that line very easy to cross,” StratEx’s Ochstein said. It’s rarely intentional. He recalls a recent day at his own company when employees were using Slack to discuss whether the company’s “no-shorts policy” should be abandoned when temperatures rise above 90 degrees. That evolved into a conversation about why female employees were lucky because they can wear skirts, which led to a “guys vs. girls in the workplace” battle. “That’s when the HR team had to get involved and shut it down,” he said. “It was innocent banter, but all of [the] sudden it was going in a direction no one wanted.”

Such scenarios are all too common, particularly when teams work long hours together or are out celebrating a project success. “One person may think a text is funny, where the other thinks it’s inappropriate,” he said. “But once you send it, you can’t get it back.”

The instant nature of these tools also creates legal issues with hourly and gig workers. If a manager sends an email at night, it is assumed a contractor will respond the next day, but if they send an instant message the implication is that they expect an instant response. “Does that mean you have to pay them for that time?” Ochstein pondered. “Once you cross that chasm, the legal stuff can get bad.”

That doesn’t mean companies shouldn’t use instant messaging apps to interact with employees, but they should define clear policies for their use. Ochstein advised “over-communicating” to employees about texting protocol and reminding them that anything they say on text is as admissible as any other document. He also urged HR leaders to promote a culture of caution. “Encourage them to pause and think about whether a message could be construed as not respectful,” he said. “If there is any chance it could be construed as rude or not respectful, don’t send it.”

Posted on January 4, 2019 (updated June 29, 2023)

AI is coming — and HR is not prepared


The future of work will be driven by artificial intelligence, and HR is woefully ill equipped to make it happen — at least according to many reports about AI and HR.

IBM, PWC and Deloitte (among others) have all done surveys on AI’s impact on HR in the last 18 months, and the message is clear: companies want AI, but they don’t have the talent, leadership or confidence in their human resources team to make it happen.

IBM predicts that 120 million workers in the world’s 10 largest economies will need to be reskilled in the next few years to adapt to an AI-driven marketplace — and that if companies don’t get started soon they will quickly risk losing their competitive edge. Yet its “Unplug from the past” report found that just 28 percent of CHROs expect their enterprise to address changing workforce demographics with new strategies.

Even if companies are gearing up for an AI reskilling evolution, roughly half of their employees don’t think they can pull it off. A global study by Harris Insights in collaboration with IBM found that while more than 80 percent of employees in the U.S. and UK believe having AI skills will be a competitive advantage for their companies, 42 percent said they don’t believe their HR departments can execute it.

Deloitte’s “2018 Global Human Capital Trends” report showed a similar lack of confidence. It found that while 72 percent of respondents think adopting AI is important for their business, only 31 percent feel ready to address it. And research from PWC shows 63 percent of companies are rethinking the whole role of their human resources department in light of the impact AI will have on the business.

Part of the problem is HR’s historic lack of experience with data and analytics, said David Mallon, chief analyst for Bersin by Deloitte. “Every other part of the organization is accustomed to using data to support decisions, but not HR,” he said. “They lack data fluency.”

HR’s evolving role

But things need to change. If HR leaders want to stay relevant (and employed) they need to start thinking more strategically about their roles, said Chris Havrilla, VP of HR technology at Bersin by Deloitte. “They need to shift their mindset to be more data driven, and to see themselves as human teachers for the machine,” she said.

That starts with a change in culture, where data is used to make decisions about people in the same way other departments use data to track finances or manage the supply chain. “The notion that data should inform people decisions is new for a lot of companies,” she said.

HR also needs to think about how that data will help them reskill the workforce for an AI-driven future, said Amy Wright, managing partner of talent and transformation at IBM.

For example, HR leaders will have to reassess how they deliver training to employees and alert them to their own learning needs. “Employees are used to a personalized approach in their consumer lives and they want that in the workplace,” Wright said.

They don’t want to be given a list of full-length courses that may help them learn new skills. They want short, easy-to-consume learning nuggets that have been curated to teach them exactly what they need to know, when they need to know it. “AI-driven training platforms can deliver that personalization,” Wright said.

Also read: For Better or Worse, Artificial Intelligence for Talent Management Has Arrived

AI can also help HR to identify which employees might be best suited to be upskilled for new AI roles, to identify the actual skill gaps they have, and to customize a learning and development path based on others who’ve moved through the organization.

Do something

This transition won’t be easy. It will require HR leaders to upgrade their own skill sets while simultaneously upskilling their workforce and changing how the business functions.

It may sound overwhelming, but it doesn’t have to be, according to Wright. The key is to get started. “Don’t feel like you have to build an entire AI roadmap and plan everything out. Just pick a business problem in one unit and pilot a solution,” she said.

Starting small will allow HR to either fail fast or prove the benefits of AI — and their own ability to leverage it — which will help them win over stakeholders and bolster the workforce’s confidence in their ability to navigate this digital transformation.

“HR can be the growth engine of the organization,” Wright said. They just have to prove they can get it done.

Posted on November 9, 2018 (updated September 5, 2023)

Meet Your New Colleague: Artificial Intelligence


Artificial intelligence is increasingly people’s interviewer, colleague and competition. As it burrows further into the workplace and different job functions, it can take over certain tasks, learn over time and even hold conversations. Many of us may not even be aware that who we’re talking to isn’t even a “who” but a “what.”

In 2017, 61 percent of businesses said they implemented AI, compared to 38 percent in 2016, according to the “Outlook on Artificial Intelligence in the Enterprise 2018” report from Narrative Science, an artificial intelligence company, in collaboration with the National Business Research Institute. In the communication arena, 43 percent of these businesses said they send AI-powered communications to employees.

Many candidates don’t even realize that they’re not speaking to a human, according to Sahil Sahni, co-founder of computer software company AllyO, which uses an AI-enabled chatbot to speak to candidates and answer questions in the recruiting process.

Based on data from AllyO’s applicants, he found that fewer than 30 percent of candidates think they’re speaking to something that isn’t human. The other 70 percent either did not disclose what they thought or believed there was a person behind the chatbot.

AllyO does not disclose up front to the candidate that they are not speaking to a human. However, if they were to ask outright if they are speaking to a person or an AI-enabled chatbot, the system discloses that information. “The goal is not to goof anyone here. The goal is to have the best candidate experience. Lying about it is not the best candidate experience,” Sahni said.


Candidates don’t behave differently when speaking to an AI as opposed to a human, Sahni added.

“When you’re a job seeker, it’s not like you’re calling customer service to complain about something. You’re at your best behavior,” he said. “You tend to be a lot more tolerant, you tend to be a lot more respectful, no matter what the process might be.”

Dennis R. Mortensen, CEO and founder of New York-based technology company X.ai, also has access to conversations between people and machine agents, and his team spent the past four years assembling a data set of more than 10 million emails on these dialogues. Their findings have similarly found that people don’t communicate differently just because they’re speaking to a robot.

Giving X.ai’s own personal assistants Amy and Andrew as an example, he said, “It would be very easy to imagine that I will treat them like machines and remove any level of emotion otherwise applied to a traditional conversation with a human, or that the system as a whole would not leave any room for empathy toward the machine. I am happy to say that it is not the case.”

This is not to say that everyone treats a machine with respect. If people tend to be more aggressive or rude with a real person, that same communication style can be seen in how they converse with a machine. The same trend goes with people who are neutral or overly friendly in how they speak to others.


Also read: Artificial Intelligence, Automation and the Future of Talent Acquisition

How potential employees actually speak to AI is a different conversation than how potential employees should speak to AI, he added. That is, it’s unclear whether how a person treats a machine says anything about how that person would treat other people, and it’s unclear whether something like a person being rude to a machine agent should impact their job prospects.

“We can certainly agree that we do care if it’s a human recruiting coordinator,” Mortensen said. But machines have no feelings or emotions and cannot be offended, so it would be easy to argue why employers shouldn’t care. Ultimately, “I do think we should care even if it is a machine,” Mortensen said. “I understand why we might care a little bit less, but I don’t think we can just discard that as a signal.”

He gave the example of a report that found this technology could have implications for how kids learn to communicate, teaching them that speaking harshly or impolitely to people has no consequences.

“In real life there’s a penalty to being an asshole,” Mortensen said.

Limits and Capabilities of AI in the Hiring Process

Machine learning allows AI to gain knowledge over time and learn from its interactions, much like a person would. That said, even though it can mature in its own way and become more humanlike over time, it still isn’t human, and certain questions, such as questions about company culture, might have to be answered by a person, according to Sahni.

AI systems are capable of taking this into account. For example, AllyO can recognize when a candidate asks a question that cannot be answered by a machine and brings in a person who can answer that question, Sahni said. This way, the candidate can have a positive experience and not feel like they’ve lost out by not speaking to a real person.

“If the process is objective, AI knocks it out of the park. If the process has any subjectivity to it, AI does really well looping in the hiring team,” he said. “A good AI system typically has human support behind it.”
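
That human-in-the-loop pattern can be sketched directly. The FAQ entries and the cues that mark a question as subjective below are invented, since the article doesn’t detail AllyO’s actual routing logic.

```python
# Hypothetical objective-vs-subjective routing for a recruiting bot.
FAQ = {
    "what are the working hours": "Core hours are 10am-4pm, flexible otherwise.",
    "is the role remote": "The role is hybrid: two days a week on-site.",
}
SUBJECTIVE_CUES = ("culture", "like to work", "team feel", "manager style")

def answer(question: str) -> str:
    q = question.lower().strip("?! .")
    if q in FAQ:
        return FAQ[q]  # objective: the bot answers directly
    if any(cue in q for cue in SUBJECTIVE_CUES):
        return "Great question. Let me loop in a member of the hiring team."
    return "I'll check on that and a recruiter will follow up."

print(answer("Is the role remote?"))
print(answer("What's the company culture like to work in?"))
```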

Much like people themselves, AI has the potential for bias, according to Eric Shangle, director of people operations at AI platform Figure Eight, based in San Francisco. For example, Wired reported in July 2018 that Amazon’s facial recognition software system Rekognition confused many black members of Congress with publicly available mugshots, and that facial recognition technology’s trouble detecting darker skin tones is a well-established problem.

One reason why a tool may be biased is training data bias, Shangle said. From the developmental side of machine learning, the creator of a tool must input a data set to train the algorithm, and if it does not use a diverse data set, then an employer using the tool may come across bias blind spots.
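
That audit can begin before any model is trained, by checking how the training set itself is distributed. A toy sketch with invented group labels and an arbitrary warning cutoff:

```python
# Hypothetical balance check on a training set's group labels.
from collections import Counter

training_labels = ["light"] * 900 + ["dark"] * 100  # invented skew

counts = Counter(training_labels)
total = sum(counts.values())
for group, n in counts.items():
    share = n / total
    note = " <- potential blind spot" if share < 0.25 else ""
    print(f"{group}: {share:.0%}{note}")
```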

“What are the biases of this tool?” is a legitimate question for employers who are looking to purchase a machine learning tool such as facial recognition software, Shangle said. A recruiting tool may, for example, have a bias toward college-educated job seekers.

David Dalka, founder of Chicago-based management consulting company Fearless Revival, agrees that AI has its limits. He has a more traditional view of what recruiting should look like, arguing that companies should invest less in technology and more in human recruiters who work at the company long-term, know the company culture and know what kind of person would be a best fit for the job, rather than look for trendy keywords or job titles in résumés.

“I’m not opposed to AI tools if someone built the full data library of all the factors and stopped focusing trivially on things like job titles,” he said.

He suggested that companies should more carefully consider the attributes that matter in a candidate — Do they read any books? Are they naturally curious? What are their skills and degrees? — and consider how they would weigh these attributes in an AI system. Ultimately AI is simply a tool that analyzes content.

“This idea that some wizard will magically create this black box that will hire the right people without you thinking of these things is a fallacy,” Dalka said.

This article originally appeared in Talent Economy.


 
