
Workforce

Tag: data privacy

Posted on April 14, 2020 (updated June 29, 2023)

Regulating recruiting amid constant technological innovations


As recruiters adopt advanced technologies in their quest to identify, court and hire candidates, attorneys are looking into the legal and regulatory issues those new tools may bring into play.

Lawyers, recruiting experts and technology vendors say legal teams are examining compliance concerns even as their colleagues in HR and IT evaluate products that leverage artificial intelligence, machine learning and other innovative approaches. Not only are they exploring the ramifications of privacy requirements such as Europe’s GDPR, they’re considering the possible impact of biases that may be inherent in a data set or unwittingly applied by algorithms.

“I think we’re at the beginning of sorting out what all this means, but I think it’s definitely something people are thinking about,” said Jeffrey Bosley, San Francisco-based partner in the labor and employment practice of law firm Davis Wright Tremaine. “It’s a new technology and it’s evolving. Whenever you have a new technology, you do have growing pains and you do have these issues that come up,” he said.

Advanced technologies have gotten much attention recently, particularly as people inside and outside the business world consider the impact AI may have on jobs and livelihoods. At the same time, some well-intentioned efforts have generated media coverage for results that were diametrically opposed to what their developers set out to do.

In 2018, for example, Amazon abandoned an effort to build a machine-learning tool for recruiters after the system proved to be favoring men over women. According to Reuters, the tool downgraded resumes that included the word “women’s” as well as the graduates of two all-women’s colleges.

Also read: Is there room for an ethics code for tech companies?

Sources inside Amazon said the system, which had been under development since 2014, was meant to review resumes so recruiters could spend more time building candidate relationships and actually hiring people. It worked by comparing applicants against patterns found among resumes the company had received over a 10-year period. However, it didn’t account for the dominance of men in the technology workforce. As a result, the system taught itself that male candidates were preferable to female candidates.
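
To make the mechanism concrete, here is a toy sketch — not Amazon’s actual system, and using invented resumes — of how a scorer trained on historical hiring outcomes absorbs the bias baked into those outcomes: because past hires skewed male, words associated with women’s activities end up with negative weights.

```python
# A toy sketch (not Amazon's actual system) of how a resume scorer trained on
# historical hiring outcomes can absorb the bias in those outcomes.
# The "resumes" and hire/reject labels below are invented for illustration.

historical_resumes = [
    ("software engineer, men's chess club captain", 1),    # hired
    ("software engineer, rugby team", 1),                   # hired
    ("software engineer, women's chess club captain", 0),   # not hired
    ("software engineer, women's coding society", 0),       # not hired
]

# "Train": weight each word by how often it appears in hired vs. rejected resumes.
weights = {}
for text, hired in historical_resumes:
    for word in text.replace(",", "").split():
        weights[word] = weights.get(word, 0) + (1 if hired else -1)

def score(resume: str) -> int:
    """Sum the learned word weights for a new resume."""
    return sum(weights.get(w, 0) for w in resume.replace(",", "").split())

# Because past hires skewed male, the token "women's" picked up a negative
# weight, so an otherwise identical resume is ranked lower.
print(score("software engineer, chess club captain"))           # 0  (higher)
print(score("software engineer, women's chess club captain"))   # -2 (lower)
```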

Advanced technology “is at an awkward stage where it’s not really intelligent,” said William Tincup, president of the industry website RecruitingDaily.com. While he sees great potential for AI and other tools to streamline the work of recruiters and even address bias in the hiring process, he believes systems are limited in how much they can accomplish.

Why? In a word, people. “What are machines learning? They’re learning from humans,” Tincup said. Hiring managers can’t help but operate with a number of possible preconceptions in their minds, from unconscious bias about race or gender to a preference for the candidate they most recently interviewed or who seems the most like themselves. Such biases, Tincup observed, live on in the makeup of a company’s existing workforce. And that leads to the troubles Amazon faced, where the data set reflects decisions made in the past more than it positions a process to understand the needs of the future.

Technology Races Ahead

The situation is complicated by the fact that technology has outpaced legal and business practices. While they believe that will eventually change, analysts and technology vendors don’t see it changing quickly.

“Right now, technology’s moving super-fast,” said Ankit Somani, co-founder of the talent acquisition and management platform AllyO, headquartered in Palo Alto, California. “Generally, regulators and the folks who control compliance standards don’t move so quickly. But, honestly, we’re like three lawsuits away from somebody taking it very seriously.”

Also read: Artificial intelligence is a double-edged sword. Here’s how HR leaders can properly wield it

 “Therein lies a real big rub,” Tincup said of regulation’s lag behind talent acquisition and HR practices. Nearly all of the processes involved with turning candidates into employees touch some kind of employment law or EEOC-related issues, but “all of those rules are outdated,” he said. “We’ve been working outside of the rules for 15 or 20 years. I would argue that there isn’t a company in the United States that’s 100 percent compliant from sourcing to outplacement.”

Talent acquisition teams, and HR in general, understand that and are beginning to adapt, said Brian Delle Donne, president of Talent Tech Labs, an industry analyst and consulting firm based in New York. However, he believes determining exactly how and where compliance fits in with the use of new technologies has been complicated by the way “artificial intelligence” has been “grossly generalized” in industry conversations.

“Most of the time they’re talking about machine learning, or sometimes just automated workflow processing,” Delle Donne said. “When you get into true artificial intelligence, where the machine is making decisions, it’s a higher threshold that’s required for our concern about the accuracy of [its] recommendations and predictions.” The distinction between true AI and what might be called “advanced technology” is important, he believes, because people assume that the machine is prescient when it’s usually not. “In most cases, it will be quite a while until machines are actually making decisions on their own,” Delle Donne observed.

Even in its current state, the use of advanced technology has become widespread enough to raise concerns about whether it might inadvertently nudge an employer out of compliance. For example, AI-driven tools may use personal information in unplanned ways that a candidate hasn’t given permission for. That would raise privacy concerns. Or, tools might present results that, intentionally or not, run afoul of fair-employment legislation. “On both fronts, you’re talking about compliance statutory norms,” said Delle Donne.

AI’s Behavior

Such concerns, along with widespread speculation about AI’s impact, have made advanced technology “front of mind for many people,” said Bosley. In response, governments at all levels have begun generating “a patchwork” of laws that sometimes conflict with one another.

For example, Illinois’s Artificial Intelligence Video Interview Act went into effect Jan. 1, 2020. The law sets out transparency and consent requirements for video interviews, as well as limits on who can view the interviews and how long they can be stored. However, Bosley said, the law’s mandate to destroy videos within 30 days may conflict with the preservation requirements of other state and federal laws, including the Civil Rights Act of 1964 and the Americans with Disabilities Act.

Also read: How Will Staney continues to change the talent acquisition game

“It puts employers in a position where they’re really going to need to assess risk,” Bosley said. “They’re going to need to come up with creative solutions to try and work around some of this risk.” 

Not all employers may feel exposed in the near term, Tincup suggested. He estimates that each year only a handful of legal actions are taken because of a candidate’s unhappiness with the recruiting process. People practices, technology practices and civil and social discourse are “way ahead of employment law,” he explained. “So is this something that’s going to create an immense amount of risk? No.” Employers today, he believes, put themselves at more risk by hiring a salesperson with a history of sexual harassment. In that regard, “you could spend more money in risk mitigation … than in recruitment technology,” he said.

At the same time, an organization’s risk may be based on activities that aren’t related to recruiting or the workforce, Bosley points out. “This isn’t just a human resources issue anymore. It’s not only an employment law issue anymore. It’s much broader than that,” he said. “You have data protection, data compliance, privacy and the potential for disparate impact claims as opposed to disparate treatment claims.”

Bosley anticipates more claims will be filed that look into a database’s contents, what data’s being looked at, how it’s being processed and whether algorithms are static or refined over time. Essentially, these claims will examine how advanced technology is making its decisions. “It’s going to be something where human resources leaders are looking to involve others in the organization and make sure that they’re both issue-spotting and getting ahead of some of these compliance issues,” he said.

 Indeed, Somani believes this notion of “explainability” — laying out what a system does and how it’s doing it — will become more important in the realms of recruiting technology and compliance. “There should, in my mind, be more compliance standards around that,” he said.

Evolving Standards

Even at a basic level, compliance standards for using technology in recruiting “don’t exist,” Somani said. For example, does texting about a job opportunity constitute a form of marketing? Is such a text permissible if it’s personalized? Because the answer’s not clear, he believes many companies are putting stricter guidelines in place.

Somani also said legal departments are becoming more involved in the purchase and implementation of recruiting technology. For tools handling communications, such as those that facilitate SMS messaging between recruiters and candidates, they’re trying to anticipate issues by creating policies that cover not only privacy, but data collection and permissions. “It’s an explicit ask in almost every deal we go into: ‘If a consumer doesn’t want to interact with your system, how do you follow that?’ ” he said. When it comes to issues related to AI’s under-the-hood work, vendors focus on transparency and disclosure by presenting disclaimers on their product or within their privacy policies.  

For enterprises, compliance issues “can be a deal-breaker,” at least at the corporate level, said Megan Gimbar, the Holmdel, New Jersey-based product marketing manager for iCIMS Hiring Suite. While compliance and consistency are important components of her product, she said, talent acquisition teams often shy away from the topic.

In the past, employers tried to ensure compliance through training. Their approach, said Delle Donne, was to make hiring managers aware of interview questions that shouldn’t be asked (such as inquiring whether a woman intended to have children) or information that shouldn’t be considered (the candidate’s age or ZIP code). “That’s a fairly low bar,” he observed.

The bar began getting higher “once we started saying algorithms are going to make that determination for us,” Delle Donne continued. “Algorithms might actually do a better job, [or] may actually be set up in a way that they might do a better job, than humans do at avoiding compliance issues through bias.” However, he said, that requires planning and a focus on non-discrimination features when algorithms are designed.

Also read: The ethics of AI in the workplace

Compliance Further Afield

The compliance issues raised by using AI in recruiting aren’t limited to talent acquisition. For one thing, Somani notes, recruiters today leverage a variety of tools that were introduced into other functions.

Think of how candidate management systems and customer management systems align. When using those technologies, compliance may involve adapting the standards used by marketing or sales so they can be applied to talent acquisition and HR.

That road goes both ways. Even solutions designed for recruiters raise issues that aren’t unique to hiring, Delle Donne said. “As HR tries to digitize, there are many, many places where technology can streamline processes and save time and perhaps be more beneficial to the employee or the party,” he said. Many, if not all, of those will lead to some kind of compliance question. For example, a bot used in benefits administration may build a profile of confidential medical information. Or, a learning program might enter performance scores into an employee record without informing the employee. That could be a problem if those scores impact a person’s future promotions or career path.

As it digitizes, the tools implemented by HR “will bring in these technologies and there’s going to have to be some focus or some attention given to not inadvertently creating bias or discrimination, or revealing private information,” Delle Donne said. “If you take a step back, it just could be like whack-a-mole. I mean, ‘Hey, we see it over here in talent acquisition. Let’s go chase that down and… Oh, wait. We just saw this going on over there.’”


Posted on April 8, 2020 (updated June 29, 2023)

Maneuvering the complicated intersection of data privacy, health and technology


People share their experiences with depression on Twitter to show support for the mental health community. They join private Facebook groups to discuss similar health issues, without realizing that a “private” online group does not actually offer privacy protections. Companies encourage employees to be open about their health in an effort to create a “culture of health.” And employees join “HIPAA-compliant” wellness programs without realizing that the health data they log in various apps may not be protected by any law if the program is voluntary. 

When the Health Insurance Portability and Accountability Act was enacted in 1996, today’s vast digital space didn’t exist. Even if organizations comply with HIPAA, the Genetic Information Nondiscrimination Act and other laws that protect health-related data, that doesn’t necessarily mean the data is protected in many contexts. There are gaps that have yet to be legally addressed. Meanwhile, employees increasingly share health information on digital health apps or online.

Also read: Fingerprint scanners risky amid coronavirus pandemic — it’s a touchy subject

A vast amount of employee data is not legally protected. As collectors of employee data, employers should be aware of the health data privacy landscape and the concerns employees may have.

“As much as it pains me to say, [data privacy] is probably nobody’s top priority,” said data privacy attorney Joseph Jerome. “It only becomes their priority when something goes wrong or they get concerned or they hear something in the news.”

Employers in the U.S. and internationally have increasingly more data privacy regulations to pay attention to — as laws like the General Data Protection Regulation in the European Union and the California Consumer Privacy Act and Illinois Biometric Information Privacy Act in the U.S. move the data privacy legal environment forward. In this constantly changing world, there’s information that can help organizations navigate this complicated intersection more intelligently.

Also read: How much do you know about your health data privacy?

Privacy Law Limitations

There is a lack of understanding of what HIPAA protections apply where, when and to what data, Jerome said. At its core, HIPAA was enacted to facilitate the portability and interoperability of health care records, not for any greater data privacy reason. “We act like this is a health data privacy law, but no. It’s designed to govern data in hospital systems,” he said.

Employers want to collect ever more data about their employees, he said. They have the opportunity to do so through commercial apps that capture wellness and fitness data. “These are things that people perceive as health data, but they’re not covered by HIPAA, and they were never designed to be covered by HIPAA,” he said.

HIPAA — and therefore what data is considered health information — is limited to covered entities like hospital systems and doctors’ offices. For example, within a health system, a patient’s email address is considered health information under HIPAA, but outside the health system, an email address is not considered health information and does not get HIPAA protection.

HIPAA also doesn’t apply to anonymized data — the data that remains after personally identifiable information has been stripped from a data set, so that the people the data describes remain anonymous.

Further, anonymous data is fair game, legally. “There is no regulation of ‘anonymized’ data. It can be sold to anyone and used for any purpose. The theory is that once the data has been scrubbed, it cannot be used to identify an individual and is therefore safe for sale, analysis and use,” noted “Re-Identification of ‘Anonymized’ Data,” a 2017 Georgetown Law Technology Review article.

A concern here is that anonymous data can be easily re-identified, and it’s tough to hold bad actors accountable for doing so, Jerome said. Further, it’s hard to do anything about it once the data has been re-identified and made public. Unfortunately, there are realistically not enough enforcement resources, he added.

“That’s a real problem right now, not just in health care or employment context, but you’ve got this giant ecosystem where a lot of companies are sharing information and they’re all saying they’re good actors, they’re all saying they’re not re-identifying information, they’re all saying they’re not even using personal information,” he said. “But there’s data leakage all over the place. People are recombining profiles, and it’s very hard to attribute where the information originally came from.”

According to the Georgetown Law Technology Review article, the re-identification of anonymous data can lead to sensitive or embarrassing health information being linked to one’s employer, spouse or community. “Without regulation of re-identified anonymized data, employers, neighbors, and blackmailers have an unprecedented window into an individual’s most private information,” the article said. One of the privacy concerns some people have about their health data is that it could eventually be used against them and that they could suffer real-world implications like the loss of job opportunities, the denial of insurance or higher premiums for insurance. 
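
To see why re-identification can be straightforward, the sketch below links an invented “anonymized” health data set back to names using nothing more than ZIP code, birth date and sex. Every record, name and field here is hypothetical and included only to illustrate quasi-identifier linkage, not drawn from any real data set discussed in the article.

```python
# Hypothetical "anonymized" health records: names removed, but ZIP code,
# birth date and sex -- the classic quasi-identifiers -- are left intact.
anonymized_health_records = [
    {"zip": "52240", "dob": "1984-07-02", "sex": "F", "diagnosis": "depression"},
    {"zip": "60614", "dob": "1979-11-30", "sex": "M", "diagnosis": "type 2 diabetes"},
]

# A separate, identified data set (e.g., a voter file or marketing list)
# that happens to carry the same quasi-identifiers.
public_records = [
    {"name": "Jane Roe",   "zip": "52240", "dob": "1984-07-02", "sex": "F"},
    {"name": "John Smith", "zip": "60614", "dob": "1979-11-30", "sex": "M"},
]

def reidentify(health_rows, identified_rows):
    """Link the two data sets on (zip, dob, sex) and attach names to diagnoses."""
    index = {(r["zip"], r["dob"], r["sex"]): r["name"] for r in identified_rows}
    matches = []
    for row in health_rows:
        key = (row["zip"], row["dob"], row["sex"])
        if key in index:
            matches.append({"name": index[key], "diagnosis": row["diagnosis"]})
    return matches

print(reidentify(anonymized_health_records, public_records))
# [{'name': 'Jane Roe', 'diagnosis': 'depression'},
#  {'name': 'John Smith', 'diagnosis': 'type 2 diabetes'}]
```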

Also read: Do Employers Have a Duty to Protect Employees’ Personal Information?

Wellness Program Gaps 

The idea behind employee wellness programs is supposed to be a win-win, said Anya Prince, associate professor of law and member of the University of Iowa Genetics Cluster. Employees get healthier, and employers get lower health care costs and a more productive workforce. 

But wellness programs are often not effective at changing employee health, she said. 

“If the premise is we’re doing this to benefit employees [but] there’s not actually evidence that it’s benefiting employees, the question then becomes why are [wellness programs] continuing to happen?” she said. “The evidence shows that what they’re doing is shifting health care costs back on to employees in various ways. That’s where the concern comes in.” 

Digital health apps on employees’ phones play a part in many workplace wellness programs. But even though third-party health apps are common on people’s phones, the privacy landscape behind these apps is murky at best. 

Prince cited Lori Andrews, professor of law at Chicago-Kent College of Law and director of Illinois Tech’s Institute for Science, Law and Technology. Andrews has conducted work on the types of data that medical apps collect from users, including employees in workplace wellness programs.

“Some of the medical apps are just completely bogus and don’t give you anything helpful back,” Prince said about the general health data privacy environment. “But they are collecting data on you, not just health information but geolocation and other data that’s worth money.” 

Another trend in wellness programs is employers offering employees consumer-directed genetic tests to help them understand what medical issues they may be predisposed to and what preventative measures they can take to combat them. According to the Society for Human Resource Management, 18 percent of employers provided a health-related genetic testing benefit in 2018, up from 12 percent in 2016.

Many studies have shown that people are not aware of  the Genetic Information Nondiscrimination Act or what privacy protections they have through the law, Prince said. “GINA is quite protective in employment in the sense that employers are not allowed to use genetic information to discriminate, so they can’t make hiring, firing, promotion, wage, any decisions based on genetic information,” she said, adding that genetic information includes family medical history, genetic test results and more.

Still, she said, there are some exceptions with GINA, including private employers with fewer than 15 employees and any employee in a voluntary wellness program. 

There is currently a legal debate on whether wellness programs are voluntary or if employees feel coerced to join them, Prince said. Some wellness programs are participatory —  meaning that employees don’t need to hit a certain health outcome target to earn the incentive — but others are health contingent. Employees need to lose some amount of weight or accomplish another target measurement to get the financial benefits of the wellness program.

Currently, most of these programs are participatory, she said. But if programs that collect genetic information become health contingent, that could raise ethical issues and become more invasive.

“If you think of [Breast Cancer gene] testing, which is a predisposition to breast and ovarian cancer, one of the preventive measures right now is to prophylactically remove your breast and ovaries. My dystopian future is the employer saying, ‘Have you finished having kids yet? Get on that, so that you can remove your ovaries,’” she said.  

This discussion raises the question of who is ultimately the best actor to push people toward better behaviors and health outcomes, she said. Society has to ask if employment is the best place to do this.

“In a way the answer is yes because we’ve created a system where health insurance and employment are so intertwined, but maybe employment isn’t the right space to be encouraging people to make the right health choices,” she said. “Maybe that should be a public health system or your primary care physician or researchers.” 

The Pentagon has advised service members not to take 23andMe genetic tests, said Glenn Cohen, professor of law at Harvard Law School and faculty director of the Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics.

There’s a major national security reason for this, he said, but part of the reasoning also has to do with protecting service members’ privacy. The military is exempt from GINA, the law that prohibits genetic discrimination by employers.

Also read: The Ethics of Artificial Intelligence in the Workplace

Consent, Transparency and Communication 

Employers could communicate with employees better, Jerome said. Privacy is more than just legal compliance, which may amount to little more than a disclaimer in the company handbook or on employees’ computers informing them that “all this can be tracked and monitored.” Such notices set the expectation that employees should have no expectation of privacy in anything they do at work.

While most employers have done their legal duty, they’ve yet to have a conversation with employees about what they’re actually doing with this data, Jerome said. 

Also read: Are You Part of the Cybersecurity Solution … Or Part of the Problem?

“I get that those conversations can be difficult and uncomfortable and frankly might get employees riled up, but I think that’s probably a good thing in the end,” he said. 

Employers — who sit on large troves of employee health data — may have the legal right to share data, but that doesn’t mean employees and other parties won’t criticize them, said Cohen. “They have to be worried a little bit about how it’s going to play as a PR matter and, in an industry where they’re competing for talent, how employees feel about [it],” he said. 

When Ascension Health partnered with Google for the “Project Nightingale” initiative late last year — allowing the tech company access to the detailed personal health information of millions of Americans — it received a lot of backlash. It could be dangerous for an organization like Google, which already has so much of people’s personal data, to get access to people’s health records as well, critics argued. Supporters said it was perfectly legal.

“My recommendation in general is even if you legally have the right to share the data, you may want to think about creating some internal governance mechanisms that have employees involved in trying to decide what gets shared or not,” Cohen said. 

Practically, this could mean that the organization charters a committee that includes employers, employees and subject matter experts who can explain both the uses and the risks of adopting a certain solution, he said.

This could be valuable for employers because better decisions get made and it’s better for the employer’s reputation, he said. When people find out a company has sold its employees’ data, it could look bad if there hasn’t been employee input in the decision.

For most organizations dealing with health data and other personal data, their reputation is based on how they treat that data, said Ed Oleksiak, senior vice president at insurance brokerage Holmes Murphy. A data breach or misuse of data would be bad press, so the company would be incentivized to protect that data and ensure it’s used properly.

When there is a health data mishap, there are a couple ways that organizations can address that breach of trust, he said. Organizations can provide impacted employees some kind of identity theft protection that will help them mitigate any harm. Further, the company is required to address whatever has resulted in the breach and do whatever it can to make sure it can’t happen again in the future. 

“Whether it’s the employer’s health plan, a hospital system, or a technology provider, everybody’s reputation is contingent on successfully mitigating that,” Oleksiak said. “You just have to start over again, and try to fill that cup of trust back up.” 

Oleksiak also suggested that employers follow a key tenet of collecting and storing only the minimum necessary data. Even though people involved with employee health plans most likely want to use patient data for the right reasons, anyone who hacks into these systems can access everything, including data that never needed to be collected.

Ultimately, this is an issue of balance. According to “Re-Identification of ‘Anonymized’ Data,” the aforementioned Georgetown Law Technology Review article, “data utility and individual privacy are on opposite ends of the spectrum. The more scrubbed the data is, the less useful it is.”

Still, there are positive things companies can do with this data, Oleksiak said. No matter what privacy rules and regulations are put in place, a bad actor is going to find a way to do something that’s for their own benefit.

“Hopefully we write rules that go after people that abused their position or access to data, but still allow everybody else that’s doing it for the right reasons to get the job done,” he said.

Posted on June 27, 2019 (updated February 25, 2022)

Do Employers Have a Duty to Protect Employees’ Personal Information?


Employees trust their employers with a whole bunch of personal information: Social Security numbers, medical documents, insurance records, birth dates, criminal records, credit reports, family information, etc. And it’s not like employees have a choice over whether to disclose and entrust this information to their employer. These documents are all necessary if employees want to get hired, get paid, and obtain health insurance and other benefits. Thus, an employer’s personnel records are a treasure trove of PII (personally identifiable information — any data that could potentially identify a specific individual, distinguish one person from another, or de-anonymize otherwise anonymous data).

For this reason, cyber-criminals target myriad businesses in an attempt to steal (and then sell on the dark web) this data.

Also in Legal: Biometric Privacy Lawsuits Rising

If a company is hacked, and employees’ PII or other data is stolen, is their employer liable to its employees for any damages caused by the data breach?

I’ve covered this issue twice before (here and here), with different courts reaching opposite results (albeit the majority of them concluding that an employer can be held liable).

In AFGE v. OPM (In re United States OPM Data Sec. Breach Litig.), the D.C. Circuit Court of Appeals recently addressed a similar issue, and concluded that employee-victims have standing to sue their employer following a data breach from which their personal information and data are stolen. A “substantial risk of future identity theft” is sufficient harm to give rise to a lawsuit, and the employees’ “claimed data breach-related injuries are fairly traceable to [their employer’s] failure to secure its information systems.”

All of these cases are legally interesting, and, I submit, largely practically insignificant. Regardless of whether you, as an employer, have a legal duty to protect the personal information and data of your employees, you still have a significant financial and reputational incentive to take reasonable steps to maintain the privacy and security of the information.

Moreover, as data breaches continue to increase in quantity and quality, courts and legislatures will look for ways to shift the cost of harm to those who can both better afford it and better take measures to hedge against them. Thus, I predict that in five years or less we will have a legal consensus on liability.

The question, then, for you and your business to answer is what are you going to do about it now? The time to get your business’s cyber-house in order is now (actually, it was years ago, but let’s go with now if you’re late to the game). Don’t wait for a court to hold you liable to your employees (and others?) after a data breach.

Thus, what should you be doing?

  1. Implementing reasonable security measures, which includes encryption, firewalls, secure and updated passwords, and employee training on how to protect against data breaches (such as how not to fall victim to phishing attacks). A minimal encryption sketch follows this list.
  2. If (or more accurately when) you suffer a data breach, timely advising employees of the breach as required by all applicable state laws.
  3. Training employees on appropriate data security.
  4. Drafting policies that explain the scope of your duty as an organization to protect employee data.
  5. Maintaining an updated data breach response plan.
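
As a rough illustration of the encryption piece of item 1, here is a minimal sketch of encrypting a single piece of employee PII at rest. It uses Python’s third-party cryptography package; the field, the value and the idea of keeping the key in a separate secrets store are assumptions for illustration, not a prescription for any particular system.

```python
# Minimal illustration of encrypting employee PII at rest, using the
# third-party "cryptography" package (pip install cryptography).
# The record field and the separate key store are illustrative assumptions.
from cryptography.fernet import Fernet

# In practice the key would live in a secrets manager or HSM,
# never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# A hypothetical piece of employee PII.
ssn_plaintext = b"123-45-6789"

# Encrypt before writing to the personnel database...
ssn_ciphertext = fernet.encrypt(ssn_plaintext)

# ...and decrypt only when an authorized process needs the value back.
assert fernet.decrypt(ssn_ciphertext) == ssn_plaintext
print("stored value:", ssn_ciphertext[:16], "...")
```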

Remember, data breaches are not an if issue, but a when issue. Once you understand that you will suffer a breach, you should also understand the importance of making data security a priority in your organization. The average cost to a company of a data breach in 2018 was $3.9 million (and it is increasing annually). While I generally don’t work in the business of guarantees, I will guarantee that any expense you incur to mitigate the potential cost of a data breach is money well spent.