If you rely on social media to paint a full and complete picture of your job applicants and employees, you are going to be very disappointed.
According to a recent survey, 43 percent of workers use privacy settings to keep material hidden from employers, and 46 percent have searched for their own names and taken further measures to conceal their social media presence based on what they found.
What types of information are they hiding?
70 percent are shielding their personal lives.
56 percent their unprofessional behavior.
44 percent their political views.
50 percent have deleted entire profiles or old posts to protect their professional reputations.
What do these numbers tell us about social media background checks? That you cannot rely on them as your lone pre-hire check of employees.
Yes, there is lots of valuable information you can discover on social media about a prospective employee: how they present themselves; whether they post inappropriate photos, videos, or statements; whether they are sexist or racist; whether they are good communicators; and whether they show good judgment.
But if candidates are hiding this information behind privacy settings, or deleting it altogether, then relying on social media alone means you are missing most or all of the relevant information. If you want to check a candidate's background on social media, do it only as part of a more holistic screening process that includes a more traditional background check.
If you haven't started your own super-successful HR technology company, it's not too late.
Venture capitalists' love affair with HR tech firms is on track to break records as they dole out millions of dollars to entrepreneurs who promise to transform the human resources landscape. According to HRWins by LaRocque LLC, venture firms invested $1.741 billion in HR tech companies in the first quarter of 2019 and $1.448 billion in the second quarter.
The first quarter alone was "significantly more than any quarter in 2018, and $677 million more than we tracked in all of 2017," according to LaRocque. And Jason Corsello, founder and general partner of Acadian Ventures, an early-stage venture capital firm specializing in the future of work, predicts that HR tech deals will hit $5 billion in 2019.
The continued investment interest in this space makes sense. Despite years of VC investment into promising HR tech companies, there are still a lot of problems that current vendors haven't solved, like:
How can we recruit strong candidates when unemployment rates are so low?
Why does our candidate experience still lag despite our cool new interactive recruiting page, YouTube recruiting channel and automated email response tools?
Can I hire freelancers instead of full-time staff, and where do I find them?
How are we supposed to reskill an entire workforce when we don't know what skills they are going to need?
These are big, difficult questions, and VCs are eager to support entrepreneurs who claim to have the answers, particularly because the market is strong, said David Mallon, chief analyst at Bersin, Deloitte Consulting LLP. "Companies have set aside healthy budgets for the right solution and VCs sense that there is money to be spent."
TA and Training Lead the Pack
This year's deals are tipping heavily toward recruiting technology firms. "Talent acquisition is a massive problem in organizations today," said Corsello.
This year alone, Jobcase, a social media recruiting platform for blue-collar workers, secured $100 million; Built In, a Chicago-based tech recruiting and media platform, received $22 million; and AllyO, an artificial intelligence conversational recruitment platform, received $45 million.
The current spending spree follows at least a half-decade of heady HR tech investment. Funding and deal activity hit new highs in 2015, with firms landing $2.4 billion across 383 deals, following similarly high rates of investment in 2013 and 2014.
This year, start-ups offering solutions to find and manage gig workers are also gaining a lot of attention because "no one has figured out how to manage the entire workforce yet," Corsello said. He pointed to Jobble, Sense, and Instawork, all gig recruiting platforms that secured healthy VC deals in the past few months. "It's a huge area of interest."
Skill development is also a hot area as companies attempt to prepare for the "future of work." The biggest deal of 2019 is Coursera, the online learning platform that offers degrees and certificates, which secured $103 million in April to push its value past $1 billion. Other learning and development companies are drawing attention and investment, though this space has been less innovative, said Mallon. "We still need a philosophical shift in how we think about developing people before the technology can catch up."
That's not stopping VCs from investing in this space, though a lot of these deals still feel like investors throwing money at the problem to see what sticks. Mallon points to past investments in companies offering MOOCs (massive open online courses) and microlearning formats. "It wasn't because they were so effective as learning tools," he said. It was about trying new solutions.
And even the biggest deals shouldn't be seen as proof that this technology will be disruptive. "A lot of companies are still only tackling the easy stuff," said Chris Havrilla, vice president of HR technology and solution provider strategy at Bersin, Deloitte Consulting LLP. Whether it is high-volume recruiting platforms or chunky content training apps, these tools may solve problems, but they aren't reinventing the workflow, "at least not yet," she said.
Mallon believes innovations will come sooner in talent acquisition than in learning and development, and he expects VCs to continue investing across this space.
While not all of these VC investments will pay off, HR leaders shouldn't be afraid to experiment, added Corsello. He suggests earmarking 20 percent of their budget to pilot new solutions. "You can test software at a relatively low level of risk to figure out what works for you."
HR tech is in dire need of innovation, which is driving venture capitalists to pour big money into the space. But pure venture capital firms like Acadian Ventures and Andreessen Horowitz aren't the only ones making these deals. A number of enterprise software firms, including Salesforce, Cornerstone OnDemand, Workday and Randstad, are getting into the VC game, investing millions of dollars into promising start-ups to bolster innovation.
"Most are using it as a hedge strategy," said Corsello. While these are still venture capital deals and not acquisitions, companies may invest in two or three start-ups with similar solutions to see which ones they may eventually want to acquire. It's a third alternative to the build-or-buy model for innovation, he said. "They are taking an 'invest, watch and acquire later' approach."
While some entrepreneurs may balk at investment from a software company, viewing it as an early-stage acquisition, it can be a benefit. Corsello pointed to Workday's recent investment in talent acquisition firm Beamery, which generated a lot of speculation that it was a precursor to an acquisition. "Some companies don't want to be aligned with a single vendor, but that deal gave Beamery a lot of exposure to Workday's customers," he said.
These deals are also good news for companies seeking new innovations to address their talent acquisition, training and employee engagement issues. "All of this interest indicates that there is a lot of innovation happening," Corsello said. "HR executives should be paying attention."
Artificial intelligence is a branch of computer science dealing with the simulation of intelligent behavior in computers or the capability of a machine to imitate intelligent human behavior.
Despite its nascent nature, the ubiquity of AI applications is already transforming everyday life for the better.
Whether discussing smart assistants like Apple's Siri or Amazon's Alexa, applications for better customer service or the ability to utilize big data insights to streamline and enhance operations, AI is quickly becoming an essential tool of modern life and business.
In fact, according to statistics from Adobe, only 15 percent of enterprises are using AI as of today, but 31 percent are expected to add it over the coming 12 months, and the share of jobs requiring AI has increased by 450 percent since 2013.
Leveraging clues from their environment, artificially intelligent systems are programmed by humans to solve problems, assess risks, make predictions and take actions based on input data.
Cementing the "intelligent" aspect of AI, advances in technology have led to the development of machine learning, which makes predictions or decisions without being explicitly programmed to perform the task. With machine learning, algorithms and statistical models allow systems to "learn" from data and make decisions, relying on patterns and inference instead of specific instructions.
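To make that distinction concrete, here is a minimal sketch in Python, using the scikit-learn library with invented toy data, of a model inferring a rule from labeled examples rather than being handed one:

```python
# A minimal sketch of machine learning: the model is never given
# explicit rules; it infers patterns from labeled examples.
# The tiny dataset below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [hours_in_service, error_rate] for a hypothetical machine part
X = [[10, 0.02], [12, 0.03], [80, 0.20], [95, 0.25], [15, 0.01], [90, 0.30]]
y = [0, 0, 1, 1, 0, 1]  # 0 = healthy, 1 = likely to fail

model = DecisionTreeClassifier().fit(X, y)

# The model now predicts on unseen input using patterns it inferred,
# not rules anyone wrote by hand.
print(model.predict([[85, 0.22]]))  # -> [1]: flagged as likely to fail
```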
Unfortunately, the possibility of creating machines that can think raises myriad ethical issues. From pre-existing biases used to train AI to social manipulation via newsfeed algorithms and privacy invasions via facial recognition, ethical issues are cropping up as AI continues to expand in importance and utilization. This notion highlights the need for legitimate conversation surrounding how we can responsibly build and adopt these technologies.
How Do We Keep AI-Generated Data Safe, Private and Secure?
As an increasing number of AI-enabled devices are developed and utilized by consumers and enterprises around the globe, the need to keep those devices secure has never been more important. AI's increasing capabilities and utilization dramatically increase the opportunity for nefarious uses. Consider the dangerous potential of autonomous vehicles and weapons like armed drones falling under the control of bad actors.
As a result of this peril, it has become crucial that IT departments, consumers, business leaders and the government fully understand cybercriminal strategies that could lead to an AI-driven threat environment. If they don't, maintaining the security of these traditionally insecure devices and protecting an organization's digital transformation becomes a nearly impossible endeavor.
How can we ensure safety for a technology that is designed to learn how to modify its own behavior? Developers canât always determine how or why AI systems take various actions, and this will likely only grow more difficult as AI consumes more data and grows exponentially more complex.
The latest facial recognition applications can detect faces in a crowd with amazing accuracy. As such, applications for criminal identification and for determining the identity of missing people are growing in popularity. But these solutions also invoke a lot of criticism regarding legality and ethics.
People shouldn't have to worry that law enforcement officials are going to improperly investigate or arrest them because a poorly designed computer system misidentified them. Unfortunately, this is becoming a reality, and the consequences of inaccurate facial recognition surveillance could turn deadly.
According to a 2017 blog post, Amazon's facial recognition system, Rekognition, used a confidence threshold set to 85 percent; Amazon raised that recommendation to a 99 percent confidence threshold not long after. But studies from the ACLU and MIT revealed that Rekognition had significantly higher error rates in determining demographic traits of certain members of the population than Amazon purported.
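The effect of that threshold is simple to see in code. In this hypothetical sketch (the match scores are invented, and this is not Amazon's actual API), raising the threshold from 85 to 99 percent filters out the weaker, more error-prone matches:

```python
# Hypothetical face-match results: (candidate_id, confidence score).
# Scores are invented for illustration; this is not the Rekognition API.
matches = [("person_a", 0.99), ("person_b", 0.91), ("person_c", 0.86)]

def filter_matches(matches, threshold):
    """Keep only matches at or above the confidence threshold."""
    return [(who, conf) for who, conf in matches if conf >= threshold]

print(filter_matches(matches, 0.85))  # all three pass: weak matches slip through
print(filter_matches(matches, 0.99))  # only the strongest match survives
```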
Beyond accuracy (and the lack thereof in many cases), the other significant issue facing the technology is abuse of its implementation: the "big brother" aspect.
To address privacy concerns, the U.S. Senate is reviewing the Commercial Facial Recognition Privacy Act, which seeks to implement legal changes that require companies to inform users before facial recognition data is acquired. This is in addition to the Biometric Information Privacy Act of Illinois, which is not specifically targeted at facial recognition but requires organizations to obtain consent to acquire biometric information; that consent cannot be given by default, but must result from affirmative action.
As San Francisco works to ban use of the technology by local law enforcement, the divisive debate over the use (or potential misuse) of facial recognition rages on. The public needs to consider whether the use of facial recognition is about safety, surveillance and convenience or if it's simply a way for advertisers or the government to track us. What is the government and private sector's responsibility in using facial recognition, and when is the line crossed?
How Should AI Be Used to Monitor the Public Activity of Citizens?
The future of personalized marketing and advertising is already here. AI can be combined with previous purchase behavior to tailor experiences for consumers and help them find what they are looking for faster. But don't forget that AI systems are created by humans, who can be biased and judgmental. Because this application of AI is more personalized and connected to an individual's identity, it risks exposing information and preferences that a buyer would prefer to keep private, evoking sentiments of privacy invasion. Additionally, this solution requires storing an incredible amount of data, which may not be feasible or ethical.
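As a rough illustration of the mechanics (not any retailer's actual system), a toy recommender can tailor suggestions from nothing more than purchase co-occurrence; every data point below is invented:

```python
# Toy "customers who bought X also bought Y" recommender built from
# purchase co-occurrence. All order data here is invented.
from collections import Counter
from itertools import permutations

orders = [
    {"diapers", "wipes", "formula"},
    {"diapers", "wipes"},
    {"coffee", "filters"},
    {"diapers", "formula"},
]

# Count how often each ordered pair of items appears in the same order.
co_counts = Counter()
for order in orders:
    for a, b in permutations(order, 2):
        co_counts[(a, b)] += 1

def recommend(item, top_n=2):
    """Suggest items most often bought alongside `item`."""
    scored = [(b, n) for (a, b), n in co_counts.items() if a == item]
    return sorted(scored, key=lambda x: -x[1])[:top_n]

print(recommend("diapers"))  # e.g. [('wipes', 2), ('formula', 2)]
```

The privacy problem above follows directly: the more purchase history a system stores, the more revealing its inferences become.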
Consider the notion that companies may be misleading you into giving away rights to your data. As a result, these organizations can now detect and target the most depressed, lonely or outraged people in society. Consider the instance when Target determined that a teen girl was pregnant and started to send coupons for baby items according to her pregnancy score. Her unsuspecting father was none too pleased about his high-schooler receiving ads that, in his mind, encouraged his daughter to get pregnant, and he let the retail giant know about it.
Unfortunately, not only are businesses gathering eye-opening amounts of information; many are being racially, economically and socially selective with the data being collected. And by allowing discriminatory ads to slip through the net, companies are opening a Pandora's box of ethical issues.
How Far Will AI Go to Improve Customer Service?
Today, AI is often employed to complement the role of human employees, freeing them up to complete the most interesting and useful tasks. Rather than spending their days on time-consuming, arduous jobs, employees can now focus on harnessing the speed, reach and efficiency of AI to work even more intelligently. AI systems can remove a significant amount of the friction that arises from interactions between customers and employees.
Thinking back to the advent of Google's advertising business model, and then the launch of Amazon's product recommendation engine and Netflix's ubiquitous "suggested for you" algorithm, consumers face a dizzying number of targeted offers. Sometimes this can be really convenient, as when you notice that your favorite author has come out with a new book or the next season of a popular show has launched. Other times it comes across as incredibly invasive and seemingly in violation of basic privacy rights.
As AI becomes more prominent across the enterprise, its application raises issues that society has never been forced to consider or manage before. While the application of AI delivers a lot of good, it can also be used to harm people in various ways, and the best way to combat ethical issues is to be very transparent. Consequently, we, as technology developers and manufacturers, marketers and people in the tech space, have a social and ethical responsibility to be open to scrutiny and consider the ethics of artificial intelligence, working to hinder the misuse and potential negative effects of these new AI technologies.
Rob Carpenter is the founder and CEO of Valyant AI, a Colorado-based artificial intelligence company focused on customer service in the quick-serve restaurant industry.
Companies using artificial intelligence to assess video interviews should be aware of a new law on the books.
In May the Illinois Legislature unanimously passed the Artificial Intelligence Video Interview Act, which requires employers to notify candidates that AI will be used to assess their interviews, to be able to explain what elements the AI will look for, and to secure candidates' consent before doing so.
Those that don't could face future litigation.
The legislation, which is expected to be signed by Gov. J.B. Pritzker this summer, addresses the risk of hidden biases, explained Mark Girouard, a labor and employment attorney for Nilan Johnson Lewis in Minneapolis. "As with any use of AI in recruiting, this law comes from concerns about how observations in the interview correlate to business value."
AI assessments of a video interview use machine-learning algorithms that are taught what to look for by studying existing data sets and finding correlations. For example, it might determine that candidates who use certain phrases, or speak at a certain speed, have the right attributes to do well in a role, based on data captured about previous high performers.
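A heavily simplified sketch of that idea, with invented features and data rather than any vendor's real model, might look like this:

```python
# Hypothetical sketch: score candidates by how closely features measured
# in a video interview match those of past high performers. The features,
# data and scoring are invented; no vendor's real model is this simple.
past_high_performers = [
    {"speech_rate": 150, "key_phrases": 5},
    {"speech_rate": 145, "key_phrases": 6},
]

# "Training" here is just averaging the high performers' feature values.
profile = {
    k: sum(p[k] for p in past_high_performers) / len(past_high_performers)
    for k in past_high_performers[0]
}

def similarity_score(candidate):
    """Higher (closer to zero) when the candidate sits near the profile."""
    return -sum(abs(candidate[k] - profile[k]) for k in profile)

print(similarity_score({"speech_rate": 148, "key_phrases": 5}))  # near the profile
print(similarity_score({"speech_rate": 110, "key_phrases": 1}))  # far from it
```

Note that the model only ever learns what past high performers looked like, which is exactly why biased training data is dangerous, as the next section explains.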
Replicating Bias
This is a valuable and efficient way to prescreen candidates, and it can potentially eliminate human bias from the process. However, if the data sets the algorithm learns from are inherently biased, the algorithm can adopt those biases, perpetuating the problem, Girouard says. For example, it might identify certain word choices, facial expressions or even skin tone as a consistent theme among high performers, even though those features don't align with performance.
"If algorithms are trained correctly, they shouldn't replicate bias," Girouard says. "But if they aren't, they can amplify disadvantage."
Kevin Parker, CEO of Hirevue, a video interviewing software company that offers AI-driven assessment services, couldn't agree more.
"We are in full support of this bill," said Parker, who was invited by lawmakers to provide feedback on its content. He sees it as another way to address privacy and fairness in the recruiting process, and to set quality standards for the entire industry.
Hirevue addresses concerns about bias by including organizational psychologists on the teams that work with customers to first identify interview questions that will uncover the right criteria for success (empathy, problem solving, sociability), then to test those questions against a broad set of data to ensure they have no adverse impact.
Sometimes a problem will emerge, he noted. For example, when companies train algorithms using performance data from a predominantly middle-aged white male employee population, certain factors can introduce bias.
The testing process used to vet the interview questions can identify these biases; the team will then either eliminate the question or reduce the weight of factors associated with those measures. "In this way we can neutralize biases before a single candidate is interviewed."
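One widely used test of this kind is the EEOC's "four-fifths rule," which flags a question when any group's selection rate falls below 80 percent of the highest group's rate. A minimal sketch, with invented pass rates rather than Hirevue's actual process:

```python
# Minimal adverse-impact check using the EEOC "four-fifths" rule:
# flag a question if any group's pass rate is below 80% of the
# highest group's rate. Pass rates below are invented for illustration.
def adverse_impact(pass_rates, threshold=0.8):
    """pass_rates: {group_name: selection_rate}. Returns flagged groups."""
    top = max(pass_rates.values())
    return {g: r / top for g, r in pass_rates.items() if r / top < threshold}

question_pass_rates = {"group_a": 0.60, "group_b": 0.55, "group_c": 0.30}
print(adverse_impact(question_pass_rates))
# {'group_c': 0.5}: this question would be cut or down-weighted
```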
A Flood of Legislation
While this law has only been introduced in Illinois, it is likely the first of many such laws as concerns about AI's impact on recruiting bias grow, Girouard warned. "It is the first drip of what is likely to be a flood of legislation."
To protect themselves against later litigation, employers should educate themselves on what the law requires and on how they are addressing the risk of AI-driven bias in their current operations. He noted that most employers today can't explain how the AI assessment works, what criteria it looks for or how those criteria align with performance success.
That's a problem, he said. The law doesn't just require employers to inform candidates about the technology; they also must be able to describe how the AI tool will interpret the interview and how it will be used in the selection process. "If you can't explain it, it will be very hard for you to defend it in court."
Vault Platform has developed an app that uses blockchain technology to allow employees to document and report workplace sexual harassment on their smartphones.
"Interesting," you say, "but what's blockchain technology?"
Great question. I asked my partner, David Croft, who chairs Meyers Roman's Blockchain & Cryptocurrency practice group. His answer: "Blockchains are decentralized databases, maintained by a distributed network of computers that rely on network effects and economic incentives to secure the network."
In other words, blockchains are blocks of data secured across a decentralized network of digital devices, where the keys to unlock each block depend on every other block in the chain. Or, described another way (per Blockgeeks):
A blockchain is a growing list of records, called blocks, which are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data. By design, a blockchain is resistant to modification of the data. It is "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way." …
A blockchain is, in the simplest of terms, a time-stamped series of immutable records of data that is managed by a cluster of computers not owned by any single entity. Each of these blocks of data (i.e., the "block") is secured and bound to the others using cryptographic principles (i.e., the "chain").
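Those definitions translate almost directly into code. Here is a toy blockchain in Python, just hash-linked records, that shows why tampering with any one block is immediately detectable. (A teaching sketch only; it is not how Vault Platform, discussed below, actually implements its product.)

```python
# A toy blockchain: each block stores the hash of the previous block,
# so altering any record invalidates every block that follows it.
# This is a teaching sketch, not any production system.
import hashlib, json, time

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    block = {"time": time.time(), "data": data,
             "prev_hash": block_hash(chain[-1]) if chain else "0" * 64}
    chain.append(block)

def is_valid(chain):
    """Verify every block still points at the true hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "report filed")
add_block(chain, "evidence attached")
print(is_valid(chain))         # True
chain[0]["data"] = "tampered"  # try to rewrite history...
print(is_valid(chain))         # False: the tampering is detectable
```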
Which brings us back to Vault Platform's sexual harassment documentation and reporting app.
The app uses blockchain technology to provide a safe space or a "vault" allowing workers to write reports of harassment and store any evidence, says Neta Meidav, CEO of Vault Platform. The vault itself is private, she says, but at any time workers can use the app to send that information directly to HR. …
If workers decide to report harassment directly to their HR department, they have two options: they can elect to report individually, or they can choose to "go together," Meidav says. With the go-together option, the platform will search for other complaints about the same individual. If others exist, then the reports will all be sent to HR together. If not, then the report will be held until another employee reports that person. …
"The technology will identify if there have been past or present complaints about this person as well," she says. "Your claim will go to HR with other people who have reported in the past."
Blockchain has the potential to transform human resources management. It's being used in hiring and recruiting, paying employees and contractors, tracking time and attendance, and verifying backgrounds (among other uses).
This post is by no means an endorsement of Vault Platform. I've never used it and don't know of any company that has; everything I know about it comes from reading its website and the few articles about it I found on the internet. That said, it is illustrative of how blockchain may, in the near future, disrupt HR.
If you are not at least investigating how blockchain technology can help your organization take its HR management to the next level and into the future, you are doing your business a disservice. Thankfully, I know a few attorneys who are at the ready to help.
If you have more than a handful of employees, chances are they are using some kind of internal communication platform.
Maybe they are among the 10 million people who use Slack every day, or maybe you've deployed Microsoft Teams, Yammer, Workplace by Facebook, or some other internal chat tool.
The key is, your employees have a place to collaborate, plan projects, brainstorm and share ideas. But are you sure that is all they are doing?
If a company has a communication culture where sexist jokes are casually exchanged, or employees think it's OK to share client information via chat, it's just a matter of time before a crisis occurs. With that kind of risk simmering in the background, companies can't just assume employees are following all the data-privacy rules and social protocols when using these internal platforms.
Unless HR is paying attention, these seemingly valuable collaboration platforms can quickly become problematic, said Jeff Schumann, CEO of Aware, a provider of monitoring software for collaboration platforms.
"A large company might have thousands of different public chat groups going at any given time," he said. Thousands more employees will be exchanging private messages with other individuals or small groups. "It's important to know what they are saying."
Chances are employees are sharing information or communicating in a way that HR should be worried about. Columbus, Ohio-based Aware's "Human Behavior Risk Analysis" report found that 1 in 50 private messages on these platforms contains sensitive information, including passwords and client data, and 1 in 90 is "negative in nature." The report also found that 1 in every 250 public messages (those shared with a large group) contains confidential information.
The challenge is how to monitor these conversations and respond without scaring people away. Smaller companies can mitigate these risks through human monitoring, assigning an HR person or team leader to keep track of the conversations and to address any issues that arise. But in big companies such oversight is impossible.
Instead, many firms are utilizing monitoring software with artificial intelligence and natural language processing to constantly read messages and alert HR if a problem arises. These platforms can often be customized to look for certain types of information, or for conversations that might indicate a regulatory risk (sharing client data), suggest cultural concerns, or constitute harassment.
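At its simplest, that kind of monitoring is pattern matching against a watch list. The sketch below uses invented patterns and keywords; commercial tools rely on trained language models, not bare regexes:

```python
# Minimal sketch of message monitoring: flag chats that look like they
# contain credentials, client data or abusive language. The patterns and
# keywords are invented examples; real tools use trained NLP models.
import re

RISK_PATTERNS = {
    "possible password": re.compile(r"password\s*[:=]\s*\S+", re.I),
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
WATCH_WORDS = {"idiot", "stupid"}  # stand-ins for a harassment lexicon

def flag_message(text):
    """Return a list of reasons this message should be routed to HR."""
    reasons = [label for label, rx in RISK_PATTERNS.items() if rx.search(text)]
    reasons += [f"abusive term: {w}" for w in WATCH_WORDS if w in text.lower()]
    return reasons

print(flag_message("the db password: hunter2"))  # ['possible password']
print(flag_message("don't be an idiot, Dave"))   # ['abusive term: idiot']
```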
Taking a proactive approach gives companies the information they need to prevent data breaches and to respond to bullying, racism or other negative exchanges, said Linda Pophal, founder of Strategic Communications, an employee communications consulting firm.
"If it's a small issue, managers can address it privately," Pophal said. But if the exchange represents a bigger systemic problem or puts the company at risk, HR should be ready to step in. In these cases, a response may involve deleting the post, reprimanding the people involved and sending out a companywide reminder about appropriate use of these chat tools.
Pophal also urged HR leaders to post a follow-up message about how the situation was resolved. "You can't just take something down and assume no one will notice," she said. "Use these situations as an opportunity to communicate what's happened, and to change the direction of the conversation."
Pulse of the Workforce
She noted that monitoring isn't only useful for uncovering communication mistakes. HR leaders can also use it to gauge employee sentiment. "If something is going on at the company, people are talking about it," Pophal said. Monitoring these platforms lets you know what they are saying. Maybe they are mad about hikes in health insurance costs or confused about the new paid-time-off program. "HR can track these conversations and respond when necessary."
They can also see when people are excited about a new program, identify the communication influencers and spot who is opting out of the conversation, added Laura Hamill, chief people officer of Limeade, an employee experience software company. Hamill is also chief science officer of the Limeade Institute, which researches employee well-being, engagement and other workplace issues. "Monitoring gives you a sense of whether people feel engaged," she said.
These platforms provide employees with a virtual community that becomes inherent to the workplace culture. "Monitoring won't solve your communication problems," she said. But when HR pays attention to how people communicate, and sets the tone for appropriate behavior, it will ensure that everyone feels safe, included, and connected.
"It's 2019. All of our employees have been on Facebook for years. Many are also on Twitter, and Instagram, and … We don't need to do any social media training."
If you've had these thoughts or internal conversations, allow me to offer Exhibit 1 as to why you are wrong.
Texas district votes to fire teacher who tried to report undocumented students to Trump on Twitter.
A Texas school board unanimously voted to fire a teacher who tried to report undocumented students in her school district to President Donald Trump through a series of public tweets that she thought were private messages to the president.
If you're keeping score at home, the employee believed that her very public tweets were actually a private conversation between her and President Trump.
I promise that you almost certainly have at least one employee who thinks that their social media posts are private.
Unless you want to be in the position of having to fire that employee at some point in the future after he or she screws up by posting something offensive online (and he or she will screw up and post something offensive), do yourself a favor and schedule some social media training for your employees.
I might even know someone who can do it for you (nudge nudge, wink wink).
As the years go on, so too does the list of things to which people become addicted. Emerging front and center as a relatively new but common modern addiction, and one to which employers are having difficulty responding, is the concept of a digital addiction.
A digital addiction is more than a mindless but incessant checking of one's phone, more than browsing Facebook while taking a break from company-focused work. It is a complete disruption to and dysregulation of the daily life of an individual, due to compulsions to engage in the addictive and cyclical behaviors.
Digital Addictions and Treatment
Like other addictions, a digital addiction essentially renders an "addict" unable to perform a major life activity, such as sleeping, eating or working. As with other addictions, a digital addiction often arises out of feelings of discontent, stress, pressure, anxiety, depression or another underlying mental health condition. Although the behaviors themselves (use of electronic devices) may seem more benign than drugs, alcohol or sex, the personal impact is no less severe.
And perhaps even more concerning is the fact that digital addictions can be hard to spot and even harder to stop. We live in a day and age that virtually necessitates constant and unwavering digital and electronic connection. Behaviors that may be dangerous for a minority of the population with a digital addiction are entirely socially acceptable for the majority of individuals, rendering the line between an addiction and a habit blurrier than ever.
As the prevalence of digital addictions rises, so too does an understanding of the disorder and its treatment. Although this addiction is not yet recognized in the Diagnostic and Statistical Manual of Mental Disorders, or the DSM-5, treatment programs are seeing a growing need for programs specifically tailored to digital and gaming addictions. Additionally, organizations worldwide have begun conducting investigations and research into the impact of a digital addiction on both quality of life and productivity.
What Does This Mean for Employers?
In recent years, employers have come to understand their obligations related to mental health issues and disabilities: employees are to be granted reasonable accommodations for mental health disorders the same as they would be for a physical disorder or illness. This includes, when applicable, leave to attend treatment on an inpatient, partial hospitalization, intensive outpatient or outpatient basis under federal laws like the Family and Medical Leave Act or Americans with Disabilities Act, as well as state laws like the California Family Rights Act and California's Fair Employment and Housing Act. What, then, is an employer's obligation if an employee exhibits a digital addiction?
It is prudent to accommodate an individual with a digital addiction the same way you would accommodate any other individual: engage in the interactive process and review and discuss any restrictions, limitations or accommodations that may be needed. While there may be concerns regarding an employee's ability to return to work in the digital age after receiving treatment for a directly related addiction, this concern cannot be used as a basis to engage in an adverse action against an employee.
This remains the case even if the disorder is not officially "diagnosable." In other words, an employer must take a digital addiction seriously, even if it does not understand the addiction or personally believe the addiction is legitimate.
Where Do We Go From Here?
For now, there are several best practices employers can use concerning digital addictions. An up-to-date, compliant handbook with policies addressing leaves and accommodations goes a long way. A handbook creates the foundation for your policies and procedures. If your handbook is wrong, or if you do not have a handbook at all, your internal policies and procedures are much more likely to be problematic and subject to tougher scrutiny.
Your handbook also needs to be acknowledged by your employees. You can use an employee's acknowledgment to show they were well aware that you were more than willing to reasonably accommodate them and welcomed any and all accommodation requests.
Documentation. Document notice of an employee's alleged disability; meetings and communications discussing the alleged disability; and requested, offered or denied accommodations. Without documentation of this interactive process, it may as well have never happened.
Train your managers and supervisors. They can make or break your defense. They typically receive notice of an alleged disability or requested accommodation first. If they fail to take this seriously and begin the interactive process, your defense can be severely undermined. They need to know what constitutes "notice," that the company has interactive-process obligations and how to handle accommodation requests.
Do not be too quick in denying accommodations. The law requires that you participate in a "good faith" interactive process, which means considering each and every possible reasonable accommodation in "good faith." Document any legitimate reasons why an accommodation may not be "reasonable," but understand that not everything is "unreasonable." While employers do not have to provide accommodations that are unduly burdensome, "undue burden" is an extremely tough standard to meet and is looked at primarily in financial terms by courts.
Finally, stay up-to-date on changes in the law concerning digital addictions. A critical part of avoiding future claims is being aware of your ever-changing legal obligations.
Small and midsized businesses may be investing in more HR technology, but they aren't making good use of it, and that's a shame.
A new survey from HRIS provider BerniePortal found that while 64 percent of small and midsized businesses use HR software, few are using technology to manage the full scope of HR. "SMBs are definitely familiar with HR software, but they are not using it," said Alex Tolbert, founder and CEO of BerniePortal.
Tolbert attributes the lag in uptake to the fact that HR leaders in smaller companies are time-strapped and overworked. More than half of the companies surveyed have just one HR person on staff, and many of those people report that HR is not their sole responsibility.
This creates a Catch-22 for tech adoption. Sole HR leaders know they can benefit from automation delivered through HR software, but they don't have the time, budget or expertise to choose products, vet vendors and deploy new applications. "The survey tells us that HR administrators are time-challenged, and that they recognize the opportunity to streamline their workload through automation," Tolbert said. They just need to find the time and resources to leverage these tools.
SMBs Spend Big
This transition does appear to be occurring. Sierra-Cedar's 2018-19 HR systems report found that the fastest growing segment of new HR technology buyers is small businesses, with 38 percent reporting plans to spend more on HR tech in the next three years.
"By the time a business reaches 20 to 50 employees, they are starting to see the value of core HR technology," said George LaRocque, founder and principal HCM market analyst for LaRocque LLC in New York. His research found small companies use an average of seven to eight HR-related apps at this point in their growth cycle. "It's not hard to get to that point even in a small firm."
Usually they start with payroll, though demand for talent is causing a shift toward talent management systems. The Sierra-Cedar survey found small businesses were more likely to increase spending in talent management applications than any other category.
"Across industries, everyone has a talent problem," LaRocque said. "They are competing with each other for a limited talent pool, and they have to get creative in the way they source." That is spurring them to adopt applicant tracking systems and recruiting apps, as well as in-house tools to engage workers and manage succession planning faster than they might have in the past. "HR is being pushed to find more innovative ways to address the talent issue," LaRocque said. "It is driving the adoption of more HR applications in small businesses."
But even if small companies are eager to adopt new tech, they are cautious about where to spend money and how to generate the most value from limited budgets.
LaRocque encouraged very small companies just beginning the HR software journey to start with core HR solutions. "You want to get payroll, benefits, time and attendance, and paid time off in order, and there are a lot of platforms designed to help small companies do all that," he said. "Then you can start looking at purpose-built solutions to meet your specific workforce needs."
The HR problems a company faces will determine the kinds of tools it should deploy, but it shouldn't delay. "If you think you don't have time for technology, you've got your head in the sand," he said. The time savings that HR leaders achieve by automating laborious HR tasks make these tools immediately worth the investment, especially for small companies with understaffed HR teams.
Law.com recently pronounced, "The Emojis are Coming!"
That article got me thinking: are they coming to workplace litigation, too? After all, emojis are a form of communication, and work is all about communication. Which would suggest that we will start seeing them in harassment and discrimination cases.
According to Bloomberg Law, mentions of emojis in federal discrimination lawsuits doubled from 2016 to 2017. Let's not get crazy; the doubling went from six cases to 12 cases. But a trend is a trend.
While harassment cases dominate these filings, it's not just employees who are using emojis to establish a hostile work environment. Employers are pointing to employees' own use of emojis in response to alleged acts of harassment (such as smiling, laughing or heart emojis) to help establish that the alleged hostile work environment was either welcomed or subjectively not offensive.
On the flip side, consider the salacious sexual harassment lawsuit filed against celebrity chef Mike Isabella. According to that lawsuit, Isabella referred to attractive female customers as "corn" after one of his chefs commented that one woman was "so hot, [he'd] eat the corn out of her shit." The lawsuit alleges further acts of harassment via text messages with corn emojis 🌽.
Most employers already have an emoji policy. It's called your harassment policy. You do not need a separate policy to forbid your employees from using what is becoming an acceptable form of communication …
We can have a healthy debate over the professionalism of emoji use in business communications (like this one). Indeed, according to one recent survey, “nearly half (41%) of workers use emojis in professional communications. And among the senior managers polled, 61% said it’s fine, at least in some situations.” My sense is that your view of this issue will depend on a combination of your age, your comfort with technology, and the age of your kids.
As for me, I use emojis all the time, even at work. Email is notoriously tone-deaf. It's easier for me to drop a 😉 into an email to convey intent than to tone down my sarcasm.
In other words, 🙂. Emojis are 👍, and it's perfectly fine to ❤️ them at work 😉.