
Workforce

Tag: bias

Posted on April 14, 2020 (updated June 29, 2023)

Regulating recruiting amid constant technological innovations


As recruiters adopt advanced technologies in their quest to identify, court and hire candidates, attorneys are looking into the legal and regulatory issues those new tools may bring into play.

Lawyers, recruiting experts and technology vendors say legal teams are examining compliance concerns even as their colleagues in HR and IT evaluate products that leverage artificial intelligence, machine learning and other innovative approaches. Not only are they exploring the ramifications of privacy requirements such as Europe’s GDPR, they’re considering the possible impact of biases that may be inherent in a data set or unwittingly applied by algorithms.

“I think we’re at the beginning of sorting out what all this means, but I think it’s definitely something people are thinking about,” said Jeffrey Bosley, San Francisco-based partner in the labor and employment practice of law firm Davis Wright Tremaine. “It’s a new technology and it’s evolving. Whenever you have a new technology, you do have growing pains and you do have these issues that come up,” he said.

Advanced technologies have gotten much attention recently, particularly as people inside and outside the business world consider the impact AI may have on jobs and livelihoods. At the same time, some well-intentioned efforts have generated media coverage for results that were diametrically opposed to what their developers set out to do.

In 2018, for example, Amazon abandoned an effort to build a machine-learning tool for recruiters after the system proved to be favoring men over women. According to Reuters, the tool downgraded resumes that included the word “women’s” as well as the graduates of two all-women’s colleges.

Also read: Is there room for an ethics code for tech companies?

Sources inside Amazon said the system, which had been under development since 2014, was meant to review resumes so recruiters could spend more time building candidate relationships and actually hiring people. It worked by comparing applicants against patterns found among resumes the company had received over a 10-year period. However, it didn’t account for the dominance of men in the technology workforce. As a result, the system taught itself that male candidates were preferable to female ones.
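The failure mode Reuters described — a model absorbing the demographics of past hiring decisions — can be illustrated with a toy resume scorer. Everything below is invented for illustration (the resumes, the labels and the naive log-odds weighting); it is not Amazon's system, only a minimal sketch of how skewed training data produces a skewed model.

```python
from collections import Counter
import math

# Hypothetical historical dataset: past resumes labeled 1 if the candidate
# was hired, 0 if rejected. Because the hiring history is male-dominated,
# the token "women's" happens to appear only among rejected resumes.
history = [
    ("software engineer men's chess club", 1),
    ("software engineer hackathon winner", 1),
    ("backend developer open source", 1),
    ("software engineer women's chess club", 0),
    ("graduate women's college cs degree", 0),
    ("frontend developer bootcamp", 0),
]

def token_weights(data, smoothing=1.0):
    """Naive per-token weight: log of P(token | hired) / P(token | rejected)."""
    hired, rejected = Counter(), Counter()
    n_hired = n_rejected = 0
    for text, label in data:
        for tok in set(text.split()):
            (hired if label else rejected)[tok] += 1
        if label:
            n_hired += 1
        else:
            n_rejected += 1
    vocab = set(hired) | set(rejected)
    return {
        tok: math.log((hired[tok] + smoothing) / (n_hired + 2 * smoothing))
           - math.log((rejected[tok] + smoothing) / (n_rejected + 2 * smoothing))
        for tok in vocab
    }

weights = token_weights(history)
# The model has "learned" to penalize "women's" -- a proxy for gender,
# not a signal about job performance.
print(weights["women's"] < 0)  # True
```

Nothing in the training loop mentions gender; the bias arrives entirely through the historical labels, which is why auditing the data set matters as much as auditing the algorithm.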

Advanced technology “is at an awkward stage where it’s not really intelligent,” said William Tincup, president of the industry website RecruitingDaily.com. While he sees great potential for AI and other tools to streamline the work of recruiters and even address bias in the hiring process, he believes systems are limited in how much they can accomplish.

Why? In a word, people. “What are machines learning from? They’re learning from humans,” Tincup said. Hiring managers can’t help but operate with a number of possible preconceptions in their minds, from unconscious bias about race or gender to a preference for the candidate they most recently interviewed or who seems the most like themselves. Such biases, Tincup observed, live on in the makeup of a company’s existing workforce. And that leads to the troubles Amazon faced, where the data set reflects decisions made in the past more than it positions a process to understand the needs of the future.

Technology Races Ahead

The situation is complicated by the idea that technology has outpaced legal and business practices. While they believe that will eventually change, analysts and technology vendors don’t see it changing quickly. 

“Right now, technology’s moving super-fast,” said Ankit Somani, co-founder of the talent acquisition and management platform AllyO, headquartered in Palo Alto, California. “Generally, regulators and the folks who control compliance standards don’t move so quickly. But, honestly, we’re like three lawsuits away from somebody taking it very seriously.”

Also read: Artificial intelligence is a double-edged sword. Here’s how HR leaders can properly wield it

 “Therein lies a real big rub,” Tincup said of regulation’s lag behind talent acquisition and HR practices. Nearly all of the processes involved with turning candidates into employees touch some kind of employment law or EEOC-related issues, but “all of those rules are outdated,” he said. “We’ve been working outside of the rules for 15 or 20 years. I would argue that there isn’t a company in the United States that’s 100 percent compliant from sourcing to outplacement.”

Talent acquisition teams, and HR in general, understand that and are beginning to adapt, said Brian Delle Donne, president of Talent Tech Labs, an industry analyst and consulting firm based in New York. However, he believes determining exactly how and where compliance fits in with the use of new technologies has been complicated by the way “artificial intelligence” has been “grossly generalized” in industry conversations.

“Most of the time they’re talking about machine learning, or sometimes just automated workflow processing,” Delle Donne said. “When you get into true artificial intelligence, where the machine is making decisions, it’s a higher threshold that’s required for our concern about the accuracy of [its] recommendations and predictions.” The distinction between true AI and what might be called “advanced technology” is important, he believes, because people assume that the machine is prescient when it’s usually not. “In most cases, it will be quite a while until machines are actually making decisions on their own,” Delle Donne observed.

Even in today’s state, the use of advanced technology has become widespread enough to raise concerns about whether it might, inadvertently, nudge an employer out of compliance. For example, AI-driven tools may use personal information in unplanned ways that a candidate hasn’t given permission for. That would raise privacy concerns. Or, tools might present results that, intentionally or not, run afoul of fair-employment legislation. “On both fronts, you’re talking about compliance statutory norms,” said Delle Donne.

AI’s Behavior

Such concerns, along with widespread speculation about AI’s impact, have made advanced technology “front of mind for many people,” said Bosley. In response, governments at all levels have begun generating “a patchwork” of laws that sometimes conflict with one another.

For example, Illinois’s Artificial Intelligence Video Interview Act went into effect Jan. 1, 2020. The law sets out transparency and consent requirements for video interviews, as well as limits on who can view the interviews and how long they can be stored. However, Bosley said, the law’s mandate to destroy videos within 30 days may conflict with the preservation requirements of other state and federal laws, including the Civil Rights Act of 1964 and the Americans with Disabilities Act.

Also read: How Will Staney continues to change the talent acquisition game

“It puts employers in a position where they’re really going to need to assess risk,” Bosley said. “They’re going to need to come up with creative solutions to try and work around some of this risk.” 

Not all employers may feel exposed in the near term, Tincup suggested. He estimates that each year only a handful of legal actions are taken because of a candidate’s unhappiness with the recruiting process. People practices, technology practices and civil and social discourse are “way ahead of employment law,” he explained. “So is this something that’s going to create an immense amount of risk? No.” Employers today, he believes, put themselves at more risk by hiring a salesperson with a history of sexual harassment. In that regard, “you could spend more money in risk mitigation … than in recruitment technology,” he said.

At the same time, an organization’s risk may be based on activities that aren’t related to recruiting or the workforce, Bosley points out. “This isn’t just a human resources issue anymore. It’s not only an employment law issue anymore. It’s much broader than that,” he said. “You have data protection, data compliance, privacy and the potential for disparate impact claims as opposed to disparate treatment claims.”

Bosley anticipates more claims will be filed that look into a database’s contents, what data’s being looked at, how it’s being processed and whether algorithms are static or refined over time. Essentially, these claims will examine how advanced technology is making its decisions. “It’s going to be something where human resources leaders are looking to involve others in the organization and make sure that they’re both issue-spotting and getting ahead of some of these compliance issues,” he said.

 Indeed, Somani believes this notion of “explainability” — laying out what a system does and how it’s doing it — will become more important in the realms of recruiting technology and compliance. “There should, in my mind, be more compliance standards around that,” he said.

Evolving Standards

Even at a basic level, compliance standards for using technology in recruiting “don’t exist,” Somani said. For example, does texting about a job opportunity constitute a form of marketing? Is such a text permissible if it’s personalized? Because the answer’s not clear, he believes many companies are putting stricter guidelines in place.

Somani also said legal departments are becoming more involved in the purchase and implementation of recruiting technology. For tools handling communications, such as those that facilitate SMS messaging between recruiters and candidates, they’re trying to anticipate issues by creating policies that cover not only privacy, but data collection and permissions. “It’s an explicit ask in almost every deal we go into: ‘If a consumer doesn’t want to interact with your system, how do you follow that?’ ” he said. When it comes to issues related to AI’s under-the-hood work, vendors focus on transparency and disclosure by presenting disclaimers on their product or within their privacy policies.  

 For enterprises, compliance issues “can be a deal-breaker,” said Megan Gimbar, the Holmdel, New Jersey-based product marketing manager for iCIMS Hiring Suite, at least at the corporate level. While compliance and consistency are important components of her product, she said, talent acquisition teams often shy away from the topic.

In the past, employers tried to ensure compliance through training. Their approach, said Delle Donne, was to make hiring managers aware of interview questions that shouldn’t be asked (such as inquiring whether a woman intended to have children) or information that shouldn’t be considered (the candidate’s age or ZIP code). “That’s a fairly low bar,” he observed.

The bar began getting higher “once we started saying algorithms are going to make that determination for us,” Delle Donne continued. “Algorithms might actually do a better job, [or] may actually be set up in a way that they might do a better job, than humans do at avoiding compliance issues through bias.” However, he said, that requires planning and a focus on non-discrimination features when algorithms are designed.

Also read: The ethics of AI in the workplace

Compliance Further Afield

The compliance issues raised by using AI in recruiting aren’t limited to talent acquisition alone. For one thing, Somani notes, recruiters today leverage a variety of tools that were first introduced in other functions.

Think of how candidate management systems and customer management systems align. When using those technologies, compliance may involve adapting the standards used by marketing or sales so they can be applied to talent acquisition and HR.

That road goes both ways. Even solutions designed for recruiters raise issues that aren’t unique to hiring, Delle Donne said. “As HR tries to digitize, there are many, many places where technology can streamline processes and save time and perhaps be more beneficial to the employee or the party,” he said. Many, if not all, of those will lead to some kind of compliance question. For example, a bot used in benefits administration may build a profile of confidential medical information. Or, a learning program might enter performance scores into an employee record without informing the employee. That could be a problem if those scores impact a person’s future promotions or career path.

As it digitizes, the tools implemented by HR “will bring in these technologies and there’s going to have to be some focus or some attention given to not inadvertently creating bias or discrimination, or revealing private information,” Delle Donne said. “If you take a step back, it just could be like whack-a-mole. I mean, ‘Hey, we see it over here in talent acquisition. Let’s go chase that down and… Oh, wait. We just saw this going on over there.’”


Posted on July 24, 2019 (updated August 3, 2023)

Employers Find Strength in Diversity


Amy Cappellanti-Wolf is the chief human resource officer for global cybersecurity and defense company Symantec. Cappellanti-Wolf has extensive experience in the consumer and tech sectors, having worked at companies such as Pepsi, Disney and Cisco Systems. Cappellanti-Wolf spoke with Workforce Editorial Associate Bethany Tomasian about diversity as a driving force behind a successful business model.

Workforce: How does diversity fit into Symantec’s business strategy?

Amy Cappellanti-Wolf: I believe that diversity is an important business driver. Symantec is located in more than 42 countries around the world, and if you’re going to be a global company you need to have an employee population that reflects the different geographies of your customers. You need different ways to operate and go to market and you aren’t going to be able to do that with a homogeneous employee base. You need people that bring different perspectives and experiences into the business. Diversity is a critical enabler for the business to be successful.

Workforce: Can you describe Symantec’s initiatives to overcome diversity barriers regarding women and minorities?

Cappellanti-Wolf: The first part of our three-pillar approach is centered around amplifying the work that we do and creating a platform for it. That affects our employees and potential employees in the marketplace, as well as our customers and partners. Our CEO Greg Clark signed the diversity pledge for CEOs to show that we are committed to creating a diverse work environment. The second pillar is about taking bias out of the system. We did this when we introduced Textio to our system. Textio allows you to look at job descriptions and ensure that the language is gender-neutralized. We don’t want words or phrases that might not be attractive to a diverse set of candidates. That change allows everyone a level playing field when they look at these jobs. The third pillar surrounds inclusive leadership and that starts at the top. You need a company where people have a voice and they know that what they say counts because different voices bring different solutions. Diversity is an outcome of good inclusion practices.

Also watch Cappellanti-Wolf talk about enterprise transformation at the 2019 Unleash conference in Las Vegas 

Workforce: What advice would you offer other companies and startups regarding HR?

Cappellanti-Wolf: I would tell them to start with diversity now. Be clear about the three to four things that you want to do and focus on those, rather than launching 1,000 different ships. Leadership teams have a responsibility as officers of the company to drive this type of strategy so that it becomes a way by which you do business. I would tell startups to plant the mindset early by not hiring the likely suspects: friends from college or previous colleagues. Bring different perspectives into the room. If you start at the beginning, it will become the operating model of the business as it grows.

Other Workforce Q&As: 

Gary Pisano on How Managerial Leadership Drives Innovation

Author Jeffrey Pfeffer Addresses Dying for a Paycheck — Literally

 

Posted on July 1, 2019 (updated June 27, 2019)

Could Video Interviewing Land You in Court?


Companies using artificial intelligence to assess video interviews should be aware of a new law on the books.

In May the Illinois Legislature unanimously passed the Artificial Intelligence Video Interview Act, which requires employers to notify candidates that AI will be used to assess their interview, be able to explain what elements it will look for, and secure their consent to do it.

Those that don’t could face future litigation.

The legislation, which is expected to be signed by Gov. J.B. Pritzker this summer, addresses the risk of hidden biases, explained Mark Girouard, a labor and employment attorney for Nilan Johnson Lewis in Minneapolis. “As with any use of AI in recruiting, this law comes from concerns about how observations in the interview correlate to business value.”

Also read: Monitor Responsibly: How Employers Are Using Workplace Surveillance Devices

AI assessments of a video interview use machine-learning algorithms that are taught what to look for by studying existing data sets and finding correlations. For example, it might determine that candidates who use certain phrases, or speak at a certain speed, have the right attributes to do well in a role, based on data captured about previous high performers.
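That pattern-matching step can be sketched in a few lines. The features, performance ratings and candidates below are all invented; real products use far richer models, but the mechanism — weighting whatever interview features correlate with past high performers — is the same.

```python
import statistics

# Hypothetical features extracted from past interviews, paired with each
# hire's later performance rating. All numbers are invented.
past = {
    "words_per_minute": [150, 140, 160, 120, 115],
    "team_phrases":     [5, 4, 6, 1, 2],
}
performance = [4.5, 4.0, 4.8, 2.5, 3.0]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# "Learn" one weight per feature: its correlation with past performance.
weights = {f: pearson(vals, performance) for f, vals in past.items()}

def score(candidate):
    """Correlation-weighted sum of standardized feature values."""
    total = 0.0
    for f, w in weights.items():
        mu = statistics.mean(past[f])
        sd = statistics.stdev(past[f])
        total += w * (candidate[f] - mu) / sd
    return total

fast_talker = {"words_per_minute": 155, "team_phrases": 5}
slow_talker = {"words_per_minute": 118, "team_phrases": 1}
print(score(fast_talker) > score(slow_talker))  # True
```

Note that the model never asks whether speaking speed actually causes good performance; it only rewards resemblance to past high performers, which is exactly where hidden bias can enter.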

Replicating Bias

This is a valuable and efficient way to prescreen candidates, and it can potentially eliminate human bias from the process. However, if the data sets the algorithm learns from are inherently biased, the algorithm can adopt those biases, perpetuating the problem, Girouard says. For example, it might identify certain word choices, facial expressions or even skin tone as a consistent theme among high performers, even though those features don’t align with performance.

“If algorithms are trained correctly they shouldn’t replicate bias,” Girouard says. “But if they aren’t they can amplify disadvantage.”

Kevin Parker, CEO of Hirevue, a video interviewing software company that offers AI-driven assessment services, couldn’t agree more.

“We are in full support of this bill,” said Parker, who was invited by lawmakers to provide feedback on its content. He sees it as another way to address privacy and fairness in the recruiting process, and to set quality standards for the entire industry.

Hirevue addresses concerns about bias by including organizational psychologists on the teams that work with customers to first identify interview questions that will uncover the right criteria for success (empathy, problem solving, sociability), then to test those questions against a broad set of data to ensure they have no adverse impact.

Sometimes a problem will emerge, he noted. For example, when companies train algorithms using performance data from a predominantly middle-aged white male employee population, certain factors can introduce bias.

The testing process used to vet the interview questions can identify these biases, then the team will either eliminate the question or reduce the value of factors associated with those measures. “In this way we can neutralize biases before a single candidate is interviewed.”
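One common statistical screen for adverse impact is the EEOC "four-fifths rule": if any group's selection rate falls below 80 percent of the highest group's rate, the step is flagged for review. The article doesn't say this is the exact test Hirevue's team applies; the sketch below, with invented pass rates, just shows how such a check can be automated.

```python
def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns group -> pass rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag any group whose rate is below `threshold` of the best group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top < threshold for g, r in rates.items()}

# Hypothetical outcomes for one interview question used as a screen.
outcomes = {"group_a": (60, 100), "group_b": (40, 100)}
flags = four_fifths_flags(outcomes)
# group_b passes at 0.40 vs. group_a's 0.60 -- a ratio of about 0.67,
# below the 0.8 threshold, so the question would be flagged for review.
print(flags)
```

A flagged question would then be dropped or down-weighted, as Parker describes, before any candidate is assessed with it.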

A Flood of Legislation

While this law has only been introduced in Illinois, it is likely the first of many such laws being proposed as concerns about AI’s impact on recruiting bias grow, Girouard warned. “It is the first drip of what is likely to be a flood of legislation.”

Also watch: Armen Berjikly on Communication Advances in AI

To protect themselves against later litigation, employers should educate themselves on what the law requires, and how they are addressing the risk of AI-driven bias in their current operations. He noted that most employers today can’t explain how the AI assessment works, what criteria they look for or how those criteria align with performance success.

That’s a problem, he said. The law doesn’t just require employers to inform candidates about the technology, they also must be able to describe how the AI tool will interpret the interview and how it will be used in the selection process. “If you can’t explain it, it will be very hard for you to defend it in court.”

Posted on April 22, 2015 (updated October 18, 2024)

Unconscious Bias Training Is Anti-Racism Training

Unconscious bias training is in. It’s neat; it incorporates lots of cool new science. It’s sexy; it incorporates lots of cool images and eye opening exercises. It’s trendy; all the cool kids are doing it. And it’s safe; no one talks about racism.

That’s where the legitimate criticism comes in. As with other “in” diversity topics of the past, some raise concern that unconscious bias training won’t make a difference. It’s another fad that doesn’t address real issues or lead to meaningful change. Straight white men will go to these workshops, learn everyone’s biased (“See! It’s not just me; they’re biased too!”), learn it’s unconscious (“See! It’s not my fault; it’s unconscious!”) and change nothing.

I agree that there is this potential, as well as the danger of seeing little-to-no return on investment for the millions of dollars spent on such training. This degrades the reputation of diversity and inclusion as nice-to-have window dressing instead of the results-driven, value-add, must-have that it is — or should be.

There’s another possibility: use unconscious bias training to shift inequitable power dynamics along lines of race and other identity differences. This requires courage, clarity, leadership and the inclusion of the following seven elements:

1. Always make the business and results case for diversity and inclusion up front. This provides essential context and increases training participants’ interest and buy-in. Research by scholars like Scott E. Page, James Surowiecki and Nancy Adler has shown the superior results created by diverse groups compared with individuals and nondiverse groups, but only if there is inclusion and effective management of diversity.

2. Encourage curiosity and critical thinking about common collective biases. The research on implicit, or unconscious, bias shows clear tendencies. Biases aren’t random or equally distributed among groups. Overwhelmingly, more people hold more negative unconscious biases about people of color; women; lesbian, gay, bisexual and transgender people; and people with disabilities than they do about white people, men, straight and “able bodied” folks. Also, being a member of a group doesn’t inoculate someone against carrying negative unconscious bias toward their own group. Many African Americans carry negative unconscious biases toward African Americans, women toward women, and so on.

  • This is about racism. Racism is not individual acts of meanness toward someone who looks different — that’s prejudice. Racism is about ways we collectively assign value, make assumptions and distribute resources inequitably along racial lines defined by physical traits. This process is driven by unconscious biases — databases in our reptilian brain that come from centuries of inherited messages about members of other racial groups as well as current messages our brains capture from our environment and catalog outside our awareness. If left unconscious and unchecked, these biases express in our decisions and behavior, which have disproportionately negative effects on people of color. The word “racism” need never be uttered to facilitate curiosity and awareness of this important feature of unconscious bias.

3. Address the inequitable effect of negative and positive biases on members of different groups. Anyone can find themselves on the receiving end of meanness or prejudice. But not everyone finds themselves getting the short end of the unconscious bias (racism) stick. Our unconscious biases and the resulting behaviors don’t affect others equitably. The multiple positive biases toward whites serve them way more than any positive biases toward people of color. The multiple negative biases toward people of color harm them way more than the few negative biases toward white people.

  • Try this for yourself: Make lists for each of those four categories. Notice how easy or difficult that was for each, and how long each list is. Notice how much or how little the qualities on each of those lists affect the material quality of life for the majority in that category.
  • This is also about racism. Left unconscious and unchecked, our negative unconscious biases have disproportionately negative impacts on people of color. The word “racism” need not be stated to make this important point. This discussion may take place during exploration of “insider-outsider” groups, which form along all aspects of human difference.

4. Allow participants to feel some degree of unease. Guilt is healthy, but shame is not. Guilt — highlighting a gap between a person’s intent and impact, between their values and behavior — can be a powerful motivator for change. It’s powerful and generative as long as they stay out of shame — feeling like a bad or wrong person for having the gap.

5. Focus on behavior, not thoughts. It’s not effective to tell people to constantly monitor their minds for biased thoughts, or imply this is the way to go. Such a message increases anxiety, guilt and a sense of powerlessness that doesn’t lead to creativity or more effective behavior. It’s also neither possible nor effective to focus on thought policing — it’s exhausting, and there are always mental processes operating outside our awareness. Instead, focus training participants on noticing their thoughts (with humor, curiosity and compassion), then disrupting their behavior by slowing down and choosing actions more deliberately. Unconscious bias only harms others or gets in the way of results when it translates into an action that has an inequitable or ineffective outcome — thoughts alone are relatively harmless.

6. Encourage responsibility and commitment to concrete actions. Learning about unconscious bias does not, and should not, let people off the hook — especially those who benefit more from positive biases and are harmed less by negative ones. Any unconscious bias training should include a discussion of the handful of research-based methods to reduce unconscious biases (total elimination of unconscious bias is neither possible nor desirable) and mitigate their undesirable effects. Training should also help participants identify specific effective behaviors and commit to implementation.

7. Follow up. Follow up. Follow up. Behavior change doesn’t come automatically after a workshop. Change is challenging and requires focused attention, opportunity and time to form and practice new habits, a culture that supports and reinforces the change, and accountability.

Racism — both our past history and current reality — shows up in our deep, collective unconscious biases. Overwhelmingly, these unconscious biases enhance white people and diminish people of color. They then express in our decisions and behaviors, reinforcing them in our brains. Disrupting such actions and putting systems in place to correct for our biases — without getting caught up in shame, guilt or silence — will, over time, allow for more diversity, inclusion and equity in the world and workplaces.

As diversity, inclusion and equity increase, our collective brain database about who belongs where and who has worth will shift. This will reduce our brains’ tendency to make snap decisions about other humans based on limited data that is inaccurate and inequitable.

Posted on March 20, 2014 (updated June 29, 2023)

5 Myths About Unconscious Bias — And 6 Ways to Reduce It

There’s no denying it, unconscious bias is trendy. It’s so trendy, it’s even become an acronym in some of my circles, known affectionately as “UB.” But as often occurs when a term or concept becomes common or mainstream, myths and misinformation abound:

Myth 1: We don’t need to worry anymore about conscious bias or bigotry. We are not “post-racial.” Individual acts of verbal, physical and emotional violence against people due to their real or perceived group membership are still relatively common. One of my least favorite statistics is that the number of active hate groups in the U.S. has increased by 56 percent — to over 900 — since 2000, particularly since President Obama took office in 2008.

Myth 2: I don’t have any unconscious biases. It’s frightening to think we may not be 100 percent aware or in control of what we think and do. But brain science shows that if you’re a human being, your brain operates through biases. Homo sapiens evolved to constantly and unconsciously make immediate decisions based on limited data and pre-existing patterns. We are descended from the more skittish members of our species, so we’re hypersensitive to anything the old parts of our brain deem dangerous. Biases have thus served us for eons, and continue to do so, but they do little to help us interact effectively with diverse humans in today’s workplace. Bias elimination is not only ineffective, it’s impossible — the focus should be on bias reduction (see myth 5), choosing behaviors more mindfully, and mitigating any negative impacts of those behaviors. Check out “Blind Spot: Hidden Biases of Good People” for a fascinating read.

Myth 3: I know what my unconscious biases are. By definition, UB is — well — unconscious. You may have a sense of what some are, but be blind to others. Consider taking one or more of the well-researched Implicit Association Tests. Keep in mind that our UB can often conflict with our conscious beliefs and values, and we may even hold negative UB against our own group! I’ve been doing some form of intercultural or diversity work for almost 25 years and many of my early role models were African-American women, and yet I showed a negative bias toward African-American men on one of the tests. Rather than deny our UB, we can be curious about where they come from and how they get so ingrained in our minds despite our good intentions and be more mindful of our actions. UB only become problematic when they manifest in ineffective behaviors.

Myth 4: Hooray! Since everyone’s biased, we can move on from that tired conversation about racism/sexism, etc.! Although everyone’s biased, biases are not equal in their impact at a group level. Negative UB held by a numerical majority or power-dominant group have a disproportionate ability to do harm to numerical minorities or power non-dominant groups.

Myth 5: Since UB is unconscious, there’s nothing I can do about it. Excellent suggestions abound about how to mitigate the effect of negative UB in talent management and hiring practices through awareness, calibration and effective behaviors. However, there seem to be few evidence-based strategies to reduce harmful negative biases in the first place other than these:

  • Awareness of what our particular unconscious biases are (Pope, Price & Wolfers, 2014).
  • Empathy, particularly “perspective taking,” or the ability to feel or imagine what another person feels or might feel (Todd, Bodenhausen, Richeson & Galinsky, 2011).
  • Exposure to counter stereotypical role models. (Dasgupta & Asgari, 2004 and three other studies).
  • Exposure to positive images to counteract negative bias (Dasgupta & Greenwald, 2001).
  • Using imagery to imagine alternatives to negative stereotypes (Blair, Ma, & Lenton, 2001).
  • Training to improve one’s ability to distinguish between faces of individuals in “other” racial groups (Lebrecht, Pierce, Tarr, & Tanaka, 2009, January).

 

What will you put in motion today to reduce the negative impacts of your unconscious biases?


 
