Cyber criminals are targeting employees who are working remotely with fraudulent termination phishing emails and invitations to video teleconference meetings, according to federal authorities. As part of the phishing email or text, you might be asked to click on a link to receive more information about a severance package. If you fall for it and click the link, you might end up downloading malicious code onto your computer, giving the hacker a backdoor to your information. … One area of particular concern going forward involves scammers impersonating contact tracers, the public health workers who alert you that you may have been near someone who tested positive for COVID-19.
The Ethics of Artificial Intelligence in the Workplace
Artificial intelligence is a branch of computer science dealing with the simulation of intelligent behavior in computers or the capability of a machine to imitate intelligent human behavior.
Despite its nascent nature, the ubiquity of AI applications is already transforming everyday life for the better.
Whether discussing smart assistants like Apple’s Siri or Amazon’s Alexa, applications for better customer service or the ability to utilize big data insights to streamline and enhance operations, AI is quickly becoming an essential tool of modern life and business.
In fact, according to statistics from Adobe, only 15 percent of enterprises are using AI as of today, but 31 percent are expected to add it over the coming 12 months, and the share of jobs requiring AI has increased by 450 percent since 2013.
Leveraging clues from their environment, artificially intelligent systems are programmed by humans to solve problems, assess risks, make predictions and take actions based on input data.
Cementing the “intelligent” aspect of AI, advances in technology have led to the development of machine learning to make predictions or decisions without being explicitly programmed to perform the task. With machine learning, algorithms and statistical models allow systems to “learn” from data, and make decisions, relying on patterns and inference instead of specific instructions.
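The distinction can be made concrete with a toy sketch (illustrative code, not from the article): the program below is never given the rule relating x to y. It infers the slope and intercept purely from example data, by repeatedly nudging its parameters to reduce prediction error.

```python
# Minimal "learning from data" sketch: fit y = w*x + b by gradient descent.
# No rule for the slope is ever written by hand; it is inferred from examples.
def fit_line(points, lr=0.02, epochs=2000):
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(epochs):
        # Average gradient of the squared prediction error over the data.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Training data generated by a hidden rule, y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]
w, b = fit_line(data)  # w approaches 2.0, b approaches 1.0
```

The same pattern, parameters adjusted against data rather than coded by hand, underlies the far larger statistical models the paragraph above describes.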
Unfortunately, the possibility of creating machines that can think raises myriad ethical issues. From pre-existing biases used to train AI to social manipulation via newsfeed algorithms and privacy invasions via facial recognition, ethical issues are cropping up as AI continues to expand in importance and utilization. This notion highlights the need for legitimate conversation surrounding how we can responsibly build and adopt these technologies.
How Do We Keep AI-Generated Data Safe, Private and Secure?
As an increasing number of AI enabled devices are developed and utilized by consumers and enterprises around the globe, the need to keep those devices secure has never been more important. AI’s increasing capabilities and utilization dramatically increase the opportunity for nefarious uses. Consider the dangerous potential of autonomous vehicles and weapons like armed drones falling under the control of bad actors.
As a result of this peril, it has become crucial that IT departments, consumers, business leaders and the government, fully understand cybercriminal strategies that could lead to an AI-driven threat environment. If they don’t, maintaining the security of these traditionally insecure devices and protecting an organization’s digital transformation becomes a nearly impossible endeavor.
How can we ensure safety for a technology that is designed to learn how to modify its own behavior? Developers can’t always determine how or why AI systems take various actions, and this will likely only grow more difficult as AI consumes more data and grows exponentially more complex.
For example, should law enforcement be able to access information recorded by AI devices like Amazon’s Alexa? In late 2018, a New Hampshire judge ordered the tech giant to turn over two days of Amazon Echo recordings in a double murder case. However, the legal protections governing this type of privacy-sensitive recording remain unclear.
How Should Facial Recognition Technology Be Used?
The latest facial recognition applications can detect faces in a crowd with amazing accuracy. As such, applications for criminal identification and for determining the identity of missing people are growing in popularity. But these solutions also invoke a lot of criticism regarding legality and ethics.
People shouldn’t have to worry that law enforcement officials are going to improperly investigate or arrest them because a poorly designed computer system misidentified them. Unfortunately this is becoming a reality and the consequences for inaccurate facial recognition surveillance could turn deadly.
According to a 2017 blog post, Amazon’s facial recognition system, Rekognition, used a default confidence threshold of 85 percent, and Amazon raised that recommendation to a 99 percent threshold not long after. Even so, studies from the ACLU and MIT revealed that Rekognition had significantly higher error rates in determining the demographic traits of certain members of the population than Amazon purported.
Beyond accuracy (and the lack thereof in many cases), the other significant issue facing the technology is an abuse of its implementation — the “big brother” aspect.
In order to address privacy concerns, the U.S. Senate is reviewing the Commercial Facial Recognition Privacy Act, which seeks to implement legal changes requiring companies to inform users before facial recognition data is acquired. This is in addition to Illinois’ Biometric Information Privacy Act, which is not specifically targeted at facial recognition but requires organizations to obtain consent before acquiring biometric information; that consent cannot be given by default and must result from affirmative action.
As San Francisco works to ban use of the technology by local law enforcement, the divisive debate over the use — or potential misuse — of facial recognition rages on. The public needs to consider whether the use of facial recognition is about safety, surveillance and convenience or if it’s simply a way for advertisers or the government to track us. What is the government and private sector’s responsibility in using facial recognition and when is the line crossed?
How Should AI Be Used to Monitor the Public Activity of Citizens?
The future of personalized marketing and advertising is already here. AI can be combined with previous purchase behavior to tailor experiences for consumers and allow them to find what they are looking for faster. But don’t forget that AI systems are created by humans, who can be biased and judgmental. By surfacing information and preferences that a buyer would prefer to keep private, this application of AI technology, however personalized and connected to an individual’s identity, could evoke sentiments of privacy invasion. Additionally, this solution would require storing an incredible amount of data, which may not be feasible or ethical.
Consider the notion that companies may be misleading you into giving away rights to your data. The impact is these organizations can now detect and target the most depressed, lonely or outraged people in society. Consider the instance when Target determined that a teen girl was pregnant and started to send coupons for baby items according to her pregnancy score. Her unsuspecting father was none too pleased about his high-schooler receiving ads that, in his mind, encouraged his daughter to get pregnant — and he let the retail giant know about it.
Unfortunately, not only are businesses gathering eye-opening amounts of information — many are being racially, economically and socially selective with the data being collected. And by allowing discriminatory ads to slip through the net, companies are opening a Pandora’s box of ethical issues.
How Far Will AI go to Improve Customer Service?
Today, AI is often employed to complement the role of human employees, freeing them up to complete the most interesting and useful tasks. Rather than focusing on the time-consuming, arduous jobs, AI now allows employees to focus on how to harness the speed, reach and efficiency of AI to work even more intelligently. AI systems can remove a significant amount of friction borne from interactions between customers and employees.
Thinking back to the advent of Google’s advertising business model and then the launch of Amazon’s product recommendation engine and Netflix’s ubiquitous “suggested for you” algorithm, consumers face a dizzying number of targeted offers. Sometimes this can be really convenient when you notice that your favorite author has come out with a new book, or the next seasons of a popular show launched. Other times it comes across as incredibly invasive and seemingly in violation of basic privacy rights.
As AI becomes more prominent across the enterprise, its application is a new issue that society has never been forced to consider or manage before. While the application of AI delivers a lot of good, it can also be used to harm people in various ways, and the best way to combat ethical issues is to be very transparent. Consequently, we — as technology developers and manufacturers, marketers and people in the tech space — have a social and ethical responsibility to be open to scrutiny and consider the ethics of artificial intelligence, working to hinder the misuse and potential negative effects of these new AI technologies.
Rob Carpenter is the founder and CEO of Valyant AI, a Colorado-based artificial intelligence company focused on customer service in the quick-serve restaurant industry.
Do Employers Have a Duty to Protect Employees’ Personal Information?
Employees trust their employers with a whole bunch of personal information. Social security numbers, medical documents, insurance records, birth dates, criminal records, credit reports, family information, etc. And it’s not like employees have a choice over whether to disclose and entrust this information to their employer. These documents are all necessary if employees want to get hired, get paid, and obtain health insurance and other benefits. Thus, an employer’s personnel records are a treasure trove of PII (personally identifiable information — any data that could potentially identify a specific individual, which can be used to distinguish one person from another and de-anonymizing otherwise anonymous data).
For this reason, cyber-criminals target myriad businesses in an attempt to steal (and then sell on the dark web) this data.
If a company is hacked, and employees’ PII or other data is stolen, is their employer liable to its employees for any damages caused by the data breach?
I’ve covered this issue twice before (here and here), with different courts reaching opposite results (albeit the majority of them concluding that an employer can be held liable).
In AFGE v. OPM (In re United States OPM Data Sec. Breach Litig.), the D.C. Circuit Court of Appeals recently addressed a similar issue and concluded that employee-victims have standing to sue their employer following a data breach in which their personal information and data is stolen. A “substantial risk of future identity theft” is sufficient harm to give rise to a lawsuit, and their “claimed data breach-related injuries are fairly traceable to [their employer’s] failure to secure its information systems.”
All of these cases are legally interesting, and, I submit, largely practically insignificant. Regardless of whether you, as an employer, have a legal duty to protect the personal information and data of your employees, you still have a significant financial and reputational incentive to take reasonable steps to maintain the privacy and security of the information.
Moreover, as data breaches continue to increase in quantity and quality, courts and legislatures will look for ways to shift the cost of harm to those who can both better afford it and better take measures to hedge against them. Thus, I predict that in five years or less we will have a legal consensus on liability.
The question, then, for you and your business to answer is what are you going to do about it now? The time to get your business’s cyber-house in order is now (actually, it was years ago, but let’s go with now if you’re late to the game). Don’t wait for a court to hold you liable to your employees (and others?) after a data breach.
Thus, what should you be doing?
- Implementing reasonable security measures, which include encryption, firewalls, secure and updated passwords, and employee training on how to protect against data breaches (such as how not to fall victim to phishing attacks).
- If (or more accurately when) you suffer a data breach, timely advising employees of the breach as required by all applicable state laws.
- Training employees on appropriate data security.
- Drafting policies that explain the scope of your duty as an organization to protect employee data.
- Maintaining an updated data breach response plan.
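On the password point specifically, “reasonable security measures” generally means never storing employee or customer passwords in plaintext. A minimal sketch using the Python standard library’s scrypt (the work-factor parameters here are illustrative, not a tuned recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with the memory-hard scrypt KDF; store salt + digest."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

Even if the personnel database is stolen, an attacker then holds slow-to-crack digests rather than the passwords themselves.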
Remember, data breaches are not an if issue, but a when issue. Once you understand that you will suffer a breach, you should also understand the importance of making data security a priority in your organization. The average cost to a company of a data breach in 2018 was $3.9 million (and it is increasing annually). While I generally don’t work in the business of guarantees, I will guarantee that any expenses you incur to mitigate the potential cost of a data breach are money well spent.
The DMV and Cybersecurity
I spent way too much of a recent Saturday morning at the local department of motor vehicles. My plates were expiring and I had forgotten to take advantage of online registration.
So there I found myself at 10 a.m. waiting in line. To be fair, it was the “express” line, designated for registration renewals only. My experience, however, was less than express, thanks to the patron two spots ahead of me.
When her turn came, the clerk asked for information stored in some account on her phone. She did not, however, remember the necessary password. She then produced an inch-thick flipbook of Post-it notes, each containing a login and password for a different account.
I watched her rifle through the stack. Ten minutes of life that I will never regain, with my frustration mirrored on the faces of everyone else in line.
One of the top cybersecurity tips is to maintain proper password security. Storing passwords on a notepad or stack of sticky notes does not qualify as secure. What does?
• Using passwords with differing types of characters.
• Avoiding the most common passwords (like “Pa$Sw0rD”).
• Setting a regular schedule to change passwords (although some research shows that most people use near identical passwords when forced to switch).
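What a password satisfying the first rule might look like in code (an illustrative sketch; Python’s `secrets` module is designed for security-sensitive randomness, unlike `random`):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16):
    """Generate a random password, retrying until all four character
    classes (lower, upper, digit, punctuation) appear at least once."""
    while True:
        pw = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

pw = generate_password(20)
```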
Four issues warrant additional discussion.
First, do not reuse the same passwords across multiple accounts. If one account is hacked, you’ve exposed every other account for which you’ve used the same password.
Recently, for example, Intuit disclosed that its TurboTax product had suffered just such an attack. The criminal accessed TurboTax user accounts by taking usernames and passwords stolen from a non-Intuit source and attempting TurboTax logins with them.
Where those logins succeeded, the criminal was able to obtain sensitive tax return information. (If you want to know whether one or more of your online accounts has been compromised, check out haveibeenpwned.com.)
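haveibeenpwned also offers a Pwned Passwords “range” API built on k-anonymity: you send only the first five characters of your password’s SHA-1 hash and do the final match locally, so the password itself never leaves your machine. A sketch of the local-matching half (the network call is deliberately left out; the real endpoint is `https://api.pwnedpasswords.com/range/<prefix>`):

```python
import hashlib

def pwned_count(password, range_response):
    """Count breach appearances given the range API's response text.

    range_response contains one "SUFFIX:COUNT" line for every breached
    hash that shares our password's 5-character SHA-1 prefix.
    """
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    suffix = sha1[5:]  # only sha1[:5] would ever be sent over the network
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```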
If you are not going to reuse the same password across multiple accounts, how will you generate and remember hundreds of different and complex passwords? The answer brings us to point number two. Use a password manager.
A password manager is an online service that stores all of your passwords (encrypted on their end). All you need to do to unlock the password for any given account is to recall the lone master password you have chosen for your password manager of choice. Passwords are also synced across devices.
The top competitors offer variations on the same service. Compare pricing and features, and pick one. The money you spend on an annual subscription pales in comparison to what you would spend undoing the damage caused by an account compromised through a weak password.
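Under the hood, the master password itself is typically never stored anywhere; the manager feeds it through a deliberately slow key-derivation function to produce the vault’s encryption key. A minimal sketch with PBKDF2 from Python’s standard library (the iteration count is illustrative):

```python
import hashlib
import os

def derive_key(master_password, salt, iterations=200_000):
    """Derive a 32-byte vault key from the master password.
    Only the salt and iteration count are stored, never the key."""
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, iterations, dklen=32
    )

salt = os.urandom(16)
key = derive_key("my lone master password", salt)
```

The slowness is the point: a high iteration count makes brute-forcing the master password expensive even if the encrypted vault is stolen.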
The question I get most often regarding password managers? “Aren’t you worried about them being hacked?”
Technically yes, but functionally no. At least one has been hacked without the exposure of even a single user password because all of the stored data is highly encrypted.
Compare the security of reusing passwords, or of using different passwords stored in a notebook or sticky-note flipbook, with that of a password manager: the secure choice is clear.
Third, check your URLs and only input account information on sites that use HTTPS web encryption.
HTTPS provides an encrypted online session between you and whichever site you are visiting. With a non-HTTPS site, everything you send is visible to anyone on the same network. Even safer, use a Virtual Private Network, or VPN, to create a secure channel between your computer and the internet.
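This check can even be automated. A sketch of a guard (hypothetical helper, standard library only) that refuses to submit credentials anywhere but an HTTPS address:

```python
from urllib.parse import urlparse

def safe_to_submit(url):
    """True only for HTTPS URLs with a real host; anything else risks
    sending credentials in cleartext, visible to the local network."""
    parts = urlparse(url)
    return parts.scheme == "https" and bool(parts.hostname)
```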
Finally, use two-factor authentication for any account that offers it.
Two-factor authentication, or 2FA, requires a user to input a unique code sent to a device of choice (usually by text message) any time they log in to an account from a new device. 2FA is not foolproof.
For example, it does not take much skill for even a low-level cybercriminal to steal a phone number and intercept the code. More elaborately, criminals can use social engineering to impersonate a victim and trick a mobile carrier into sending a new SIM card to the attacker, diverting all 2FA text messages to the criminal’s mobile device.
Thus, while one should not rely on 2FA as the only method to secure one’s accounts, its added layer of security certainly can’t hurt.
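For the curious, the six-digit code itself is usually a time-based one-time password (TOTP, standardized in RFC 6238): an HMAC of the current 30-second interval, truncated to six digits. A standard-library sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    if for_time is None:
        for_time = time.time()
    counter = struct.pack(">Q", int(for_time) // step)  # 30-second interval
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Because the code depends only on a shared secret and the clock, an attacker who diverts your text messages gains nothing when the account uses an authenticator app instead of SMS delivery.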
No one is immune from being hacked. However, taking a few simple (albeit mildly inconvenient) steps to secure your passwords and accounts will go a long way to mitigating against this very serious and costly risk.
How to Recover a Stolen Computer From an Ex-employee in 7 Easy Steps
As many as 60 percent of employees who are laid off, fired or quit admit to stealing company data.
Sometimes they download information on their way out the door. Sometimes they email information to a personal email account. And sometimes they simply fail to return a company laptop or other device that contains the data. In the latter case, according to the Ponemon Institute, it costs an average of $50,000 for an employer to replace a stolen computer, with 80 percent of that cost coming from the recovery of sensitive, confidential and proprietary information.
When you put this data together, it becomes increasingly apparent that businesses must take proactive steps to protect their technology and data.
In light of these stats, let me suggest a seven-step plan to recover your devices and the crucial information stored on them after an employee leaves your organization.
- Institute a strong electronic communication and technology policy, making clear that all data and equipment belong to the company and must immediately be forfeited upon the end of employment. Or, better yet, have employees sign an agreement affirming their obligations regarding the confidentiality of your data and confirming their obligation to return everything at the end of employment.
- Cut off an employee’s e-access to your network as soon as you have notice that an employee has departed.
- Remind employees upon termination or resignation of their absolute duty to return all data and equipment, including laptops, mobile devices and removable storage devices.
- To the extent you have the capability, and you have confidence that you have your own backups of the employee’s data, remote wipe any unreturned devices.
- If any data or equipment is missing, enlist the aid of an attorney to send a clear message that unless everything is returned immediately, the company will litigate to get it back.
- Enlist the aid of a computer forensics expert to determine if, when, and how any data was stolen, and, if so, of what that data consisted.
- Sue.
Notice that a lawsuit against the employee is step seven, not step one. In most cases, going to court is the last resort. It is expensive and time consuming.
Yet in many instances it is unavoidable. And depending on the scope of the suspected theft and the data at issue, it may quickly move up the list.
The Newest Threat to Your Cybersecurity? Lunchroom Appliances
Dinner is always a bit of a cluster in my house. We are a home of two working parents, and, with music lessons and band rehearsals three nights a week, it seems that we are always scrambling for our evening meal. More often than not, we end up eating out, which is good for neither our wallets nor our waistlines.
Yet, winter is coming, which means crockpot season. The problem with some crockpot recipes, however, is that they cook for far less than the 10-plus hours we are out of the house every day. Wouldn’t it be great if there were a way to connect your slow-cooker to your WiFi network and control it via an app from your phone? That way, I could start the meal at 2 p.m. and not worry about coming home to a tarry, burnt mess of chicken and sauce (yes, this has happened, and, yes, we ate out that night).
“Today’s your lucky day,” you say. “Behold, the Wifi-Enabled Slow Cooker. There’s just one drawback. Cyber criminals can seize control of it to take down websites and access your smartphones and home networks.” Yikes!
I’ll let Vice explain:
If you have an internet-connected home appliance, such as a crock-pot, a lightbulb, or a coffee maker, you can control it from the comfort of your smartphone. But a bug in the Android app that controls some of those devices made by a popular manufacturer also allowed hackers to steal all your cellphone photos and even track your movements.
Security researchers found that the Android app for internet-connected gizmos made by Belkin had a critical bug that let anyone who was on the same network hack the app and get access to the user’s cellphone. This gave them a chance to download all photos and track the user’s position … .
This problem is not small or inconsequential. The White House is even paying attention. Just yesterday, it issued sweeping guidelines for IoT (Internet of Things) Cybersecurity [pdf]. The paper calls for an engineering-based approach that bakes security systems directly into Internet of Things devices and technology.
If you have smart appliances in your workplace, the Wall Street Journal recommends the following best practices:
- Research before purchasing your smart home products. Consumers need to research the security protocols that their connected devices follow, and pay attention to how device makers issue security updates for devices’ software.
- Update the firmware of your devices. The WSJ recommends regularly updating devices, even new ones, as security updates could be released or change on a daily basis.
- Change the password for your smart home devices. Most hackers attempt to obtain a universal password for users so they can hack into all of the connected devices in the home.
- Secure your router. This means updating your firmware more frequently or simply setting your router to the WPA2 security setting, which can help a great deal.
- Create a separate network for your devices. By setting up a separate router and network for smart home devices, users can keep a compromised device isolated from the PCs and phones on their primary network.
- Point connected cameras in the right direction. Your connected cameras can be among the most easily hackable devices. Because of this, consumers should not have connected cameras pointed in the direction of their bedrooms, living rooms, or other very personal areas of the home.
- Ask your service provider about device security. They are the ones that should know all of the security precautions that users of their devices should be taking.
- Buy new devices, especially if your connected devices are older models.
What Exactly Is Information Technology (IT)?
Information technology is the study, design, development, implementation, support or management of computer-based information systems—particularly software applications and computer hardware. IT workers help ensure that computers work well for people.
Nearly every company, from a software design firm, to the biggest manufacturer, to the smallest “mom & pop” store, needs information technology workers to keep their businesses running smoothly, according to industry experts.
Most information technology jobs fall into four broad categories: computer scientists, computer engineers, systems analysts and computer programmers. HR managers responsible for recruiting IT employees increasingly must become familiar with the functions and titles of the myriad IT jobs in demand today.
Some of them are listed below:
- Database administration associate
- Information systems operator/analyst
- Interactive digital media specialist
- Network specialist
- Programmer/analyst
- Software engineer
- Technical support representative
SOURCE: “Building A Foundation for Tomorrow, Skill Standards for Information Technology.” NorthWest Center for Emerging Technologies, Bellevue Community College, Bellevue, Washington.
Workforce, July 1998, Vol. 77, No. 7, p. 53.