Data breaches caused by insiders can happen to a company of any size and in any industry. According to the 2019 Verizon Data Breach Investigations Report, 34% of data breaches in 2018 involved internal actors. These breaches are harder to detect than attacks perpetrated by outsiders because insiders act normally most of the time.
In this article, we take a look at five real-life cases of data breaches caused by insiders to show this threat from different angles. We also consider the best practices for preventing insider threats and dealing with their consequences should a data breach occur.
Terms we use
Data breaches are a cybersecurity problem many organizations face today.
Definition of a data breach
A data breach happens when sensitive information is intentionally or unintentionally released to an untrusted environment.
In general, there are two common causes of data breaches: outsider attacks and insider attacks.
In this article, we will analyze insider threats.
An insider threat refers to the risk that an employee misuses or abuses their authorized access and either damages or steals sensitive data to use it for malicious purposes.
Who are insiders?
You will see from the cases we’ve selected that insiders are not only present and former employees.
They can also be third-party contractors, business partners, technology or service providers, and physical goods suppliers. All of these people have legitimate access to your company’s information, and they know more than outsiders about your security practices, infrastructure, and vulnerabilities.
Some employees are just careless or negligent; others may steal your information for personal gain or because they have a grudge. They can be agents of competitors, or moles who spy on your company. A third party can also compromise your data security for various reasons.
Possible consequences of data breaches
The cost of a data breach caused by an insider can be huge, and will increase until it’s detected. Here are other consequences of data breaches:
- Your company may be fined under applicable data security regulations, even if nobody appears to have been harmed.
- The price of your company’s stock may fall considerably because of the ruined trust in your organization.
- Your competitors may become aware of your company’s trade secrets and use them to develop new products or enhance existing ones, robbing you of potential profit.
- Your customers may suffer if someone compromises their sensitive information.
5 real-life data breaches and their consequences
We’ve selected five real-life cases of insider threat-caused breaches. Some are famous; others didn’t get that much press coverage. Some caused financial disasters for company owners, while others visibly targeted an organization’s reputation. Some even seem to conceal a darker purpose. But all of them are illustrative of what a data breach can do to a company and why it can happen.
1. Waymo and its former employee
Anthony Levandowski was a lead engineer at Waymo, Google’s self-driving car project. In 2016, he left to found a startup called Otto that developed self-driving trucks. Otto was acquired by Uber within several months, and Levandowski was put in charge of Uber’s self-driving department. It wasn’t a sudden move but rather a thoroughly planned series of actions.
Levandowski stole from Google and provided to Uber the following trade secrets:
- Diagrams and drawings related to simulations, radar technology, and Light Detection and Ranging (LiDAR)
- Source code snippets
- PDFs marked as confidential
- Videos of test drives
- Marketing information
At some point, Levandowski became unhappy at Google and invited colleagues to talk outside the office, recruiting them for his new company. It appears Levandowski started discussing the possibility of joining Uber in 2015, about five months before he resigned from Google.
But only when Uber was ready to acquire Otto did Google executives discover that a month before Levandowski’s resignation, he had plugged his laptop into a server where Google intellectual property was stored, downloaded about 14,000 files, then copied them to an external drive and deleted everything so no traces were left.
How did the data breach become possible?
It seems the security team wasn’t monitoring employees with privileged access. Levandowski downloaded the files to his work-issued laptop, and nobody detected it until an investigation was carried out.
Another security best practice is to review an employee’s online activity during the 30 days before their termination, as well as before and after the date of their resignation notice. This wasn’t done either.
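As an illustration, such a pre-departure review can start with something as simple as filtering the activity log for one user over a fixed window. This is a toy sketch with hypothetical log entries and field layout; real monitoring tools query a SIEM or audit database:

```python
from datetime import date, timedelta

# Hypothetical log entries: (user, date, action)
logs = [
    ("jdoe", date(2016, 1, 10), "download: design_specs.pdf"),
    ("jdoe", date(2016, 1, 25), "copy to external drive"),
    ("jdoe", date(2015, 11, 2), "login"),
]

def pre_departure_activity(logs, user, departure, window_days=30):
    """Return the user's logged actions in the N days before departure."""
    start = departure - timedelta(days=window_days)
    return [entry for entry in logs
            if entry[0] == user and start <= entry[1] <= departure]

# Review the 30 days before a (hypothetical) January 27 departure;
# the download and the external-drive copy are surfaced, the old login is not.
review = pre_departure_activity(logs, "jdoe", date(2016, 1, 27))
```

The point is not the code itself but the habit: make the pre-departure window a routine, automated query rather than something reconstructed after the fact.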
How did the breach affect the company?
Between 2009 and 2015, $1.1 billion was spent on the Waymo project to develop the technology that was stolen. Luckily, Waymo, which became a standalone Alphabet subsidiary in 2016, was able to prove the theft of trade secrets and get compensation ($245 million worth of Uber shares). They also entered into an agreement that Uber wouldn’t use the stolen information for their software and hardware.
2. Allen & Hoshall and its stolen secrets
In this insider threat case, there were no billion-dollar revenues or losses and no breathtakingly innovative technologies. However, this case can give you more insights into what problems even smaller market players can encounter.
Jason Needham, who left the Allen & Hoshall engineering firm in 2013, founded HNA Engineering. But he stayed in touch with his previous employer.
Needham repeatedly accessed the company’s email accounts and file sharing network to download project proposals, financial documents, and engineering schematics (overall, about 82 AutoCAD and 100 PDF files). A&H failed to notice this activity, and it might have continued unnoticed to this day.
But in 2016, a prospective customer received a proposal from HNA Engineering that was suspiciously similar to that of A&H. Allen & Hoshall contacted the FBI, and they traced it back to the intruder. As a result, Needham was charged with a felony in April 2017.
How did the data breach happen?
Allen & Hoshall used the principle of least privilege. Only those employees who needed access to particular data could access it.
The company also had a proper termination procedure: Needham’s account credentials and system access were terminated when he left. However, Needham was able to access an email account of a former colleague.
The log files of the compromised email allowed the FBI to trace back the IP addresses to Needham’s office, home, and cell phone. The FBI also detected Needham’s failed attempts to hack the accounts of three other former co-workers.
This malicious insider was not only viewing sensitive financial information but also receiving rotating FTP credentials via the email account of his former colleague, who wasn’t aware his account had been hacked.
User and Entity Behavior Analytics could have been an effective solution to detect this kind of unusual account behavior, but A&H wasn’t using this kind of software.
They also didn’t have multi-factor authentication set up, which would have prevented the account from being used without a corporate device to verify access.
How did it affect the company?
The court found that the value of the stolen business information was between $250,000 and $550,000. Needham pleaded guilty and was ordered to pay $172,393.71 in restitution to Allen & Hoshall.
3. Anthem and its third-party vendor
In 2017, the second-largest health insurance company in the USA suffered from an insider threat in the form of a third-party vendor with weak security.
LaunchPoint, an insurance coordination services vendor, reported a data breach in April 2017. One of its employees had emailed a file with protected health information (PHI) to his personal email address. It was not clear whether the information had been misused.
The personal information in the stolen file included Medicare ID numbers, Medicare contract numbers, health plan ID numbers, and dates of enrollment of 18,580 customers. The last names and dates of birth of some customers were also included.
LaunchPoint contacted these individuals and provided them with two years of free credit monitoring and identity theft restoration services.
How did the data breach become possible?
LaunchPoint faced a data breach because of an employee who had access to PHI and was able to send this data via email to a personal account.
The sad irony is that Anthem had made considerable investments in its security system (more than $230 million) after a 2015 cyber attack.
Unfortunately, all these security improvements couldn’t help in this situation.
Anthem should have selected a third-party vendor with a higher level of security and the necessary certifications. Also, they should have regularly audited LaunchPoint’s (or any other vendor’s) ability to enforce their security policy and monitor their employees.
How did the data breach affect the company?
Financially, the 2017 data breach hit LaunchPoint. But the media got hold of the story, and Anthem’s reputation suffered even more.
4. Fresenius Medical Care of North America and its weak security policy
Fresenius Medical Care of North America, a provider of products and services for people with chronic kidney failure, was hit with a series of small data breaches because of failure to comply with data security regulations.
In this example of an insider threat, the total amount of data released to an untrusted environment was very small compared to many other healthcare data breaches. All told, only 521 people were affected. But there were five breaches in one year! Fresenius Medical Care failed to implement security policies and procedures to protect equipment from theft and encrypt patient information.
- In the Florida department, two unencrypted desktop computers were stolen during a break-in. The PHI of 200 people was stored on one of them.
- A laptop with information on 10 individuals, along with all of their passwords, was stolen from the parked car of a Fresenius employee in Georgia.
- In Illinois, one encrypted laptop and three desktop computers (one of them with PHI of 31 people) were stolen.
- In Alabama, an unencrypted USB drive with information on 245 individuals was stolen from an employee’s car.
- A hard drive with information on 35 individuals was stolen during technical service in Arizona.
All these incidents happened in 2012. The settlement with the Office for Civil Rights was announced in 2018.
Why did these breaches happen?
In all of these Fresenius Medical Care data breaches, the insider threat was caused by negligence.
Local entities failed to conduct a proper risk analysis and implement security policies and procedures, and some didn’t encrypt electronic protected health information (ePHI) when it was necessary to do so.
Employees didn’t follow industry security standards to protect sensitive data on their devices.
How did it affect the company?
Fresenius Medical Care of North America had to pay a settlement of $3.5 million to the US Department of Health and Human Services (HHS) Office for Civil Rights (OCR). The OCR Director pointed out that companies have to be careful about their internal policies and procedures.
Fresenius Medical Care was also obliged to improve their corporate security according to OCR recommendations, entailing additional costs.
The mandatory corrective plan by OCR includes:
- Conducting an accurate risk assessment
- Developing a risk management plan
- Creating an encryption report
- Reviewing device and media controls as well as physical facility access controls
- Providing employee security training
5. AMSC and its Chinese competitors
This case illustrates a worrying trend of Chinese companies hunting for trade secrets, particularly in the USA, sometimes with the support of the Chinese government. Often there’s a big fuss but no actual industrial espionage, or not enough evidence, as in the cases involving Apple engineers.
In this case, it was proven that the United States-based energy technology company AMSC (formerly American Superconductor Inc.) suffered big losses because of data theft.
A former employee, Dejan Karabasevic, stole his employer’s trade secrets and sold them to Chinese company Sinovel for $20,000. He was also promised a 6-year $1.7 million contract.
Karabasevic was the head of the automation engineering department at AMSC and often made business trips to China. He accepted an offer from the company’s competitor to download the source code of turbine software from an AMSC computer.
In March 2011, Karabasevic submitted a resignation notice, but he retained access to AMSC systems for several months. Later, during the investigation, it appeared that Karabasevic had exchanged messages with Sinovel employees discussing the code and sent it via email during that time.
Sinovel compiled the software from the source code and copied it to several wind turbines it ordered in Massachusetts. Luckily, the representatives of the company who built the turbines noticed the strong resemblance between the Sinovel code and the original AMSC software. They notified AMSC about it and helped with the investigation.
How did the breach and technology theft happen?
Karabasevic had privileged access, as he occupied an important position and was often on business trips to China. Monitoring his account could have deterred him from committing the crime or have helped security officers to notice it in a timely manner. Today, it’s common practice to be extra cautious about intellectual property if you have Chinese business partners or travel to China.
How did it affect the company?
After the intellectual property theft, AMSC suddenly lost its biggest client, Sinovel Wind Group, which had provided three-quarters of the company’s revenue. Sinovel rejected a shipment of electronic components for wind turbines it had ordered and refused to pay for it.
AMSC’s revenue fell dramatically, and its stock dropped by 40% in just one day, then further declined by 84% in several months. AMSC lost more than $1 billion in shareholder equity and about 700 jobs, as many employees were there for the Sinovel project.
For more than six years, AMSC sought justice. Only in January 2018 did a court fine Sinovel $1.5 million and order it to pay $57.5 million in restitution to AMSC and $850,000 to additional victims.
Best practices to prevent data breaches caused by insiders
What can we learn from these data breaches?
Now that we’ve explained why all these data breaches were possible, let’s summarize the security practices a company should follow to mitigate risks of data breaches caused by insiders.
The first thing any organization should start with is building a comprehensive insider threat program.
Such a program is the core of a security strategy and is required by data security regulations in many countries. An insider threat program includes crucial steps to prevent, identify, and remediate insider attacks.
However, we would like to underline some best practices closely related to the examples of insider threats described above.
- User activity monitoring
User activity monitoring software allows you to set up alerts for suspicious user actions. These actions could be downloading an unusual number of files from a server (Waymo) or downloading and copying the source code of a program (AMSC).
Even when monitoring user activity isn’t enough to prevent crime, it can provide important evidence during an investigation.
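At its simplest, a bulk-download alert just counts events per user against a baseline threshold. The sketch below is a toy illustration with made-up event data and a hypothetical threshold; production tools consume live server logs and tune baselines per environment:

```python
from collections import Counter

# Hypothetical download events pulled from server logs: (user, file) pairs.
# mallory's volume mirrors the ~14,000 files taken in the Waymo case.
events = [("alice", f"design_{i}.pdf") for i in range(5)]
events += [("mallory", f"ip_doc_{i}.pdf") for i in range(14000)]

DOWNLOAD_ALERT_THRESHOLD = 1000  # assumed value; tune to normal activity

def flag_bulk_downloads(events, threshold=DOWNLOAD_ALERT_THRESHOLD):
    """Return users whose download count exceeds the alert threshold."""
    counts = Counter(user for user, _ in events)
    return {user: n for user, n in counts.items() if n > threshold}

alerts = flag_bulk_downloads(events)  # alice is normal; mallory trips the alert
```

A fixed threshold is crude, but even this level of monitoring would have surfaced a 14,000-file download long before an external investigation did.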
- Privileged user monitoring
Privileged users and system administrators have both the access and the technical ability to steal intellectual property and conceal their actions. Strong access controls, and the awareness of being monitored, can not only help detect malicious activity but also deter employees from engaging in it.
- User and entity behavior analytics
User and entity behavior analytics (UEBA) helps to identify potential cyber crime caused by insider threats by detecting abnormal user activity in a particular account. UEBA is a machine learning module that learns a user’s typical pattern of behavior from system logs and other data. Then it identifies meaningful anomalies.
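The core idea behind UEBA can be shown with a deliberately simplified stand-in: learn a user’s baseline from history and flag activity that deviates strongly from it. Real UEBA products use far richer models and features; this z-score sketch over made-up daily counts is only meant to convey the principle:

```python
import statistics

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's activity count if it deviates strongly from the
    user's historical baseline (a toy stand-in for a UEBA model)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

baseline = [4, 5, 6, 5, 4, 5, 6, 5]  # typical daily file accesses
is_anomalous(baseline, 5)    # an ordinary day is not flagged
is_anomalous(baseline, 120)  # a bulk-exfiltration spike is flagged
```

An account that suddenly logs in from new locations and pulls rotating FTP credentials, as in the Allen & Hoshall case, is exactly the kind of deviation such models are built to catch.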
- Two-factor authentication
Two-factor authentication entails verifying users’ identities using a combination of two out of three factors:
- Something they know (a password)
- Something they have (a device to receive a confirmation code)
- Something they are (biometric factors)
Two-factor authentication makes it much harder for former employees to access both their old accounts and the accounts of their colleagues.
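The "something they have" factor is most often a time-based one-time password (TOTP), the scheme behind common authenticator apps. A minimal standards-based sketch, using only the Python standard library, looks like this (the secret shown is the RFC 6238 test value, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and device derive the same short-lived code from a shared secret,
# so a stolen or shared password alone is no longer enough to log in.
```

Because the code changes every 30 seconds and requires the enrolled device, credentials left behind by a departed employee, like Needham’s access to a former colleague’s email, lose most of their value.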
- Third-party monitoring
Ensure that your third-party vendor either does not have access to sensitive information or that their security system is good enough. Find out whether they conduct regular risk assessments, have a proper security policy, and enforce it. Check their security certifications. Also, make sure that your third-party vendors are familiar with and strictly follow your company’s cybersecurity policy.
- Compliance with IT requirements
This is especially important for healthcare and financial institutions. Get familiar with the IT regulations for your industry and how to comply with data security requirements. The data breaches related to Fresenius Medical Care could have been easily avoided if security standards were met.
Data breaches caused by insiders can happen with all kinds of companies, but you can minimize the risk by building a comprehensive insider threat program.
The best practice to prevent insider threats is to monitor your privileged employees and third-party vendors. It’s also important to make sure your former employees can’t use their credentials to access sensitive data.
Finally, you should be prepared to investigate incidents and respond to them with an incident response plan.
The Ekran System platform provides tools to effectively follow the best practices for insider threat protection.