Cybersecurity: best practices for fighting insider threats

It's not just employees: It's 'contractors, subcontractors, suppliers, trusted business partners – anyone you would give authorized access to'
By Mike Miliard
10:34 AM

Since the first Internet worm in 1988, Randy Trzeciak, technical director of the CERT Insider Threat Center at Carnegie Mellon University's Software Engineering Institute, has been on the front lines of cybersecurity.

With more than 25 years' experience in software engineering, information security and database design, he's had a front-row seat for the evolution of security threats over the past few decades. Since 2001, Trzeciak and his colleagues at CERT have been researching insider threats; to date the team has collected and analyzed more than 1,100 incidents in which insiders have intentionally or unintentionally harmed an organization.

From that insight, he's identified patterns of technical and non-technical behaviors organizations could integrate into their insider threat anomaly detection capabilities. At the Healthcare IT News Privacy & Security Forum in Boston on Dec. 1, he'll discuss those – spotlighting the different types of threats posed by insiders and describing best practices for mitigating them.

Trzeciak recently offered Healthcare IT News a glimpse at some of those strategies.

Q: What is CERT and what is its mission?

A: The CERT program is a program within the Software Engineering Institute at Carnegie Mellon University, and the Software Engineering Institute is a federally funded research and development center. We do research on behalf of the federal government with the mission to transition that work across the DoD, the federal government and industry across the United States. The SEI has done research for a number of years into software and software security and has developed programs such as the Capability Maturity Model and risk analysis and risk assessment processes.

The CERT program was stood up in 1988 on the heels of the first Internet worm, the Morris worm. The federal government wanted one place to do emergency response for what was then the ARPANET, which was up and running between the Department of Defense, the federal government and academia. So they set up the CERT program to do research on hard cybersecurity problems. And as an FFRDC, our mission is to do research and transition it across all those broad communities of interest. Really what we're in the business of doing is researching for the government, and specifically not intending to compete with industry or government: We are supposed to be a trusted broker on behalf of the feds on these hard cybersecurity problems.

Q: When you look back to 1988, and what you considered then to be hard cybersecurity problems, could you have imagined what things would look like in 2015?

A: It certainly is a challenge from the perspective that the first ARPANET was stood up to be a secure communication stream between the federal government and academia. And envisioning where that was at that point in time to now, where the Internet is a commodity where organizations and individuals expect constant communication with a number of services, and it's gone way beyond the original intent of the designers to be a secure communication stream. The reliance on the Internet in everyone's day-to-day life is more than they envisioned, in my opinion.

[Learn more: Meet the speakers at the HIMSS and Healthcare IT News Privacy and Security Forum.]

And certainly, given that the original ARPANET was designed specifically for communication within a small user community – owned primarily by the federal government and the academic partners they were communicating with – the Internet now is broader than that small user community and not owned by any one government.


Q: Healthcare has had a bit of a rude awakening these past couple years when it comes to cybersecurity. If it's not cyber attacks from sophisticated nation-states or homegrown "hacktivists," it's their own employees that organizations have to worry about. How do you define an "insider threat"?

A: We do have formal definitions of what we consider insider threats. There are two different components: we want to differentiate between an insider and the threat posed by insiders. All too often, organizations conflate the word "insider" with the threat insiders pose. Typically, insiders are those given authorized access, and those individuals could threaten an organization's key critical assets. So we describe it as someone who's been given authorized access, and we try to differentiate between employees and former employees, but also expand it to include contractors, subcontractors, suppliers, trusted business partners – anyone you would give authorized access to.

And then what they could possibly do is threaten the confidentiality, integrity or availability of an organization's critical assets.

So again, it's important to differentiate between an insider, the threats posed by insiders, and an insider who betrays the trust of the organization and intends to cause harm through malicious intent: modifying, updating, deleting or rendering unavailable the key critical assets of an organization.
So we define malicious insiders, but we also want to account for individuals who don't have malicious intent yet could still cause harm: the accidental insider – the unintentional insider threat – can cause harm as well.

Q: From your vantage point are you able to assess how healthcare is doing, managing the risk of insider threats?

A: From the standpoint of the work CERT has done in the industry, we have been involved in raising awareness of the threats that could be posed, but my team has not been involved in doing insider threat program evaluations or insider threat vulnerability assessments. So we don't necessarily have the data to say whether organizations are doing well in that domain or not.