FBI beats Apple, cracks into terrorist's iPhone, proves encrypted data isn't safe
The FBI broke into a terrorist's iPhone against Apple's wishes. And while that may make it tempting to think the contentious matter is now resolved, it's actually anything but over.
Hospital executives wondering what it all means for employees who use iPhones, sanctioned or otherwise, should take note: Encrypted data is not as safe from criminals as many perhaps presumed.
And that goes for Android and other devices, too.
Still secret: How the FBI broke in
The U.S. Federal Bureau of Investigation asked Apple for access to the iPhone of Syed Rizwan Farook, one of two terrorists who struck a holiday party in San Bernardino, California, in December 2015. Citing the need to protect personal information and its own encryption systems, Apple CEO Tim Cook refused the FBI's request.
The FBI pursued Apple via the U.S. Department of Justice until, on Monday, the government said it was dropping the case because it had gotten into Farook's iPhone. FBI Director James Comey told reporters that although security specialists had maintained that a technique called NAND mirroring was the best way to break into an iPhone, that approach did not actually work.
The FBI and DOJ have thus far said only that a third party helped them break in, without naming any particular company or individual. Rumors are swirling that data forensics experts working for the Israeli firm Cellebrite were involved, but no proof has emerged.
Because it is still unknown precisely how the FBI got into Farook's iPhone, the questions remain whether the process can be repeated on a widespread basis and what level of sophistication doing so requires.
But it's a safe bet that organized criminals, nation-states and hacktivists are already working on ways to ape whatever the FBI accomplished, assuming they've not figured it out already.
Destined to happen again
It is also a reasonable bet that the FBI will repeat this approach whenever it feels justified in gaining access to the data on a device.
Before taking matters into their own hands, after all, the bureau and the DOJ pressed legal action against Apple in a case many Americans believed would set a dangerous precedent for information and personal security.
Authorities just this month revealed in a warrant that they circumvented security controls in a Samsung Galaxy S5 belonging to Aws Mohammed Younis Al-Jayab, a 23-year-old man suspected of having ties to ISIS.
If the FBI can turn this practice into a repeatable process, criminals predictably will be able to as well.
HIPAA not even close to enough
The idea that security-by-HIPAA-compliance is not enough to protect patient data from these new threats is no revelation. CIOs, CISOs, privacy officers and others have been making that case for some time now.
Compliance is important, of course, because when lost or stolen devices contain encrypted data, healthcare organizations do not have to issue breach notifications. Where providers cannot prove the data was encrypted, however, they must report a breach even though the evidence typically shows only that the data was lost or stolen, not that it was actually accessed.
Instead of looking at encryption as a get-out-of-jail-free card under HIPAA, healthcare executives and information security specialists should be girding for the possibility that encrypted mobile devices can be cracked and the data therein put to nefarious use.
And when criminals can steal an encrypted device and crack into it, smart hospital leaders will expect them to target specific organizations, if not individuals, just as they have been doing in the ongoing stream of ransomware attacks.
Is it even possible to protect PHI and PII against such specific and targeted attacks? If so, what will that take?