Healthcare, Twitter and Big Brother all Walk into a Bar...
I’ve always been of the opinion that anything I disseminate via social media is pretty much fair game, and I try to play by the golden rule of “If you don’t want it used against you in a court of law, don’t tweet it, post it, link it, pin it, etc.” I’m also well aware that I generate “big data” whenever I use my smartphone, and that some entity, somewhere, is mining that data for commercial purposes. So I wasn’t too surprised to read of a recent legal entanglement Twitter has gotten into up North.
Perhaps others have come across the recent ruling by a New York judge that forces Twitter Inc. to turn over an Occupy Wall Street protester’s tweets. I won’t go into too many details (you can read them here), but the gist of the ruling comes from a case in which prosecutors say the demanded tweets could show whether the protester was aware of the police orders he’s charged with disregarding.
A Twitter spokesperson conveyed disappointment with the ruling, adding “Twitter's Terms of Service have long made it absolutely clear that its users own their content. We continue to have a steadfast commitment to our users and their rights."
I, for one, don’t really buy into the theory that “users own their content,” at least insofar as that “ownership” means the content can never be used against me. I’ve seen too many episodes of “The Wire” to doubt the reach of government when it comes to gathering data for purposes of prosecution.
I wonder if the folks at OpenQ have kept a close eye on this case. The company recently released SafeGuard, social compliance software that “enables companies to embrace social enterprise platforms with proactive risk identification, classification and management,” according to a recent press release. The release also adds that the new software “collects activity feeds, posts and documents from social platforms, and other enterprise interactions, to proactively identify and classify business and compliance risk. An intuitive interface enables the efficient management of compliance cases with classification of risk level according to industry driven and company-defined priorities.”
A separate story on the new software gives it a healthcare angle, citing the increasing use of social media by physicians and hospitals, and thus the growing need to monitor that usage for non-HIPAA-compliant posts and updates. I think it’s safe to say that this sort of healthcare IT aims to help curb expenses from HIPAA-related lawsuits that might arise from errant tweets, but I’m a bit confused as to whether the technology monitors social media usage solely by the customer, or also lumps in any mention of customer-indicated keywords. And if it does monitor posts from third parties, will customers be savvy enough to follow up with those who post negative comments (most likely disgruntled patients) in a way that both protects their brand and offers solace to the patient?