
Social Networks May One Day Diagnose Disease—But at a Cost



The world is becoming one big clinical trial. Humanity generates streams of data from different sources every second. And this information, continuously flowing from social media, mobile GPS and Wi-Fi locations, search history, drugstore rewards cards, wearable devices, and much more, can provide insight into a person’s health and well-being.

It’s now entirely conceivable that Facebook or Google—two of the biggest data platforms and predictive engines of our behavior—could tell someone they might have cancer before they even suspect it. Someone complaining about night sweats and weight loss on social media might not know these can be signs of lymphoma, or that their morning joint stiffness and propensity to sunburn could herald lupus. But it’s entirely feasible that bots trawling social network posts could pick up on these clues.

Sharing these insights and predictions could save lives and improve health, but there are good reasons why data platforms aren’t doing this today. The question is, then, do the risks outweigh the benefits?

A Thought Experiment

Although social media platforms get press for being useful in predicting, and possibly preventing, suicide, the possibility that those platforms could see into the future before a patient has even visited the doctor is, for now, hypothetical. But it’s not far-fetched.

Let’s say Facebook released a large set of de-identified data, such as members’ location, travel, likes and dislikes, post frequency, sentiment, browsing, and search habits. Based on these data, a researcher could build models that predict physical and emotional states.

For instance, a data set consisting of social media posts from tens of thousands of people will likely chronicle the journey that some had on their way to a diagnosis of cancer, depression, or inflammatory bowel disease. Using machine-learning techniques, a researcher could take those data and study the language, style, and content of those posts both before and after the diagnosis. They could devise models that, when fed new sets of users’ data, could predict who will likely go on to develop similar conditions.
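
For illustration, here is a minimal sketch of that kind of model: a bag-of-words text classifier trained on posts written before a known diagnosis. The toy posts, labels, and tooling (Python with scikit-learn) are assumptions made for the example, not anything a platform is known to use.

```python
# A toy sketch of the kind of model described above: a text classifier
# trained on posts written before a known diagnosis. The posts, labels,
# and diagnosis are invented placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each record: a user's pre-diagnosis posts, labeled 1 if the user later
# received the diagnosis of interest, 0 otherwise.
posts = [
    "so tired lately, waking up drenched every night",
    "great hike this weekend, feeling strong",
    "lost ten pounds without trying, weird",
    "new recipe night with friends, feeling great",
]
labels = [1, 0, 1, 0]

# Bag-of-words features plus a linear classifier: a deliberately simple
# baseline for picking up signals in language, style, and content.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new, unseen user's posts; the output is a probability per label.
new_posts = ["constant night sweats and dropping weight fast"]
print(model.predict_proba(new_posts))
```

In practice a model like this would need vastly more data and careful validation, but the pipeline shape (vectorize the language, fit a classifier, score new users) is the core of the idea.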

And such a system would not need to look only for hard and fast symptoms like fevers or weight loss. Seemingly unimportant and unrelated data—like purchasing anti-nausea medicine or watching a documentary on insomnia—could end up fueling a set of predictive rules that indicate that a user might have a certain medical condition. The point is that our digital trail leaves many clues, both subtle and overt, to our overall health and well-being. How we use those data for good is another issue.

As a clinician, I support integrating data and putting the troves of information to use for society’s benefit. One of the reasons I cofounded Litmus Health, a data science company, was to help researchers better collect, organize, and analyze data from clinical trials, and in turn, use those data to improve health outcomes for society writ large. However, significant regulatory, ethical, technical, and societal considerations require caution.

From a regulatory perspective, all companies bear some responsibility to care for their users’ data, as defined in their terms of service. Unfortunately, as cases like a 2014 Facebook study and research from Carnegie Mellon have shown, terms of service and privacy policies are overly complicated, almost no one reads them, and users simply accept them blindly.

Companies can demonstrate an ethical “do no harm” obligation to their users by maintaining a straightforward, easy-to-understand data policy and by not using personal data in inappropriate ways. An ethical framework for big data must consider identity, privacy, data ownership, and reputation. For most firms today, releasing users’ data to build predictive models without their consent would go against their established value systems. But obtaining consent may be as trivial as someone mindlessly clicking through an interminably long terms-of-service agreement.

If companies are going to ask users to share their data and participate in an experiment, they should be more transparent about how the data are collected, used, and shared.

Let’s say a social network has an algorithm that analyzes a user’s activities: what they complain about, which articles they share, which friends’ posts they like, and more. The AI could potentially identify a pattern suggesting the presence of a medical condition.

Now imagine being able to link across social networks and also to other available data streams from wearables, sensors, and mobile devices. All of a sudden, the predictive value of these disparate data streams could become very high. For example, posts about headaches and nausea, combined with a gradually decreasing step count on a Fitbit, cell phone GPS data indicating trips to the pharmacy, and typing accuracy demonstrating a slow, almost imperceptible loss of coordination could all portend an ominous condition.
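
A hedged sketch of what fusing those streams might look like: a handful of weekly signals compared against a user’s own baseline and combined into a crude anomaly score. Every stream name, threshold, and weight below is invented for illustration; a real system would learn them from labeled outcomes rather than hand-code them.

```python
# A sketch of fusing disparate data streams into one feature vector, as
# described above. Thresholds and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class WeeklySignals:
    mean_daily_steps: float   # from a wearable
    pharmacy_visits: int      # inferred from GPS traces
    typo_rate: float          # typing errors per 100 characters
    symptom_posts: int        # posts mentioning headache or nausea

def risk_score(cur: WeeklySignals, baseline: WeeklySignals) -> float:
    """Crude composite: how far this week deviates from the user's baseline."""
    score = 0.0
    if cur.mean_daily_steps < 0.8 * baseline.mean_daily_steps:
        score += 1.0    # sustained drop in activity
    score += 0.5 * max(0, cur.pharmacy_visits - baseline.pharmacy_visits)
    if cur.typo_rate > 1.5 * baseline.typo_rate:
        score += 1.0    # possible loss of typing coordination
    score += 0.25 * cur.symptom_posts
    return score

baseline = WeeklySignals(9000, 0, 2.0, 0)
this_week = WeeklySignals(6200, 2, 3.8, 3)
print(risk_score(this_week, baseline))   # higher = more anomalous
```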

A perfect predictive system might be heralded as a medical breakthrough, but sometimes a typo is just a typo, and most people with headaches and nausea do not have brain tumors.
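
The base-rate arithmetic makes this concrete: even an extremely accurate detector applied to a rare condition produces mostly false alarms, as this quick positive-predictive-value calculation shows. The prevalence and accuracy figures are illustrative assumptions, not clinical data.

```python
# Why even a near-perfect predictor floods users with false alarms:
# a positive-predictive-value calculation with assumed, illustrative numbers.
prevalence = 0.0001    # assume ~1 in 10,000 users actually has the condition
sensitivity = 0.99     # the model flags 99% of true cases
specificity = 0.99     # and clears 99% of healthy users

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)

ppv = true_pos / (true_pos + false_pos)
print(f"P(condition | flagged) = {ppv:.3%}")   # ~1%: 99 of 100 flags are false
```

Under these assumptions, roughly 99 of every 100 flagged users would be healthy, which is exactly why the stakes of the alert matter so much.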

Using social media cues to help someone recognize that they may have the flu could prompt them to seek testing or treatment, both relatively benign and inexpensive interventions. But a cancer scare raised under similar circumstances could carry more serious consequences, from emotional trauma to expensive and potentially harmful tests and treatments. Multiplied across millions of users, the logistical and financial implications for the healthcare system could be enormous. Algorithm-based predictions are useful and widely applied in many areas of our lives, but these examples show why the same predictions carry more weight in health and health care, and why their use should be closely governed and monitored for potential benefits and risks.

Consumers Should Opt In

As a clinician, I believe that consumers should be able to freely access the health data they generate across all streams. The benefits far outweigh the risks, and physicians are seeing more and more patients request access to their complete medical records. Patients are taking an active role in their treatment plans; it ought to be medical professionals' jobs to facilitate their ability to do so.

Individuals should be able to opt in to allow providers to collect and track their data for health predictions. Companies would need to carefully determine tracking criteria for specific diseases, and at what point they would notify the user that they are at risk. Once notified, the user would have the option to receive more information or send their data directly to their healthcare provider. For this to work, new data governance and stewardship models will be required, and legal protections for people and their data will become increasingly important.
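
As a sketch, that flow might look like the following, with consent checked before any tracking happens and the notification presenting choices rather than acting automatically. The names and threshold are hypothetical.

```python
# A minimal sketch of an opt-in tracking and notification flow, as described
# above. All names, fields, and the threshold are hypothetical.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    opted_in: bool = False
    risk_scores: list = field(default_factory=list)

NOTIFY_THRESHOLD = 0.8   # illustrative cutoff, chosen per condition

def notify(user: User, score: float) -> None:
    # A real system would present options: learn more, share with a
    # provider, or dismiss. Here we just print the choice point.
    print(f"{user.name}: elevated risk signal ({score:.2f}). "
          "Options: [more info] [send to provider] [dismiss]")

def record_score(user: User, score: float) -> None:
    if not user.opted_in:
        return   # no consent, no tracking
    user.risk_scores.append(score)
    if score >= NOTIFY_THRESHOLD:
        notify(user, score)

u = User("alex")
record_score(u, 0.9)   # ignored: alex has not opted in
u.opted_in = True
record_score(u, 0.9)   # now triggers a notification
```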

The people, companies, and organizations that hold private data have a big responsibility. If they're going to use these data to make better predictions about health and disease, then everyone needs to work together to better understand the expectations and responsibilities of all parties. The technical, legal, and social barriers are significant, but the potential for improving people’s health is tremendous.

Dr. Sam Volchenboum (@SamVolchenboum) is the director of the Center for Research Informatics at the University of Chicago, a board-certified pediatric hematologist and oncologist, and the co-founder of Litmus Health, a data science platform for early-stage clinical trials. WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here.




Source: Wired