An artificial intelligence (AI) programme trained to scour Facebook posts for “linguistic red flags” which could be a sign of depression identified the condition up to three months earlier than health services, a US study has found.

In early tests the machine learning algorithm performed as well as existing screening questionnaires which are used to identify depression – but it has the advantage of being able to run “unobtrusively” in the background, the authors note.

Recently a backlash against platforms like Facebook, from ministers and parents concerned about the damaging impact on children's wellbeing, has led to calls for stricter age and usage limits.

But US researchers behind the new tool say the wealth of information in social media pages could one day be used to help unobtrusively screen for mental health conditions.

These early warning signs include mentions of loneliness or isolation, such as "alone", "ugh" or "tears", as well as the timing and length of posts. Other tell-tale clues include an increase in the use of first person pronouns – like "I" and "me" – which "suggest a preoccupation with the self" in public posts, the authors write.
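The study itself does not publish its model, but the kind of word-rate features described above can be sketched in a few lines. This is an illustrative example only: the marker lists, the sample posts and the function name are invented for demonstration, not drawn from the study.

```python
# Hypothetical sketch of linguistic-marker screening of the sort the
# study describes: measure how often first-person pronouns and low-mood
# words appear in a user's posts. Word lists here are illustrative.
import re
from collections import Counter

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
MOOD_MARKERS = {"alone", "ugh", "tears", "lonely", "hurt"}

def marker_rates(posts):
    """Return per-word rates of first-person pronouns and mood markers."""
    words = [w for p in posts for w in re.findall(r"[a-z']+", p.lower())]
    counts = Counter(words)
    total = max(len(words), 1)  # avoid division by zero on empty input
    first_person_rate = sum(counts[w] for w in FIRST_PERSON) / total
    mood_rate = sum(counts[w] for w in MOOD_MARKERS) / total
    return first_person_rate, mood_rate

posts = ["Ugh, spent the night alone again", "I just feel tired of it all"]
fp_rate, mood_rate = marker_rates(posts)
```

In the actual study such rates would be computed over hundreds of thousands of posts and fed, alongside timing and length features, into a trained classifier rather than compared against a hand-set threshold.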

“Social media data contain markers akin to the genome,” said Dr Johannes Eichstaedt, one of the senior authors of the study and a co-founder of the World Well-Being Project at the University of Pennsylvania.

“With surprisingly similar methods to those used in genomics, we can comb social media data to find these markers.

“Depression appears to be something quite detectable in this way; it really changes people’s use of social media in a way that something like skin disease or diabetes doesn’t.”

This method of screening could increase the likelihood of conditions being diagnosed, and treated, early, minimising the impact of depression on education, work and relationships.

In a study published in Proceedings of the National Academy of Sciences on Monday, Dr Eichstaedt and fellow authors from the Penn Medicine Centre for Digital Health used data from the Facebook profiles of 683 people who had consented to share their digital archives.

This group included 114 people who had been diagnosed with depression, and each one was matched with five people without a depression diagnosis to test the programme’s accuracy.

The team analysed 524,292 posts made by the participants on Facebook in the years prior to the diagnosis of depression, and compared them with those of the control subjects to identify "depression-associated language markers".

When primed with these markers, the programme was able to identify warning signs of depression in individuals' posts up to three months before the condition was recorded in their medical records.

The study found the programme was most accurate when using social media cues from the six months prior to a depression diagnosis, and could help flag the onset of depression in people at risk, particularly when combined with other forms of digital screening.

"There's a perception that using social media is not good for one's mental health," said H. Andrew Schwartz, an associate professor of computer science and principal investigator of the study. "But it may turn out to be an important tool for diagnosing, monitoring, and eventually treating it."

While this was a small proof-of-principle study, the approach could be refined in several ways, such as by incorporating phone usage data or facial recognition software to analyse pictures posted on Facebook, the authors add.
