
Suicide is an increasingly prevalent health concern, and its prevention an increasingly urgent one. Fatal self-harm is the leading cause of death for men over 45, and its incidence among younger people is also on the rise.

Despite this worrying trend, clinical models and tools have proved ineffective at identifying those at risk of suicide. This may be because the vast majority of people who are suicidal are unlikely to admit it, least of all to mental health professionals. As a result, doctors are often unaware of the care measures a suicidal patient may need, especially when that patient has no history of prior attempts or explicit ideations.


Machine learning could help

A recent study has shown that machine learning can discriminate between those who harbour suicidal ideations and those who do not. This was achieved by analysing MRI data from suicidal participants and from healthy control participants. The approach may offer a more direct and accurate way of detecting suicidal thoughts in people who are still able to act as though they have none.

Suicidal ideations are thoughts concerning suicide, which may develop into concrete plans and, eventually, actions. Some psychological tools, including the Suicidal Ideation Questionnaire, measure these ideations, but they require honest responses to survey-type questions on areas of mental health such as anxiety, depression and hope for the future.

Some of these psychometric tools have exhibited drawbacks when applied across different demographics. Cognitive neuroscience, on the other hand, may have more potential for verifying suicidal ideation, as it can draw on brain scans such as those acquired through non-invasive magnetic resonance imaging (MRI).

MRI analysis has shown that the brains of people with mental health conditions may respond differently to certain concepts compared to healthy people of matched age and gender, making it possible in principle to distinguish those whose mental health is compromised. However, imaging techniques such as MRI can yield hundreds of thousands of data points from a single individual. This is where AI, with its ability to process large amounts of information, comes in: machine-learning models can 'learn' the differences between 'healthy' and 'non-healthy' brains and then recognise them in new patients.

A team of researchers from psychiatry and psychology departments at Harvard, Columbia University, Florida International University, the University of Pittsburgh School of Medicine and Carnegie Mellon University set out to apply machine-learning techniques to the identification of suicidal individuals within a group that also included healthy controls. The team identified concepts, each presented as a single written word shown to participants in the MRI scanner for three seconds, that elicit neurological differences visible in the scans and thus discriminate between healthy people and those with suicidal ideations. These were three 'positive' concepts ('praise', 'good' and 'carefree') and three 'negative' concepts ('trouble', 'cruelty' and 'death').

The team used Gaussian Naive Bayes (GNB) algorithms to analyse a pre-existing archive of these neurological responses, known as 'neurosemantic signatures', from both suicidal and healthy individuals. They then generated brain images containing these signatures for each participant.
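To illustrate the general approach (rather than the authors' exact pipeline), a GNB classifier can be trained on per-participant feature vectors derived from the fMRI responses and evaluated with leave-one-participant-out cross-validation. In the sketch below, the feature layout, the use of scikit-learn and the cross-validation scheme are all assumptions made for demonstration; the features themselves are random placeholders.

```python
# Minimal sketch of GNB classification of neurosemantic signatures.
# Assumptions: each participant is summarised by one feature vector
# (e.g. activation of selected brain regions for each concept); the
# leave-one-out scheme, shapes and variable names are illustrative only.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_participants, n_features = 34, 6 * 5          # 6 concepts x 5 regions (assumed)
X = rng.normal(size=(n_participants, n_features))  # placeholder for real fMRI features
y = np.array([1] * 17 + [0] * 17)                  # 1 = suicidal ideation, 0 = control

loo = LeaveOneOut()
predictions = np.empty_like(y)
for train_idx, test_idx in loo.split(X):
    clf = GaussianNB().fit(X[train_idx], y[train_idx])
    predictions[test_idx] = clf.predict(X[test_idx])

accuracy = (predictions == y).mean()
print(f"Leave-one-out accuracy: {accuracy:.2f}")
```

With real, discriminative features in place of the random placeholders, the same loop yields the per-participant classifications from which accuracy and the other metrics reported below can be calculated.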

As a result, the AI could tell which of the study's 34 participants had suicidal ideations (n=17) and which did not. The program could also identify which participants had attempted suicide in the past (9 of the 'ideator' group) and which had not. The algorithm demonstrated accuracies of 91 percent and 94 percent for these two tasks, respectively. The results show that machine learning can identify the signatures associated with suicidal ideation. The study may also support previous work indicating that people who have attempted suicide exhibit different neurocognitive responses to certain concepts compared to those with no history of such attempts.

The GNB algorithm placed 15 of the participants with ideations and 16 of the controls into the correct category, with a specificity of 0.94, a sensitivity of 0.88, a positive predictive value (PPV) of 0.94 and a negative predictive value (NPV) of 0.89. It did this by analysing concept-related activity in the cingulate, frontal, parietal and temporal regions of the brain, areas strongly associated with how people perceive themselves. When applied to detecting which participants in the ideator group had a history of actual attempts, the AI misclassified only one participant, exhibiting a specificity of 0.88, a sensitivity of 1.00, a PPV of 0.90 and an NPV of 1.00. The concepts of 'death' and 'carefree' (along with 'lifeless') were the most effective in discriminating between these two sub-groups.
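For readers unfamiliar with these metrics, the reported values are consistent with a standard confusion-matrix reading of the first task: 15 of the 17 ideators and 16 of the 17 controls classified correctly. The short calculation below reproduces the figures; the underlying counts are inferred from the text rather than taken from the paper directly.

```python
# Confusion-matrix counts implied by the reported result
# (15/17 ideators and 16/17 controls classified correctly).
tp, fn = 15, 2   # ideators correctly flagged / missed
tn, fp = 16, 1   # controls correctly cleared / wrongly flagged

sensitivity = tp / (tp + fn)   # 15/17 ≈ 0.88
specificity = tn / (tn + fp)   # 16/17 ≈ 0.94
ppv = tp / (tp + fp)           # 15/16 ≈ 0.94
npv = tn / (tn + fn)           # 16/18 ≈ 0.89

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```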

Linking specific emotions to suicidal thoughts

The team published a paper on their work in a recent issue of Nature Human Behaviour. They also analysed the signatures in terms of other signature types associated with specific emotions. For example, the concept of 'death' elicited significantly more 'shame' in ideators than in controls. The sub-group with a history of attempts also showed a significantly weaker link between 'death' and 'sadness' compared to the non-attempting sub-group. The algorithm could also be applied to this analysis, and was able to distinguish between controls and ideators based solely on the connections between emotion-related signatures ('pride', 'shame', 'sadness' and 'anger') and the original six concepts. It did this with an accuracy of 85 percent, a specificity of 0.88, a sensitivity of 0.82, a PPV of 0.88 and an NPV of 0.83.

This study may demonstrate that MRI imaging and analysis are effective neurocognitive tools for detecting suicide risk or tendencies in patients. The approach also depends on machine learning, which can use certain neurosemantic signatures to detect ideations with high accuracy and sensitivity. On the other hand, the study involved a group of patients who were willing to self-identify as suicidal, which may be relatively rare in practice and could make validating the reported results difficult.

In general, this study may build on and confirm earlier research indicating differential emotional and cognitive responses between ideators and non-ideators. However, putting it into practice raises a variety of practical and ethical problems, including whether clinicians should, in the future, be able to recommend restrictive care, such as confinement in a mental health facility, on the basis of a brain scan. Studies such as this one, though, certainly suggest that it may be time to consider the role of cognitive neuroscience in healthcare.

Top image: Message from artificial intelligence. (CC BY 2.0)

References:

Batterham PJ, Ftanou M, Pirkis J, Brewer JL, Mackinnon AJ, Beautrais A, et al. (2015) A systematic review and evaluation of measures for suicidal ideation and behaviors in population-based research. Psychological Assessment, 27(2), pp. 501–512.

Just MA, Pan L, Cherkassky VL, McMakin DL, Cha C, Nock MK, et al. (2017) Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth. Nature Human Behaviour.


Deirdre O’Donnell

Deirdre O’Donnell received her MSc. from the National University of Ireland, Galway in 2007. She has been a professional writer for several years. Deirdre is also an experienced journalist and editor with particular expertise in writing on many areas of medical science. She is also interested in the latest technology, gadgets and innovations.
