Artificial Intelligence has many useful applications. Today on the blog, Shivani tells us a little more about the role of machine learning and discusses the potential (and potential pitfalls!) of AI to help us to better understand, predict and even treat mental health disorders.
Artificial intelligence (AI) is a term used to describe technology that imitates human intelligence, often achieved through machine learning. Machine learning refers to a system that is able to learn and improve from experience without being explicitly programmed. This branch of computer science has been used to inform computational psychiatry, a field that combines multiple types of information and computational methods to try to better understand mental health conditions. One approach commonly used by computational psychiatrists is the application of machine learning to clinical populations. This implementation of AI to diagnose, predict and treat mental health conditions is an exciting and emerging field which could help relieve the burden of mental illness, both for the individuals suffering from it and for our health care system (it has been estimated that the global cost of treating mental illnesses such as psychosis is greater than the cumulative cost of diabetes, cancer and respiratory disorders; IBM Blog Research, 2018).
“This method of automated speech analysis proved to be 100% successful in predicting the development of psychosis in this group”
Early research efforts have used AI to predict individuals' outcomes by analysing the way people communicate. Bedi and colleagues (2015) used AI to predict the onset of psychosis in young people. In this study, individuals who were deemed to be at high risk of developing psychosis were interviewed regularly, and their interview transcripts were then analysed using automated semantic analysis. This method of automated speech analysis proved to be 100% successful in predicting the development of psychosis in this group, and has the potential to be applied as an objective clinical test. IBM research has projected that, moving forward, AI will be used to analyse writing and speech to predict an individual's likelihood of developing a mental illness. This could be implemented through a mobile-phone-compatible speech analysis app and could provide earlier diagnoses for individuals.
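One strand of this kind of automated semantic analysis measures how coherently each sentence in a transcript follows from the one before it. The snippet below is a deliberately crude illustration of that idea, not the pipeline Bedi and colleagues actually used: it represents each sentence as a bag of words and averages the cosine similarity between consecutive sentences, so that rambling, disconnected speech scores lower than speech that stays on topic.

```python
from collections import Counter
import math

def cosine(a, b):
    # cosine similarity between two word-count vectors
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def coherence_score(transcript):
    # split the transcript into sentences, vectorise each one as a
    # bag of words, then average the similarity of consecutive pairs
    sentences = [s.strip() for s in transcript.split('.') if s.strip()]
    vectors = [Counter(s.lower().split()) for s in sentences]
    if len(vectors) < 2:
        return 1.0  # a single sentence is trivially "coherent"
    sims = [cosine(vectors[i], vectors[i + 1])
            for i in range(len(vectors) - 1)]
    return sum(sims) / len(sims)
```

Scores range from 0 to 1, with topically connected speech scoring higher; a real system would use far richer semantic representations, but the principle of quantifying sentence-to-sentence flow is the same.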
In addition, AI has been successfully used in other ways to develop algorithms which can predict poor mental health. This is shown in Cerasa et al.'s (2017) research, which used AI to predict gambling disorder in individuals. The researchers in this study employed classification and regression tree algorithms, which automatically selected the variables that distinguished whether or not participants went on to develop a gambling disorder. This study found that its machine learning approach was 80% successful in discriminating between controls and patients, which demonstrates the efficacy of using AI in this research (Cerasa et al., 2017).
“Stigma still exists today and gets in the way of people seeking treatment”
AI has also been used as a way of delivering treatments for individuals suffering from mental illness. With mental health disorders affecting approximately 1 in 4 people, it can be difficult for sometimes overstretched mental health services and clinicians to get everyone who is suffering the treatment that they need, when they need it (mind.org.uk), and sometimes treatment interventions do not last as long as users need them. In addition, despite a lot of hard work from many organisations and individuals to tackle the stigma surrounding mental health issues (if you are looking for some great examples of campaigns, stories and individuals tackling stigma, see here, here, here, here and here), this stigma still exists today and gets in the way of people seeking treatment. For these reasons, amongst others, artificial intelligent care providers (AICPs) have become increasingly popular. AICPs are online agents which simulate psychotherapists by building therapeutic relationships with individuals. Research has shown that patients are more likely to disclose personal details to AICPs than to an actual clinician, which could result in improved treatment outcomes and adherence (Miner, Milstein & Hancock, 2017). In addition, AICPs can deliver appropriate resources to users' phones, speeding up response times and mitigating potential risks that could arise from delayed treatment (D'Alfonso et al., 2017). Personalised therapy chatbot apps are online help agents that act as virtual therapists, regularly asking users how they feel and about their mood, which encourages users to express their feelings. Through semantic analysis, these conversations can be analysed by clinicians, giving them access to extra layers of information which can help them to better understand their patients, and possibly inform diagnoses and treatment approaches.
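To make the chatbot idea concrete, here is a deliberately simplistic sketch of the kind of check-in logic such an app might sit on top of. The word lists, function name and escalation rule are all invented for illustration; real systems use much more sophisticated language analysis than keyword matching.

```python
# illustrative word lists only, not a clinical instrument
NEGATIVE = {"sad", "hopeless", "anxious", "worthless", "empty"}
POSITIVE = {"good", "happy", "calm", "okay", "fine"}

def check_in(reply):
    # classify a free-text answer to "How are you feeling today?"
    words = set(reply.lower().split())
    if words & NEGATIVE:
        return "flag"       # surface resources / alert a clinician
    if words & POSITIVE:
        return "log"        # record the entry for later review
    return "follow_up"      # ask a clarifying question instead
```

Even this caricature shows the two properties the paragraph above describes: responses arrive instantly, and every exchange is recorded in a form a clinician could later review.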
“AI assisted treatment can bridge the gap between symptom onset and being seen by a professional”
However, despite the innovative implementation of artificial intelligence in mental health, AICPs cannot fully emulate the therapeutic relationship between a patient and a clinician. Furthermore, the idea of AI taking the place of a trained professional raises significant ethical concerns. AI machines are not responsible agents, but when used to treat patients they need to be able to make decisions that are in the best interest of the patient whilst ensuring patient confidentiality and data protection (Luxton, 2018). Moreover, it is not clear who is responsible for AICPs, or how we can ensure that they are always kept up to date.
The application of AI to mental health is an exciting and innovative area which aims to embrace digitisation by using technology to address poor mental health. AI-assisted treatment can bridge the gap between symptom onset and being seen by a professional; however, it cannot be expected to replace clinician-delivered treatment.
LaFrance, A. (2018). The Atlantic. Retrieved 28 January 2018, from https://www.theatlantic.com/technology/archive/2015/08/speech-analysis-schizophrenia-algorithm/402265/
Bedi, G., Carrillo, F., Cecchi, G., Slezak, D., Sigman, M., Mota, N., Ribeiro, S., Javitt, D., Copelli, M. and Corcoran, C. (2015). Automated analysis of free speech predicts psychosis onset in high-risk youths. npj Schizophrenia, 1(1).
D’Alfonso, S., Santesteban-Echarri, O., Rice, S., Wadley, G., Lederman, R., Miles, C., Gleeson, J. and Alvarez-Jimenez, M. (2017). Artificial Intelligence-Assisted Online Social Therapy for Youth Mental Health. Frontiers in Psychology, 8.
IBM Blog Research. (2018). IBM 5 in 5: With AI, our words will be a window into our mental health. Retrieved 28 January 2018, from https://www.ibm.com/blogs/research/2017/1/ibm-5-in-5-our-words-will-be-the-windows-to-our-mental-health/
Luxton, D. (2018). Recommendations for the ethical use and design of artificial intelligent care providers.
Miner, A. S., Milstein, A. & Hancock, J. T. (2017). Talking to Machines about Personal Mental Health Problems. Journal of the American Medical Association, 318(13), 1217-1218.
Schueller, S., Stiles-Shields, C. and Yarosh, L. (2017). Online Treatment and Virtual Therapists in Child and Adolescent Psychiatry. Child and Adolescent Psychiatric Clinics of North America, 26(1), pp.1-12.