
Can artificial intelligence replace human therapists?

Some experts believe that AI can make treatment more accessible and more affordable. There has long been a shortage of mental-health professionals, and since the Covid pandemic the need for support has grown even greater. Chatbots, for example, let users hold AI-driven conversations, allowing them to get help anytime, anywhere, often at a lower cost than traditional therapy.

The algorithms underpinning these efforts learn by combing through large amounts of data generated from social-media posts, smartphone data, electronic health records, therapy-session transcripts, brain scans and other sources to identify patterns that are difficult for humans to discern.

Despite the promise, there are some big concerns. The effectiveness of some products is questionable, a problem made worse by the fact that private companies do not always share information about how their AI works. Problems with accuracy raise the risk of giving bad advice to people who may be vulnerable or unable to think critically, as well as fears of perpetuating racial or cultural biases. Concerns also persist about private information being shared in unexpected ways or with unintended parties.

The Wall Street Journal hosted a conversation over email and Google Doc about these issues with John Torous, director of the digital-psychiatry division at Beth Israel Deaconess Medical Center and an assistant professor at Harvard Medical School; Adam Miner, who teaches at the Stanford School of Medicine; and Zac Imel, professor and director of clinical training at the University of Utah and co-founder of Lyssn.io, a company that uses AI to evaluate psychotherapy. Here is an edited transcript of the discussion.

Moving forward

WSJ: What is the most exciting use of AI and machine learning for diagnosing mental disorders and improving treatment?

DR. MINER: AI can speed up access to appropriate services, such as crisis response. The current Covid pandemic is a powerful example: we can see AI's potential to help expand access and triage need, while it also poses risks to privacy and of inaccuracy. This task of deciding which interventions and information to promote is a challenge both in pandemics and in mental-health care, where we have many different treatments for many different problems.

DR. IMEL: In the near term, I am most excited about using AI to augment or guide therapists, such as by giving feedback after a session or providing tools to support self-reflection. Passive phone-sensing apps [which run in the background on users' phones and attempt to monitor users' moods] could be exciting if they can predict later changes in mood and suggest early interventions. Also, research on remote sensing in addiction, using tools to detect when a person might be at risk of relapse and to suggest interventions or coping skills, is exciting.

DR. TOROUS: In the research setting, AI may help us unravel some of the complexities of the brain and work toward understanding these illnesses better, which could help us offer new, more effective treatments. We can generate a tremendous amount of data about the brain from genetics, neuroimaging, cognitive assessments and now even smartphone signals. We can use AI to find patterns that may help us understand why people develop mental illness, who responds well to certain treatments and who may need help immediately. Using this new data in combination with AI may help us unlock the potential to create new, personalized and even preventive treatments.

WSJ: Do you think the automated conversations that AI-driven chatbots offer can be therapeutic?

DR. TOROUS: In a recent paper, we reviewed the latest chatbot literature to see what the evidence says about what chatbots actually do. Overall, it was clear that while the outlook is promising, we do not yet see evidence that supports the marketing claims. Many of the studies have problems. They are small. They are difficult to generalize to patients with mental illness. They look at feasibility outcomes rather than the end goal of improved treatment. And many studies do not include a control group against which to compare results.

DR. MINER: I don't think chatbots should be framed as a head-to-head contest between humans and AI. The backdrop is that we, as a community, recognize real problems with access, and that some people may not be ready or able to get help from another person. If chatbots prove safe and effective, we could see a world in which patients access treatment and decide if and when they want another person involved. Clinicians, in turn, could spend their time where they are most wanted and most helpful.

WSJ: Are there cases where AI is more accurate or better than human psychologists, therapists or psychiatrists?

DR. IMEL: Right now, it is very hard to imagine AI replacing human therapists. Conversational AI is not good at things we take for granted in human conversation, such as remembering what was said 10 minutes ago or last week and responding appropriately.

DR. MINER: This is definitely where there is both excitement and frustration. I can't remember what I ate three days ago, while an AI system can recall all of Wikipedia in seconds. For raw processing power and memory, it isn't even a contest between humans and AI systems. However, Dr. Imel's point about conversation is crucial: things people do effortlessly in conversation are, right now, beyond even the most powerful AI systems.

An always-available AI system that can hold thousands of simple conversations at the same time may create better access, but the quality of those conversations may suffer. That is why many companies and researchers see collaboration between AI and humans as the appropriate next step.

DR. IMEL: For example, research shows that AI can help "rewrite" text statements to be more empathic. The AI doesn't write the statement, but it is trained to help a listener tweak what they might say.

WSJ: As the technology improves, do you see chatbots or smartphone apps drawing away any patients who might otherwise seek help from a therapist?

DR. TOROUS: As more people use apps as an introduction to care, it may well increase awareness of and interest in mental health, and with it the demand for in-person care. I have not met a single therapist or psychiatrist who is worried about losing business to apps; rather, app companies are trying to hire more therapists and psychiatrists to meet the growing need for clinicians who work alongside the apps.

DR. IMEL: Psychotherapy is a lot like teaching. Yes, there are things technology can do to help build capacity and increase access, but as parents learned over the past year, there is no substitute for a teacher. Humans are imperfect, we get tired and we are inconsistent, but we are very good at connecting with other humans. The future of mental-health care is not about replacing professionals; it is about supporting them.

WSJ: What about schools or companies that deploy apps in situations where they might otherwise hire human therapists?

DR. MINER: Another challenge we face is that apps deployed in schools and workplaces often lack the rigorous testing we expect of other forms of medical intervention. Because apps can be developed and deployed so quickly, and because their content can change rapidly, traditional approaches to quality testing, such as multiyear randomized trials, cannot keep pace with the volume and speed of app development.

Judgment calls

WSJ: Can AI be used for diagnostics and interventions?

DR. IMEL: I might be the downer here: Building AI to replace current diagnostic practices in mental health is a challenge. Determining whether someone meets criteria for major depressive disorder right now is not like finding a tumor on a CT scan, a task that is expensive, labor-intensive and error-prone, and where AI already appears useful. Depression can be measured reasonably well with a nine-question survey.

DR. MINER: I agree that diagnosis and treatment are nuanced enough that AI still has a long way to go before it could take over those tasks from a human.

That said, AI may be able to track symptoms, such as sleep disturbance, changes in speech or other shifts in behavior. However, it is not yet clear whether such measurements fully capture the nuance, judgment and context of human decision-making. An AI system may capture a person's voice and movement, which could be related to a diagnosis such as major depressive disorder. But without additional context and judgment, important information can be left out. This matters especially when there are cultural differences in how a given behavior relates to a diagnosis.

Ensuring that new technologies are designed with an awareness of cultural differences in language and behavior is essential for building trust among groups that have faced discrimination based on race, age or other identities.

WSJ: Is privacy also a concern?

DR. MINER: We have spent years enacting laws to protect mental-health conversations between people. As apps or other services ask to become part of these conversations, users should be able to expect transparency about how their personal information will be used and shared.

DR. TOROUS: In a previous study, our team identified smartphone apps [used for depression and smoking cessation] that shared data with commercial entities. This is a red flag that the industry needs to pause and change course. Without trust, it is not possible to offer effective mental-health care.

DR. MINER: Trust in AI for health care, and especially for mental health, has not yet been established. Medical professionals have well-developed procedures and policies for earning trust, and AI systems may be operating under different rules. A first step is to determine what matters to patients and clinicians about how information is collected and shared when sensitive issues are disclosed.

