Leveraging AI to Integrate Behavioral Healthcare

Integrated care has great potential to expand access to behavioral health and whole-person healthcare but is limited by the behavioral health workforce shortage. But what if Artificial Intelligence (AI) could help address this challenge? What if AI could be leveraged to empower the existing workforce? This article describes how AI can be used to support behavioral health integration (BHI), drawing on examples of how pioneering practices are implementing AI, discusses findings from the research conducted to date, and explores some of the potential risks that have been raised regarding AI in BHI.

How Can AI Support BHI?

When thinking about how AI can support BHI, there are two dimensions to consider. First, it’s important to understand what type of AI is being used. There are two categories of AI technologies, both of which have potential utility in the behavioral healthcare space: 

  • Predictive AI analyzes available data to forecast a likely outcome (Schwarze & Boyd, 2025); this technology could be used to scan patient responses and predict their risk for depression or substance use (see the sketch following this list).
  • Generative AI utilizes available data to create new outputs that emulate human-generated ideas with little to no human oversight (Schwarze & Boyd, 2025); this technology could be used to produce clinical documentation of a therapeutic session, based on a transcript or recording. 
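
To make the predictive category concrete, the sketch below scores a completed PHQ-9 depression questionnaire and flags patients for follow-up. It is a deliberately simplified, rule-based stand-in for predictive triage, not any product described in this article: the 10-point cutoff, the item-9 safety flag, and all names are illustrative assumptions, and a real predictive model would be trained, validated, and clinician-reviewed before any flag were acted on.

```python
# Minimal illustrative sketch: flag patients from screening responses.
# Assumptions (not from the article): nine PHQ-9 item scores of 0-3, a
# total-score cutoff of 10 for a positive depression screen, and a separate
# safety flag for any endorsement of item 9 (thoughts of self-harm).
from dataclasses import dataclass


@dataclass
class ScreenResult:
    total: int
    positive_screen: bool  # total score at or above the cutoff
    safety_flag: bool      # item 9 endorsed at any level


def score_phq9(item_scores: list[int], cutoff: int = 10) -> ScreenResult:
    """Score a completed PHQ-9 and flag the patient for follow-up."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine item scores between 0 and 3")
    total = sum(item_scores)
    return ScreenResult(
        total=total,
        positive_screen=total >= cutoff,
        safety_flag=item_scores[8] > 0,  # item 9: thoughts of self-harm
    )


if __name__ == "__main__":
    print(score_phq9([2, 2, 1, 1, 2, 1, 0, 1, 0]))
    # ScreenResult(total=10, positive_screen=True, safety_flag=False)
```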

The second dimension to consider is the type of support that AI can provide. Practices are prioritizing AI integration along two avenues: 

  • Reducing administrative burden through process automation, and
  • Improving the value of behavioral health services through treatment extension. 

Using AI to Reduce Administrative Burden

Primary care clinicians commonly experience burnout, which can be exacerbated by administrative burden. Providers have expressed hope in AI’s ability to lighten workloads. AI integrated within compliance tools can be useful in scanning documentation to ensure validity and prevent kickbacks or denials (National Council for Mental Wellbeing, 2025). Additionally, AI-based notetaking and documentation systems streamline clinical note production by extracting details from a patient’s electronic health record (EHR) (Collaborative Family Healthcare Association, 2025; National Council for Mental Wellbeing, 2025).
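
As a sketch of what generative, documentation-focused support can look like, the snippet below drafts a progress note from a visit transcript. Everything here is hypothetical: call_language_model stands in for whichever generative AI service a practice actually uses, and the SOAP-style prompt is illustrative rather than a documented clinical template. A clinician would still review and edit the draft before it enters the EHR.

```python
# Minimal illustrative sketch of generative note drafting from a visit transcript.
# `call_language_model` is a hypothetical placeholder for whatever generative AI
# service a practice uses; the prompt and section names are illustrative only.

NOTE_PROMPT = """You are drafting a clinical note for a behavioral health visit.
Using only the transcript below, draft a SOAP note (Subjective, Objective,
Assessment, Plan). If information for a section is missing, write "Not discussed"
rather than inventing details.

Transcript:
{transcript}
"""


def draft_progress_note(transcript: str, call_language_model) -> str:
    """Build a prompt from the transcript and return the model's draft note."""
    prompt = NOTE_PROMPT.format(transcript=transcript)
    draft = call_language_model(prompt)  # hypothetical callable supplied by the caller
    return draft  # must be reviewed and edited by a clinician before filing
```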

The Collaborative Family Healthcare Association’s (CFHA’s) Primary Care Behavioral Health (PCBH) Special Interest Group recently hosted a community conversation on AI’s role in PCBH. Similarly, the National Council for Mental Wellbeing has hosted many webinars highlighting AI in behavioral health. In these sessions, current practitioners and administrators provided real-world examples of how they’ve begun incorporating AI into administrative tasks. These examples have promising implications for BHI.

Using AI as a Treatment Extender

AI holds significant potential to facilitate and extend treatment in integrated care settings. AI is already being used to support: 

  • Screening and risk stratification using strategies such as automated scoring of patient questionnaires and EHR mining to flag at-risk patients (Cruz-Gonzalez et al., 2025);
  • Clinical decision making through AI-generated suggestions for diagnosis, treatment selection, or referral prompts for providers (Golden et al., 2024); 
  • Effective, scalable treatment with digital therapies like chatbots and guided cognitive behavioral therapy (CBT) (Wickersham et al., 2022); 
  • Data monitoring and prediction using symptom tracking via smartphone sensors, ecological momentary assessment, and relapse and risk prediction (Cruz-Gonzalez et al., 2025), illustrated in the sketch after this list; and 
  • Clinician workflow augmentation through note drafting, outcome measurement dashboards, and personalized treatment recommendations (Olawade et al., 2024).
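
As one concrete illustration of the data monitoring item above, the sketch below flags a patient whose weekly self-reported symptom scores (for example, PHQ-9 totals collected through an app) have worsened. The three-week window and five-point change threshold are assumptions chosen for illustration, not validated alerting rules drawn from the cited studies.

```python
# Minimal illustrative sketch of symptom monitoring: flag sustained worsening in
# weekly self-reported scores. The window and change threshold are assumptions
# for illustration, not validated clinical alerting rules.

def worsening_alert(weekly_scores: list[int], window: int = 3, min_change: int = 5) -> bool:
    """Return True if the latest score is at least `min_change` points higher
    than the score `window` weeks earlier (higher scores = more symptoms)."""
    if len(weekly_scores) <= window:
        return False  # not enough history to judge a trend
    return weekly_scores[-1] - weekly_scores[-1 - window] >= min_change


if __name__ == "__main__":
    print(worsening_alert([6, 7, 8, 9, 14]))    # True: up 7 points over three weeks
    print(worsening_alert([12, 11, 10, 9, 9]))  # False: scores are improving
```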

Early Use of AI in New Products

A myriad of products are already available. Most of these new products are untested but provide inspiration for what might be possible with AI technology. For example, new AI products can provide personalized treatment recommendations, such as:

  • Suggested CBT-based homework and coping strategies derived from patient data (e.g., Wysa for Clinicians), 
  • Digital CBT modules tailored to presenting concerns (e.g., SilverCloud Health), 
  • Symptom-checkers that guide treatment pathways and triage decisions (e.g., Ada Health), and
  • Treatment planning insights based on therapy session content (e.g., Eleos Health). 

In addition, AI products have been developed to support assessment and provide data visualization tools for clinicians, thereby improving measurement-based care. These include products that automate popular screening tools and display trends on dashboards (e.g., Blueprint Health) as well as products that track various patient outcomes and provide visualization tools (e.g., Owl Insights, Mirach, Forge/NeuroFlow).

Other AI products provide support and coaching, ideally in collaboration with integrated behavioral health providers. For example, Wysa uses an AI-powered coach to deliver CBT exercises and help users challenge negative thoughts. Many of the supports provided with this tool resemble real-life provider-patient interactions between sessions. Another example is Headspace, an app that provides mindfulness/meditation and sleep content to help with anxiety and stress. This kind of support is consistent with “wellness” support that might be offered in integrated care. 

NOTE: The apps mentioned above are created by for-profit companies. They are listed to illustrate the potential of AI products but are not endorsed by the Integration Academy.

Early Research on AI Innovations

Though many existing AI innovations remain untested, the first studies of AI-facilitated behavioral health interventions show promise for their use in integrated care. While more study is needed, this research provides a starting place for further investigation as well as translation into practice.

  • A clinical study at the University of Wisconsin implemented a hospital-wide clinical decision support intervention using real-time natural language processing (NLP) integrated into the hospital’s EHR to screen inpatient adults for opioid use disorder (OUD) and recommend them for addiction specialist consultation.
    • In a pre-post quasi-experimental study, researchers found the OUD screener was as effective as usual care at triaging patients for addiction medicine consultations (Afshar et al., 2025). The screener was also associated with a reduction in readmissions; at an estimated incremental cost of $6,801 per readmission, the screener may have saved almost $109,000 in healthcare expenditures, or roughly 16 avoided readmissions (National Institute on Drug Abuse, 2025).
    • For more information on the University of Wisconsin clinical trial investigating the utility of an AI-based OUD screener, read the National Institute on Drug Abuse’s news release.
  • A University of Michigan study employed a rule-based NLP model to assess clinical notes to identify risky alcohol use in adult patients. The AI model correctly identified 87% of risky alcohol use cases, compared to diagnosis codes alone, which correctly identified only 29% of cases (Vydiswaran et al., 2024).
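
For readers unfamiliar with rule-based NLP, the sketch below shows the general shape of such a screen: keyword rules applied to free-text notes with crude negation handling. It is not the University of Michigan model; the term list, the negation cues, and the 30-character look-back are assumptions made only for illustration, and a deployed screener would be validated against chart review.

```python
# Minimal illustrative sketch of a rule-based NLP screen for risky alcohol use in
# free-text clinical notes. This is NOT the published University of Michigan model;
# the keyword list and negation handling are assumptions for illustration only.
import re

ALCOHOL_TERMS = r"(alcohol|etoh|binge drink\w*|drinks per (day|week)|heavy drinking)"
NEGATIONS = r"\b(denies|no|without|negative for)\b"


def flag_risky_alcohol_use(note_text: str) -> bool:
    """Flag a note if an alcohol-related term appears without a nearby negation."""
    text = note_text.lower()
    for match in re.finditer(ALCOHOL_TERMS, text):
        # Look back a short window for a negation cue (e.g., "denies alcohol use").
        preceding = text[max(0, match.start() - 30):match.start()]
        if not re.search(NEGATIONS, preceding):
            return True
    return False


if __name__ == "__main__":
    print(flag_risky_alcohol_use("Reports binge drinking on weekends."))  # True
    print(flag_risky_alcohol_use("Patient denies alcohol use."))          # False
```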

NOTE: In its Artificial Intelligence Strategy, the U.S. Department of Health and Human Services (HHS) has identified AI’s potential in reducing overdose rates. By expanding the use of AI across its broader ecosystem, HHS aims to “deliver measurable improvements in both population health and individual patient outcomes.”

Risks Associated with AI

Patient safety is of utmost concern when it comes to leveraging AI in BHI. Identifying the risks associated with using AI is critical to providing safe, effective, and ethical care. Though AI tools have found traction as timesaving assets within behavioral health organizations, providers have expressed a wide array of concerns. Users caution that these systems should be implemented as supplemental tools requiring human oversight and quality control (Collaborative Family Healthcare Association, 2025; National Council for Mental Wellbeing, 2025). 

The Utah Office of Artificial Intelligence Policy recently published a guidance letter, ‘Best Practices for the Use of Artificial Intelligence by Mental Health Therapists,’ which presents challenges associated with AI’s use and highlights best practices for mental health practitioners interested in implementing AI tools. 

Only three states (Utah, Illinois, and Nevada) have passed laws regarding AI and mental healthcare, though many more are in the process of drafting or passing legislation. As governance regarding AI in behavioral healthcare is still developing, understanding the potential flaws in an AI tool allows clinicians to use it more safely. 

Accuracy

If input data, or the data that helps the AI learn, contains information that is incorrect, rare, under- or overrepresented in the training data, or out of scope of the AI’s intended use, the resulting output may be flawed. AI may even produce output that is nonsensical or incorrect; this is called a ‘hallucination.’ Hallucinations or flawed outputs from AI integrated into a clinical behavioral health setting have the potential to undermine patient care. Risks remain even when AI delivers accurate and productive results, including the loss of emotional and human connection with the patient and a lack of sensitivity to a patient’s unique background.

Data Privacy

Data privacy remains a key concern among both providers and patients when making decisions regarding AI’s integration into practice. A recent literature review discusses AI’s privacy challenges and lack of systemic oversight. The authors note that opaque and convoluted patient data management practices render sensitive patient information vulnerable to data breaches, and they stress that transparency in data governance frameworks is critical to mitigating exploitation (Williamson & Prybutok, 2024).

Overreliance

Continued use of these technologies can lead to overconfidence in or overreliance on the tool (Schwarze & Boyd, 2025). A recent study evaluated the utility of an AI-based system for mental health treatment or referral recommendations in primary care settings. Specifically, the study examined whether the system’s recommendations influenced physician decision-making. The findings suggest that providers may be more likely to change their clinical decisions when their original conclusions and the AI’s recommendations are misaligned (Ryan et al., 2025). The study’s results spotlight the question: how much reliance should providers place on AI assistance in clinical settings, especially while these technologies are in their infancy?

Chatbots and Patient Safety

Recently, AI-powered chatbots have been heavily scrutinized for their potential to harm. Though chatbots are an important part of the conversation surrounding AI and patient safety concerns – particularly regarding concerns of suicidality – exploration of their role in the behavioral healthcare space is currently limited. The Integration Academy will continue to monitor the usage of chatbots across the behavioral health field as new evidence regarding their risks and benefits emerges.

Mitigating the Risks

The Utah guidance letter outlines best practices for using AI technologies, including:

  • Obtain informed consent,
  • Disclose AI usage sufficiently and promptly,
  • Understand how patient data will be collected and stored safely,
  • Ensure patient data collection and storage processes adhere to HIPAA requirements,
  • Complete a risk assessment weighing the benefits of use against potential harms,
  • Develop competence with AI technology among behavioral health practitioners,
  • Consider each patient and their unique needs when deciding whether to incorporate AI into their care,
  • Establish continuous monitoring and reassessment processes, and
  • Conduct human-led critical evaluation of any AI-generated text or diagnostic and treatment protocols.

References

  1. Afshar, M., Resnik, F., Joyce, C., Oguss, M., Dligach, D., Burnside, E. S., Sullivan, A. G., Churpek, M. M., Patterson, B. W., Salisbury-Afshar, E., Liao, F. J., Goswami, C., Brown, R., & Mundt, M. P. (2025). Clinical implementation of AI-based screening for risk for opioid use disorder in hospitalized adults. Nature Medicine, 31(6), 1863–1872. https://doi.org/10.1038/s41591-025-03603-z 
  2. Collaborative Family Healthcare Association. (2025, February 20). PCBH SIG Meeting: Artificial Intelligence in PCBH [Meeting]. Primary Care Behavioral Health Special Interest Group Meeting, Chapel Hill, NC. https://www.youtube.com/watch?v=bghdxUHj4Rk 
  3. Cruz-Gonzalez, P., He, A. W.-J., Lam, E. P., Ng, I. M. C., Li, M. W., Hou, R., Chan, J. N.-M., Sahni, Y., Vinas Guasch, N., Miller, T., Lau, B. W.-M., & Sánchez Vidaña, D. I. (2025). Artificial intelligence in mental health care: A systematic review of diagnosis, monitoring, and intervention applications. Psychological Medicine, 55, e18. Cambridge Core. https://doi.org/10.1017/S0033291724003295 
  4. Golden, G., Popescu, C., Israel, S., Perlman, K., Armstrong, C., Fratila, R., Tanguay-Sela, M., & Benrimoh, D. (2024). Applying artificial intelligence to clinical decision support in mental health: What have we learned? Health Policy and Technology, 13(2), 100844. https://doi.org/10.1016/j.hlpt.2024.100844 
  5. National Council for Mental Wellbeing. (2025, October 23). AI in Action: Community Behavioral Health Providers Share Lessons Learned [Webinar]. https://www.thenationalcouncil.org/event/ai-in-action-community-behavioral-health/ 
  6. National Institute on Drug Abuse. (2025, April 3). AI screening for opioid use disorder associated with fewer hospital readmissions. https://nida.nih.gov/news-events/news-releases/2025/04/ai-screening-for-opioid-use-disorder-associated-with-fewer-hospital-readmissions 
  7. Olawade, D. B., Wada, O. Z., Odetayo, A., David-Olawade, A. C., Asaolu, F., & Eberhardt, J. (2024). Enhancing mental health with Artificial Intelligence: Current trends and future prospects. Journal of Medicine, Surgery, and Public Health, 3, 100099. https://doi.org/10.1016/j.glmedi.2024.100099 
  8. Ryan, K., Yang, H.-J., Kim, B., & Kim, J. P. (2025). Assessing the impact of AI on physician decision-making for mental health treatment in primary care. Npj Mental Health Research, 4(1), 16. https://doi.org/10.1038/s44184-025-00124-y 
  9. Schwarze, A., & Boyd, Z. (2025). Best Practices for the Use of Artificial Intelligence by Mental Health Therapists [Guidance Letter]. Utah Department of Commerce. https://ai.utah.gov/wp-content/uploads/Best-Practices-Mental-Health-Therapists.pdf 
  10. Vydiswaran, V. G. V., Strayhorn, A., Weber, K., Stevens, H., Mellinger, J., Winder, G. S., & Fernandez, A. C. (2024). Automated‐detection of risky alcohol use prior to surgery using natural language processing. Alcohol: Clinical and Experimental Research, 48(1), 153–163. https://doi.org/10.1111/acer.15222 
  11. Wickersham, A., Barack, T., Cross, L., & Downs, J. (2022). Computerized Cognitive Behavioral Therapy for Treatment of Depression and Anxiety in Adolescents: Systematic Review and Meta-analysis. Journal of Medical Internet Research, 24(4), e29842. https://doi.org/10.2196/29842 
  12. Williamson, S. M., & Prybutok, V. (2024). Balancing Privacy and Progress: A Review of Privacy Challenges, Systemic Oversight, and Patient Perceptions in AI-Driven Healthcare. Applied Sciences, 14(2), 675. https://doi.org/10.3390/app14020675