TY - JOUR
AU - X. Dai
AU - L. L. Leng
AU - Y. Liu
AU - Y. T. Huang
AU - D. F. K. Wong
AB - BACKGROUND: The rapid advancement of Large Language Models has sparked heated debate over whether Generative Artificial Intelligence (AI) chatbots can serve as “digital therapists” capable of providing therapeutic support. While much of this discussion focuses on AI’s lack of agency, understood as the absence of mental states, consciousness, autonomy, and intentionality, empirical research on users’ real-world experiences remains limited. OBJECTIVE: This study explores how individuals with mental distress experience support from both generative AI chatbots and human psychotherapy in natural and unguided contexts, with a focus on how perceptions of agency shape therapeutic experiences. By drawing on participants’ dual exposure, the study seeks to contribute to the ongoing debate about “AI therapists” by clarifying the role of agency in therapeutic change. METHODS: Sixteen adults who had sought mental health support from both human therapists and ChatGPT participated in semi-structured interviews, during which they shared and compared their experiences with each type of interaction. Transcripts were analyzed using reflexive thematic analysis. RESULTS: Three themes captured participants’ perceptions of ChatGPT relative to human therapists: (1) encouraging open and authentic self-disclosure but limiting deep exploration; (2) the myth of relationship: caring, acceptance, and understanding; (3) fostering therapeutic change: the promise and pitfalls of data-driven solutions. We propose a conceptual model that illustrates how differences in agency status between AI chatbots and human therapists shape the distinct ways they support individuals with mental distress, with agency functioning as both a strength and a limitation for therapeutic engagement. CONCLUSION: Given that agency functions as a double-edged sword in therapeutic interactions, future mental health services should consider integrated care models that combine the non-agential advantages of AI chatbots with the agentic qualities of human therapists. Rather than anthropomorphizing AI chatbots, their non-agential features—such as responsiveness, absence of intentions, objectivity, and disembodiment—should be strategically leveraged to complement specific functions in human-delivered psychotherapy. At the same time, practitioners should maximize the benefits of their agentic qualities while remaining cautious of the risks. The findings should be interpreted with caution as the sample consisted mainly of young, well-educated Chinese participants from a collectivist cultural context, which may limit transferability to other populations, particularly those from individualistic cultures with different mental health literacy levels, stigma patterns, and therapeutic norms. CLINICAL TRIAL NUMBER: Not applicable. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12888-025-07671-w.
AD - Department of Social Work, Hong Kong Baptist University, 15 Baptist University Road, Kowloon Tong, Hong Kong.; Department of Sociology, Zhejiang University, 66 Yuhangtang Road, Hangzhou, Zhejiang, China. linglileng@zju.edu.cn.; Department of Social Work and Social Administration, The University of Hong Kong, Pokfulam, Hong Kong.
AN - 41382092
BT - BMC Psychiatry
C5 - HIT & Telehealth
CP - 1
DA - Dec 12
DO - 10.1186/s12888-025-07671-w
DP - NLM
ET - 20251212
IS - 1
JF - BMC Psychiatry
LA - eng
PY - 2025
SN - 1471-244X
SP - 49
ST - The paradox of agency in psychotherapy: How people with mental distress experience support from generative AI chatbots and human therapists
T1 - The paradox of agency in psychotherapy: How people with mental distress experience support from generative AI chatbots and human therapists
T2 - BMC Psychiatry
TI - The paradox of agency in psychotherapy: How people with mental distress experience support from generative AI chatbots and human therapists
U1 - HIT & Telehealth
U3 - 10.1186/s12888-025-07671-w
VL - 26
VO - 1471-244X
Y1 - 2025
ER -