AI and the new perspectives in the clinical-therapeutic approach to psychiatric patients
Article received: 20 June 2025
Article accepted: 29 July 2025
Editorial Group: MEDICHUB MEDIA
DOI: 10.26416/Psih.82.3.2025.11006
Abstract
Artificial intelligence (AI) is one of the most innovative technologies of our time, with a wide range of applications in the medical field. In psychiatry – where subjective experience and the complexity of human factors play a central role – AI opens new avenues for optimizing diagnosis, personalizing treatment and continuously monitoring patients. This presentation aims to highlight the ways in which artificial intelligence can influence psychiatric practice, examining both the clinical and research-based benefits, while also addressing the limitations, risks and ethical responsibilities associated with the use of these emerging technologies. We emphasize the importance of integrating AI tools into psychiatric care, from early diagnosis and individualized treatment plans to relapse prevention and supporting patients in remission. Moreover, we explore how concepts such as the “metaverse” and virtual reality may contribute to the therapeutic management of various psychiatric conditions. The discussion includes the potential impact of these technologies on disorders such as major depression, generalized anxiety disorder, post-traumatic stress disorder (PTSD), schizophrenia, autism spectrum disorder and neurodegenerative diseases. Additionally, we reflect on patients’ perceptions regarding the use of AI in improving diagnostic accuracy, therapeutic outcomes and the maintenance of remission. Looking ahead, further research is needed to assess the reliability and effectiveness of AI applications in psychiatric care. Crucially, as these technologies evolve, preserving the therapeutic doctor-patient relationship remains a fundamental pillar in ensuring the success and humanity of mental health interventions.
Keywords
artificial intelligence, technologies, depression, anxiety, paranoid schizophrenia, metaverse, assistance, ethical implications, digital psychiatry, chatbots
Introduction
Mental disorders represent one of the greatest challenges in global healthcare, affecting millions of people and generating significant costs for both medical systems and society at large. In recent years, advances in artificial intelligence (AI) have opened new opportunities in psychiatry: algorithms that process large volumes of clinical, genetic and behavioral data, as well as data extracted from textual conversations, can contribute to improved diagnostic accuracy, individualized treatment and continuous patient support.
However, integrating AI into psychiatric practice brings significant challenges. On the one hand, digital technologies can facilitate rapid access to mental health services, automate routine tasks and support clinical decision-making by providing objective information.
On the other hand, their use raises important questions regarding data privacy, accountability for decisions and preserving the human therapeutic relationship.
Therapeutic arguments for using AI
Clearer diagnoses and tailored treatments
AI can rapidly and comprehensively analyze medical and clinical information, correlating a patient’s history with laboratory results and relevant genetic data. In this way, clinicians can receive individualized recommendations on the therapeutic options with the highest likelihood of success. For instance, a multicenter university-hospital study demonstrated that a machine-learning algorithm combining clinical and genetic data identified optimal treatments for patients with psychotic disorders with accuracy surpassing traditional methods(1).
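To make this concrete, the sketch below shows how such a predictor might be trained and evaluated; the column names, input file and model choice are hypothetical, not those of the cited study.

```python
# A minimal sketch: cross-validated prediction of treatment response from
# combined clinical and genetic features. Column names, the input file and
# the model choice are hypothetical, not taken from the cited study.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("patients.csv")  # assumed: one row per patient
clinical = ["age", "illness_duration_years", "baseline_symptom_score"]
genetic = ["cyp2d6_poor_metabolizer", "risk_allele_count"]
X = df[clinical + genetic]
y = df["responded_to_treatment"]  # 1 = response/remission, 0 = no response

model = GradientBoostingClassifier(random_state=0)
# Cross-validated AUC gives an honest estimate of discriminative accuracy.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean AUC: {scores.mean():.2f}")
```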
Digital support for therapy
Chatbots grounded in cognitive-behavioral therapy principles allow patients to access guidance and relaxation exercises via phone or computer at any time. These virtual assistants can provide emotional support through guided conversations and encourage reframing of negative thoughts with personalized messages. Simultaneously, virtual reality (VR) applications create safe, controlled environments in which patients can be gradually exposed to anxiety-triggering situations under therapist supervision, without real-world risk(2,3).
Continuous mood monitoring
Smart devices, such as smartphones and advanced wearables, track physical activity, sleep patterns and phone-use behaviors. By analyzing these data, apps can detect subtle mood deviations before the user is even aware of them, facilitating early interventions and treatment adjustments prior to crisis onset(4,5). Passive sensors have been shown to predict mood shifts before manic or depressive episodes(5). Such passive-data methods offer real prevention opportunities, reducing hospitalizations and improving patients’ quality of life.
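A minimal sketch of this kind of passive monitoring, assuming daily sleep and step counts have already been aggregated (the window, threshold and column names are illustrative, not from any cited app):

```python
# A minimal sketch of passive monitoring: flag days whose sleep or activity
# deviates strongly from the patient's own recent baseline. Window size,
# threshold and column names are illustrative, not from any cited app.
import pandas as pd

def flag_deviations(daily: pd.DataFrame, window: int = 14, z_thresh: float = 2.0) -> pd.DataFrame:
    """daily: one row per day, with 'sleep_hours' and 'step_count' columns."""
    flags = pd.DataFrame(index=daily.index)
    for col in ["sleep_hours", "step_count"]:
        # Compare today against the *preceding* two weeks (the shift keeps
        # today's value from contaminating its own baseline).
        baseline = daily[col].shift(1).rolling(window, min_periods=7)
        z = (daily[col] - baseline.mean()) / baseline.std()
        flags[f"{col}_alert"] = z.abs() > z_thresh
    return flags
```

In practice, an alert of this kind would prompt a clinician review rather than trigger any automatic treatment change.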
Increased access and time savings
Online consultations and automated assessment platforms enable patients in remote areas or with mobility challenges to consult specialists without traveling. Concurrently, clinicians can use digital forms and autogenerated reports to streamline initial evaluations, allowing more focus on relational and therapeutic aspects of visits(6,7).
Current practical tools in use
- Woebot offers daily conversational check-ins where patients express emotions and receive mental-exercise suggestions, proving effective in reducing depressive symptoms(8).
- Wysa personalizes support messages based on user responses, creating dialogues tailored to individual needs(9).
- MindLAMP collects movement, sleep and phone-interaction data, delivering an integrated mental-health report(10).
- BiAffect analyzes typing patterns on touchscreen keyboards to detect mood changes, aiding in preventing depressive or manic episodes(11) (see the sketch after this list).
- LAMP is a comprehensive platform where patients log mood, perform cognitive exercises and receive personalized system recommendations(10).
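As referenced in the BiAffect entry above, here is a minimal sketch of the kind of keystroke-dynamics features such tools can compute; the feature set is illustrative, not BiAffect’s actual pipeline.

```python
# A sketch of keystroke-dynamics features of the kind BiAffect-style tools
# compute; this feature set is illustrative, not BiAffect's actual pipeline.
import statistics

def keystroke_features(press_times_ms: list[float], backspaces: int, total_keys: int) -> dict:
    """press_times_ms: timestamps of successive key presses in one typing session."""
    gaps = [b - a for a, b in zip(press_times_ms, press_times_ms[1:])]
    return {
        "median_interkey_ms": statistics.median(gaps),     # psychomotor slowing
        "interkey_variability": statistics.pstdev(gaps),   # erratic typing rhythm
        "backspace_rate": backspaces / max(total_keys, 1), # error correction
    }
```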
Arguments against using AI
Bias and equity
AI models can amplify existing inequalities when trained on datasets that do not reflect population diversity. Many datasets derive from university centers in developed countries and predominantly include young speakers of widely spoken languages, reducing accuracy for underrepresented groups and risking diagnostic or treatment errors in diverse contexts(12).
Lack of transparency and clinician trust
“Black box” algorithms that automatically process hundreds of variables provide no clear explanations for their conclusions. Without interpretability mechanisms, such as explainable AI (XAI) techniques, many clinicians hesitate to rely on AI suggestions for major decisions. Studies indicate that most applications do not summarize the reasoning behind their recommendations(13).
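By way of illustration, SHAP is one widely used XAI technique that decomposes a model’s output into per-feature contributions. The sketch below assumes a fitted tree-based model and feature matrix; the names model, X and y are hypothetical, e.g., from the earlier treatment-response sketch.

```python
# Illustration of one common XAI technique (SHAP): attributing a tree model's
# output to individual input features. `model`, `X` and `y` are hypothetical,
# e.g., the classifier and feature matrix from the earlier sketch.
import shap

model.fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Each prediction is decomposed into additive per-feature contributions,
# so a clinician can see *why* a given patient was scored as high risk.
shap.summary_plot(shap_values, X)
```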
Confidentiality and security
Continuous monitoring of sleep, activity, or text conversations carries significant patient-privacy risks. Even anonymized data can be reidentified, particularly from social networks. Security breaches may expose sensitive medical information, gravely impacting users’ privacy(14).
Changes in clinical workflow and acceptability
Integrating digital tools into practice demands training and adaptation, consuming clinicians’ time. Many specialists report needing extra hours for training and modifying consultation workflows, potentially delaying diagnosis and treatment initiation. There are concerns that technology may diminish the therapeutic relationship built on empathy, clinical intuition and human experience(14).
Legal liability
No clear legal framework currently defines liability for AI-generated errors. In cases of misdiagnosis or inappropriate treatment, the responsibility could fall on the software developer, the clinician, or the institution; this uncertainty deters investment and broad adoption of AI technologies(13,14).
Costs and infrastructure
Implementing advanced technologies – such as VR setups or complex data-analysis systems – requires expensive equipment and licenses. A virtual reality therapy suite can cost tens of thousands of euros, prohibitive for small clinics or resource-limited regions. Dependence on high-speed internet and powerful servers may exclude rural patients or those with poor connections.
Applicability of artificial intelligence in depression and anxiety
During the COVID-19 pandemic, AI chatbots were developed to support mental health, playing a significant role in treating mental disorders, especially depression. These chatbots can assess depression levels, recommend self-help strategies and manage symptom-and-treatment databases. They also contribute to depression diagnosis by asking mood and stress-level questions(15).
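As an illustration of such automated severity assessment, the sketch below scores a PHQ-9-style questionnaire, a common nine-item depression screen; the article does not specify which instrument these chatbots use, so the choice is an assumption.

```python
# Illustrative severity scoring for a PHQ-9-style questionnaire; the article
# does not specify which instrument a chatbot would administer, so this is
# an assumption, using the standard PHQ-9 cutoffs.
def phq9_severity(answers: list[int]) -> str:
    """answers: nine items, each 0 (not at all) .. 3 (nearly every day)."""
    assert len(answers) == 9 and all(0 <= a <= 3 for a in answers)
    total = sum(answers)
    if total <= 4:
        return "minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    if total <= 19:
        return "moderately severe"
    return "severe"

print(phq9_severity([1, 2, 1, 0, 1, 1, 2, 0, 0]))  # total 8 -> "mild"
```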
Wearable electronics with sensors now provide real-time information on health, activity and environment. Integrating these devices with AI has become a major advance in diagnosing and managing depression and anxiety. Wearables such as smart bracelets monitor parameters, including physical activity, heart rate, sleep quality, temperature and blood oxygen, that correlate with depressive states. Studies show that increased physical activity and better sleep reduce depression and anxiety symptoms, whereas low activity and poor sleep exacerbate them. Elevated body temperature may also be associated with depression. After antidepressant treatment, wearables record improvement in these parameters(15).
Applicability of artificial intelligence in paranoid schizophrenia
In schizophrenia research, studies report that AI and machine-learning (ML) algorithms can analyze various components of the disease, including prediction, evaluation of prevention methods, and even treatment response. Data from structural MRI, functional MRI, PET-CT and EEG are processed with machine learning, which automates the detection of changes in data patterns via trained algorithms. In patients with schizophrenia, machine learning has revealed brain changes relative to healthy controls in both white and grey matter, particularly cortical white-matter thinning on structural MRI. Treatment efficacy with antipsychotics is lower in patients showing reduced white-matter volume(16).
ML has also been applied to comparative PET-CT images of patients with schizophrenia, revealing increased radiotracer uptake in zones of neuroinflammation (microglial activation), a process closely linked to schizophrenia(16).
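A schematic of this kind of imaging-based analysis, assuming region-level volumes have already been extracted from the scans (the file names and labels are hypothetical):

```python
# Schematic of imaging-based classification: a linear SVM separating patients
# from controls using region-level white-matter volumes. Feature extraction
# (segmentation of the scans) is assumed done upstream; file names are
# hypothetical.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("regional_wm_volumes.npy")  # shape: (subjects, regions)
y = np.load("labels.npy")               # 1 = schizophrenia, 0 = control
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())
```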
The metaverse’s role in psychiatry
The metaverse – immersive, interoperable virtual environments where users interact via avatars – promises new therapeutic and psychological-support modalities:
- VR exposure for trauma – patients gradually face triggering scenarios (e.g., enclosed spaces or traumatic memories) in VR under therapist guidance, facilitating processing in a safe setting.
- Avatar-based assistants – platforms such as Replika or Woebot could integrate into the metaverse to offer adaptive, interactive sessions combining language, gestures, and sensory stimuli.
- Virtual group therapy – support groups can meet in 3D spaces where participants feel present together, without physical barriers or stigma.
These environments have the potential to boost patient engagement and to customize therapeutic scenarios, though caution is warranted regarding risks of dependency and negative impacts on traditional social bonds(17).
Current and future limitations
Standardization of data collection and quality
Studies currently use highly heterogeneous data sources, from brain imaging and EEG to motion sensors and digital journaling. The lack of well-established protocols hinders cross-center comparisons and reproducibility. Progress requires clear standards for collecting, pre-processing and labelling clinical and behavioral data(12,13).
Interoperability and integration into electronic health records
AI systems often operate in isolation, disconnected from patients’ electronic health records, preventing quick access to complete histories and forcing clinicians to gather information manually. Future digital-health platforms must communicate with each other and with existing clinical applications to ensure seamless information flow.
Regulations and ethical frameworks adapted to technology
While general data-protection frameworks such as the GDPR exist, we lack regulations specific to AI applications in psychiatry. We need audit standards, explainability requirements and risk-assessment criteria defining the responsibilities of developers, medical institutions and specialists in case of errors or breaches(13,14).
Professional training and organizational culture change
Introducing AI into psychiatric clinics demands upgrading clinicians’ digital competencies. Training must cover tool use and report interpretation. Simultaneously, an organizational culture open to innovation is necessary, encouraging interdisciplinary collaboration among psychiatrists, computer scientists and ethicists(13,14).
Recommendations for implementation
- Develop and follow standardized protocols for collecting, processing and labelling medical and behavioral data to facilitate study comparison and reproducibility.
- Integrate AI solutions into patients’ electronic health records, so clinicians have a complete, real-time view.
- Adopt dedicated ethical and regulatory frameworks for AI in psychiatry, including explainability requirements, periodic algorithm audits and clear error-liability rules.
- Invest in ongoing training of medical staff, covering both digital tool use and interpreting automated recommendations while maintaining empathetic patient communication.
- Design financial sustainability and reimbursement models via public-private partnerships and funding schemes that acknowledge AI technologies’ value.
Future research directions
- Conduct multicenter studies with diverse samples to test algorithm performance across sociocultural contexts.
- Undertake longitudinal investigations to assess long-term effects of digital interventions and VR-assisted therapy.
- Research explainability methods and active patient involvement in AI-assisted decision-making.
Impact on health policy
For healthcare system decision-makers:
- Incorporate AI into national mental health strategies, promoting innovation through grants and clear regulations.
- Monitor and evaluate large-scale implementations continuously to refine policies and best practice guidelines.
By applying these recommendations, we can leverage AI’s benefits while minimizing risks, ensuring psychiatric care that is more equitable, efficient and patient-centered.
Discussion
In this section, we explore emerging directions and best practices identified in the literature, focusing on:
- Integrating NLP with machine learning
- Examples of clinical and regulatory best practices.
Integrating NLP with machine learning
Studies combining natural-language processing (NLP) with ML demonstrate how linguistic features can become valuable biomarkers.
- Psychosis prediction: automated analysis of speech coherence and syntactic structure from free interviews predicted psychosis onset with 83% accuracy within two years(18) (a coherence sketch follows this list).
- Online suicide risk detection: neural networks trained on support-forum posts identified suicidal intent with promising accuracy(19).
- NLP presents opportunities to enhance depression detection(20).
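As a rough illustration of the first item above, consecutive-sentence similarity can serve as a coherence proxy; the cited study used a latent-semantic-analysis-based measure, so the embedding model here is a substitution, not its actual pipeline.

```python
# A sketch of one speech-coherence proxy: cosine similarity between embeddings
# of consecutive sentences. The cited study used a latent-semantic-analysis
# measure; substituting modern sentence embeddings here is an assumption.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def mean_coherence(sentences: list[str]) -> float:
    emb = encoder.encode(sentences)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = [float(a @ b) for a, b in zip(emb, emb[1:])]
    # Lower values indicate more tangential, "derailed" speech.
    return float(np.mean(sims))
```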
These examples show how NLP structures qualitative data and, together with ML, provides diagnostic and prognostic tools that complement traditional methods.
Examples of best practices
- Use AI as an adjunct – not a replacement – for clinical judgment. Ensure that the model’s outputs are explainable to the healthcare team.
- AI-based tools can support clinicians by analyzing patient history, vital signs and text data to suggest potential diagnoses or treatment options.
Conclusions
We have shown how AI can add real value to psychiatry by supporting diagnosis, providing digital interventions and offering continuous mental state monitoring. At the same time, we identified major risks: data bias, algorithmic opacity, privacy vulnerabilities, clinical-integration challenges and high costs.
By addressing these topics, we emphasize that the future of digital psychiatry will not be dictated solely by technology but by how we build bridges between algorithms, ethical frameworks and human experience.
Corresponding author: Ana-Maria Popîrda E-mail: popirdaana@gmail.com
Conflict of interest: none declared.
Financial support: none declared.
This work is permanently accessible online free of charge and published under the CC-BY licence.
Bibliography
1. Tay JL, Htun KK, Sim K. Prediction of Clinical Outcomes in Psychotic Disorders Using Artificial Intelligence Methods: A Scoping Review. Brain Sci. 2024;14(9):878.
2. Kothgassner OD, Goreis A, Kafka JX, Van Eickels RL, Plener PL, Felnhofer A. Virtual reality exposure therapy for posttraumatic stress disorder (PTSD): a meta-analysis. Eur J Psychotraumatol. 2019;10(1):1654782.
3. Im CH, Woo M. AI-Powered CBT Chatbots for Depression and Anxiety: A Review of Clinical Efficacy, Therapeutic Mechanisms, and Implementation Features (Preprint). 2025. doi:10.2196/preprints.78340.
4. Lautman Z, Lev-Ari S. The Use of Smart Devices for Mental Health Diagnosis and Care. J Clin Med. 2022;11(18):5359.
5. Ortiz A, Maslej MM, Husain MI, Daskalakis ZJ, Mulsant BH. Apps and gaps in bipolar disorder: A systematic review on electronic monitoring for episode prediction. J Affect Disord. 2021;295:1190-1200.
6. Duncan C, Serafica R, Williams D, Kuron M, Rogne A. Telepsychiatry during the COVID-19 pandemic. Nurse Pract. 2020;45(12):6-9.
7. Walthall H, Schutz S, Snowball J, Vagner R, Fernandez N, Bartram E. Patients’ and clinicians’ experiences of remote consultation? A narrative synthesis. J Adv Nurs. 2022;78(7):1954-1967.
8. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Ment Health. 2017;4(2):e19.
9. Inkster B, Sarda S, Subramanian V. An Empathy-Driven, Conversational Artificial Intelligence Agent (Wysa) for Digital Mental Well-Being: Real-World Data Evaluation Mixed-Methods Study. JMIR Mhealth Uhealth. 2018;6(11):e12106.
10. Vaidyam A, Halamka J, Torous J. Enabling Research and Clinical Use of Patient-Generated Health Data (the mindLAMP Platform): Digital Phenotyping Study. JMIR Mhealth Uhealth. 2022;10(1):e30557.
11. Zulueta J, Piscitello A, Rasic M, et al. Predicting Mood Disturbance Severity with Mobile Phone Keystroke Metadata: A BiAffect Digital Phenotyping Study. J Med Internet Res. 2018;20(7):e241.
12. Alhuwaydi AM. Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and Future Directions – A Narrative Review for a Comprehensive Insight. Risk Manag Healthc Policy. 2024;17:1339-1348.
13. Amann J, Blasimme A, Vayena E, Frey D, Madai VI; Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. 2020;20(1):310.
14. Farhud DD, Zokaei S. Ethical Issues of Artificial Intelligence in Medicine and Healthcare. Iran J Public Health. 2021;50(11):i-v.
15. Zafar F, Fakhare Alam L, Vivas RR, et al. The Role of Artificial Intelligence in Identifying Depression and Anxiety: A Comprehensive Literature Review. Cureus. 2024;16(3):e56472.
16. Lai JW, Ang CKE, Acharya UR, Cheong KH. Schizophrenia: A Survey of Artificial Intelligence Techniques Applied to Detection and Classification. Int J Environ Res Public Health. 2021;18(11):6099.
17. Cerasa A, Gaggioli A, Pioggia G, Riva G. Metaverse in Mental Health: The Beginning of a Long History. Curr Psychiatry Rep. 2024;26(6):294-303.
18. Bedi G, Carrillo F, Cecchi GA, et al. Automated analysis of free speech predicts psychosis onset in high-risk youths. NPJ Schizophr. 2015;1:15030.
19. Arowosegbe A, Oyelade T. Application of Natural Language Processing (NLP) in Detecting and Preventing Suicide Ideation: A Systematic Review. Int J Environ Res Public Health. 2023;20(2):1514.
20. Teferra BG, Rueda A, Pang H, et al. Screening for Depression Using Natural Language Processing: Literature Review. Interact J Med Res. 2024;13:e55067.