RESEARCH

Opportunities and vulnerabilities arising from the introduction of AI technologies in mental healthcare

Publication date: 16 April 2025
Article received: 3 February 2025
Article accepted: 24 February 2025
Editorial Group: MEDICHUB MEDIA
DOI: 10.26416/Psih.80.1.2025.10726

Abstract

Integrating artificial intelligence (AI) into psychiatry raises significant ethical, legal and social challenges. Although AI has the potential to improve assessments and predictions in mental health, its use depends on the quality and representativeness of the data, with the risk of algorithmic bias and exacerbation of social inequalities. In forensic psychiatry, AI predictions of deviant behavior require a rigorous ethical and legal framework. Data privacy is a major concern as AI handles sensitive information, and regulations may not keep pace with technological developments. In addition, reliance on AI could affect the direct doctor-patient interaction that is essential in medical and especially psychiatric treatment. The lack of standardization and adequate education for mental health professionals may create barriers to implementing AI technologies. Responsible integration requires a collaborative effort between researchers, clinicians and policy makers, ensuring the use of artificial intelligence for the benefit of patients and maintaining the integrity of the psychiatric care system. Public and professional awareness can lead to a process of integration that ultimately results in increased well-being of individuals and communities.



Keywords
artificial intelligence (AI), mental health, doctor-patient relationship, medical education, stigma, vulnerability, medical sociology


Introduction – general framework

The integration of artificial intelligence (AI) technologies in psychiatry presents a myriad of challenges that need to be addressed to ensure ethical and effective implementation. It is becoming increasingly clear that the central concern will be the ethical implications around the role of AI in mental health assessments and interventions. AI systems – particularly those using machine learning (ML) – can provide objective assessments and predictions of mental health conditions. The objectivity of these information systems will need to be understood in terms of the type, accuracy and amount of data they use; datasets, at least the initial ones, can be viewed as a library whose contents depend on human contributors, but how AI systems will create and use secondary datasets remains an area of uncertainty. The way in which the system of social control of deviant behavior is constructed, including formal social control through legal norms, gives rise to significant ethical dilemmas, especially when AI technology predicts behaviors such as violence or recidivism in forensic psychiatry. The social and legal ramifications of such predictions require careful consideration of the ethical frameworks guiding their use(1). In addition, reliance on artificial intelligence for diagnostic purposes may inadvertently introduce biases if the underlying data are not representative of diverse populations, potentially exacerbating existing disparities in mental healthcare(2,3). A range of social inequalities, reflected in the inequitable stratification of certain categories of patients, can be expected to carry over into the characteristics of the patient data fed into the algorithmic processing of various artificial intelligence tools. Without a careful analysis of how AI technologies are used in psychiatry, the disparities between certain categories of patients could well be exacerbated.
A general vulnerability can be understood on the basis of reduced vigilance on the part of the human end-user (the mental health professional) or on the part of future types of professionals who will oversee the use of data in automated systems. The potential for bias in AI algorithms poses a significant risk, as these systems may inadvertently perpetuate the existing disparities in mental healthcare if not carefully designed and monitored(3).
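The mechanism described above can be made concrete with a small, purely illustrative simulation (a hypothetical Python sketch; the groups, score distributions and "distribution shift" are invented, not drawn from any real clinical data): a single screening cutoff learned from data dominated by one group performs measurably worse for the under-represented group.

```python
import random

random.seed(0)

# Synthetic screening scores: in group A, cases score high; in group B,
# the same condition expresses with lower scores (a distribution shift).
def sample(group, is_case):
    base = 7.0 if group == "A" else 5.0   # group B cases score lower
    return random.gauss(base if is_case else 3.0, 1.0)

# Training data over-represents group A (roughly 90% A, 10% B).
train = [("A" if random.random() < 0.9 else "B", random.random() < 0.5)
         for _ in range(2000)]
scores = [(g, c, sample(g, c)) for g, c in train]

# Pick the single cutoff that maximizes accuracy on the skewed training set.
best_cut, best_acc = None, -1.0
for cut in [x / 10 for x in range(20, 80)]:
    acc = sum((s >= cut) == c for _, c, s in scores) / len(scores)
    if acc > best_acc:
        best_cut, best_acc = cut, acc

# Evaluate the learned cutoff separately on each group.
def group_acc(group, n=2000):
    hits = 0
    for _ in range(n):
        c = random.random() < 0.5
        hits += (sample(group, c) >= best_cut) == c
    return hits / n

acc_a, acc_b = group_acc("A"), group_acc("B")
print(f"cutoff={best_cut:.1f}  accuracy A={acc_a:.2f}  B={acc_b:.2f}")
```

The cutoff is fitted almost entirely to group A, so members of group B are systematically misclassified more often, which is the kind of silent disparity that careful design and monitoring aim to catch.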

Medicine, in general, operates using a wide range of data, much of it in the sphere of personal privacy; from this perspective, the legal architecture treats it as a special category of data. For a whole range of reasons, psychiatry can be seen as carrying a higher level of vulnerability in the collection and use of sensitive personal data; artificial intelligence will change this processing through rapid data absorption, storage in a vast virtual environment, and forms of subsequent management that are hard to foresee. Concerns naturally arise about patient privacy and about how informed consent is to be obtained and its scope extended. Another critical challenge is data privacy and security. Because AI systems often require access to large datasets to operate effectively, the risk of data breach or misuse becomes a pressing concern(2,4). As AI systems become more integrated into mental health services, ensuring robust data safeguards is essential to maintain patient trust and comply with legal standards(2,5). In addition, the rapid pace of technological advancement in artificial intelligence requires continued scrutiny and regulation to protect against misuse and ensure that AI applications are developed and deployed with patient well-being as a priority(2,6). The current processes of lawmaking, and of evaluating the effectiveness of legal norms, appear to lag far behind, at least in the time they take to incorporate new realities into the social regulatory framework; it is becoming increasingly urgent to involve a wide range of professionals from the broader social sciences (from experts in ethics and philosophy of science to medical sociologists and social workers) in modulating the implementation of new artificial intelligence technologies. Ensuring that AI applications comply with ethical standards and legal regulations is essential to maintain patient trust and uphold the integrity of mental healthcare(6,7).

The current intrinsic characteristics of AI technologies stem from their evolving nature, which creates a need for continuous research and development to address new challenges. The field of psychiatry needs to engage in an ongoing dialogue about the implications of artificial intelligence: not only the potential benefits, but also the downsides that are difficult to estimate and carry interconnected consequences. Collaborative efforts between researchers, clinicians and policy makers are essential to ensure that AI is harnessed responsibly and effectively in mental healthcare(3,8). This collaborative approach can help to identify best practices for integrating artificial intelligence into psychiatric settings, and to accommodate new types of professionals and new kinds of interventions, while addressing emerging ethical and practical challenges.

If we focus on the practical aspects, those concerning direct interaction with patients, one of the most important challenges is the use of artificial intelligence in clinical settings; the ethical implications are closely tied to the realities of the social subsystem in which the medical act is performed. Ethical dilemmas stem from the potential for AI to replace direct human interaction; the human doctor-patient relationship is at the center of any form of medical intervention. More than in any other specialty, direct interaction with health professionals is crucial in psychiatric care. The more psychiatric interventions rely on algorithms for diagnosis and treatment, the more the therapeutic alliance between patients and practitioners, a cornerstone of effective mental health treatment, may be undermined(2,9).

One cannot lose sight of the potential for artificial intelligence to misinterpret or misdiagnose mental health conditions, which adds another level of complexity. The nuances of psychiatric disorders often require a deep understanding of human emotions and social contexts, which AI may not fully comprehend. These considerations point in a direction in which mental healthcare cannot be divorced from direct human interaction. Artificial intelligence can clearly analyze vast datasets to identify patterns; at the same time, it is difficult to estimate the extent to which these technologies may overlook the subtleties of individual patient experiences that are vital for accurate diagnosis and treatment(10,11). Here we may be encountering a vulnerability of the new technologies, a limitation that raises concerns about the reliability of AI-based assessments and about the potential consequences of misdiagnosis, which may have profound implications for patient care(12). The introduction of new assistive and even service-delivery technologies in the mental health sphere is driving a need for adequate training and education for mental health professionals in AI technologies. It is only natural that disparities and segregations also exist in this kind of medical education, depending on professionals’ age, level of digital competences and skills, geographical location and practice environment. A large proportion of practitioners may lack familiarity with AI tools, which may hinder their ability to integrate these technologies into their practice effectively. One survey indicated that only a small percentage of clinicians felt adequately informed about AI, highlighting a significant knowledge gap that needs to be addressed through improved educational initiatives(13,14).
The professional community may well prioritize the development of curricula that incorporate AI training to prepare future psychiatrists for an increasingly technology-influenced landscape(15,16). In addition to other educational gaps between various health systems, this newly emerging one may lead to a hesitancy to adopt AI tools, further complicating the integration of these technologies into global psychiatric practice.

The conceptual frameworks in which medicine has operated in recent decades are based on scientific evidence embedded in a series of operational structures (guidelines and protocols) whose fundamental role is standardization and uncertainty reduction. The current picture of artificial intelligence tools in psychiatry is characterized by a lack of standardized frameworks for assessing their effectiveness and reliability. The absence of rigorous evaluation processes can lead to a “Wild West” scenario in which AI applications are adopted without sufficient evidence of their efficacy or safety(17). Establishing a comprehensive framework that cultivates trust and transparency in AI applications is essential for boosting confidence among practitioners and patients alike(17). In addition, the integration of artificial intelligence into clinical practice must be accompanied by adequate training for mental health professionals to ensure that they can effectively interpret the insights generated by AI and incorporate them appropriately into patient care(9,18). The steps that the introduction of any new form of treatment currently undergoes, brought together as an accepted logical framework and regarded as the settled form of scientific acquisition in a field, should be replicated in the introduction of AI-assisted therapies. We can envisage a logical series of steps to be followed before a new type of intervention is accepted: preliminary validation on a scientific basis, transparent procedures towards professionals and patients, independent monitoring and control bodies and, finally, the adaptation of continuing medical education for professionals. It seems natural that this logical framework should be preserved, but the manner of analysis and implementation, and the emergence of additional stages, remain difficult to predict.

The development of AI systems of care in mental healthcare starts from a set of promises that are socially perceived as all-encompassing: improved diagnosis with greater accuracy and speed, personalized treatment, and increased accessibility. These promises risk blinding us socially and overshadowing substantial challenges that need to be carefully addressed. Addressing ethical concerns, ensuring the confidentiality of data, managing the potential for misdiagnosis, establishing assessment frameworks, and providing training for practitioners are essential steps in harnessing the potential of AI in psychiatry; all of these must serve the best interests of patients, the integrity of the mental healthcare system, and the well-being of communities and societies as a whole. As the field continues to evolve, it is imperative that the constituent parts of the social system work together to address these challenges and to ensure that artificial intelligence serves as a tool to enhance, rather than undermine, the quality of psychiatric care.

Research objectives and methods

The overall objective of this paper was to provide an overview of the possible impact of the introduction of artificial intelligence technologies in the field of psychiatry in relation to current knowledge. Starting from the theoretical foundations of qualitative (content) research, common to the social sciences, we conducted a literature search on a number of topics for which we sought to identify potential advantages and vulnerabilities arising from the introduction of the new technologies: psychiatric assessment and diagnosis, mental health treatment and therapeutic interventions, potential changes from a public health perspective, new care professions in relation to AI technologies, expected societal changes, and the modulation of stigmatization related to mental illness and disorders. The search strategy consisted of querying the Scopus platform (https://www.scopus.com/) using combinations of the terms psychiatry, future and AI, together with the descriptors specific to the topics analyzed. In this way, we searched for papers relevant to each proposed theme; the 20 most recent publications for each theme were selected. A total of 66 papers were selected for qualitative analysis based on the formulated objectives; some papers, given their content, were included under several of the analyzed topics.
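The term combinations described above can be sketched programmatically (a hypothetical Python illustration; the theme descriptors and the exact boolean syntax are assumptions for demonstration, not the authors' verbatim search strings):

```python
# Base terms common to every query, per the search strategy above.
BASE_TERMS = ["psychiatry", "future", "AI"]

# Example theme descriptors (illustrative only, not the original list).
THEMES = {
    "assessment and diagnosis": ["assessment", "diagnosis"],
    "treatment and interventions": ["treatment", "intervention"],
    "public health": ["public health"],
    "stigma": ["stigma"],
}

def scopus_query(theme_terms):
    """Combine the base terms (AND) with one theme's descriptors (OR)."""
    base = " AND ".join(BASE_TERMS)
    extra = " OR ".join(f'"{t}"' for t in theme_terms)
    return f"{base} AND ({extra})"

for theme, terms in THEMES.items():
    print(theme, "->", scopus_query(terms))
```

Each generated string corresponds to one themed query, from which the 20 most recent hits would be retained for content analysis.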

A research methodology with a higher degree of systematization may be necessary; nevertheless, this approach outlines an initial picture, a starting point for further investigations through which we can better understand how the new information technologies will be integrated into the field of mental healthcare.

Results and discussion

Psychiatric assessment and diagnosis in the context of the introduction of AI technologies

The incorporation of artificial intelligence into psychiatric assessment and diagnosis appears imminent, and to some extent is already underway; the application of these technologies on a larger scale will significantly transform traditional methods of assessing psychiatric patients. AI technologies, particularly those using natural language processing and those centered on machine learning, offer innovative approaches to understanding and diagnosing mental health-related conditions. One of the main ways in which AI is expected to change psychiatric assessment is through the analysis of language patterns. Research already provides clues that artificial intelligence can effectively assess psychiatric constructs by analyzing key elements of language, which can provide insights into an individual’s mental state that may not be captured by conventional assessment methods(19,20). For example, artificial intelligence systems can analyze social network language or other written communications to detect signs of mental health problems, possibly enabling earlier intervention(19). This is an advantage that is not without its vulnerabilities.
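The underlying idea can be reduced to a deliberately minimal sketch (hypothetical Python; real systems use trained NLP models, and the cue-word list here is invented purely for illustration): score short texts against a small lexicon associated with low mood.

```python
# Illustrative cue-word lexicon; not a validated clinical instrument.
LOW_MOOD_WORDS = {"hopeless", "worthless", "exhausted", "alone", "empty"}

def flag_text(text, threshold=2):
    """Return True if the text contains `threshold` or more cue words."""
    tokens = {w.strip(".,!?").lower() for w in text.split()}
    return len(tokens & LOW_MOOD_WORDS) >= threshold

posts = [
    "Feeling hopeless and alone again, so exhausted.",
    "Great hike today, the weather was perfect!",
]
print([flag_text(p) for p in posts])
```

Even this toy version makes the vulnerability visible: the output depends entirely on which words the lexicon's authors chose, a miniature of the data-dependence problem discussed above.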

Moreover, AI tools such as chatbots appear to show promise in suicide risk assessment and in general mental health assessment; all of this relies on the permanent monitoring of the communication channels that individuals use. Studies are beginning to demonstrate the potential of artificial intelligence, including models such as ChatGPT, to contribute significantly to mental health assessments by providing real-time evaluations and support(6,21). However, direct human interaction seems to remain a necessity, at least for some cases or pathologies. These AI systems can rapidly process large amounts of data, providing clinicians with additional information that can complement traditional assessment methods; from this perspective, we are talking about assistance to the therapeutic intervention rather than a takeover of its role, a somewhat reassuring situation for the more conservative. Still, although AI can improve the assessment process, it is essential to recognize its limitations and the need to integrate human expertise to ensure accurate interpretation of AI-generated data(3).

The move toward AI-assisted psychiatric assessments also addresses some of the challenges associated with conventional psychiatric assessments, which often rely heavily on self-reported questionnaires and direct observations by clinicians. These traditional methods can be subjective and may not always provide a comprehensive picture of a patient’s mental health(9,22). Artificial intelligence can help to standardize assessments, reduce the uncertainties derived from human subjectivity, reduce bias, and improve diagnostic accuracy by using data-driven approaches that take into account a wider range of factors, including behavioral models and physiological data(9). Wearable technologies and mobile apps can continuously monitor patients’ behaviors and physiological responses, allowing for a more nuanced understanding of their mental health over time(22).

The next step in AI technology-mediated processes is the ability of information systems to analyze complex data sets; at this stage, predictive models can be developed that can play a central role in risk stratification. The resulting predictive models may identify individuals at risk of specific mental health conditions or those prone to deviant actions; the implications for other social processes, such as stigmatization, remain to be revealed. Looking only at the benefits brought by AI technologies, these models can inform and modulate preventive strategies and personalized treatment plans, thereby increasing the overall effectiveness of psychiatric care(23,24). Integrating AI in this way not only facilitates more accurate diagnoses, but also promotes a shift towards precision psychiatry, in which treatment is tailored to the individual based on their unique data profile(25). Patient-centered medicine has remained a desideratum of modernity to be confirmed, or not, by the introduction of new technologies.
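The risk-stratification step described above can be sketched as a simple scoring model (hypothetical Python; the factors, weights and tier thresholds are invented for illustration and are not clinically validated):

```python
import math

# Invented risk factors and weights, purely for demonstration.
WEIGHTS = {"prior_episodes": 0.8, "sleep_disruption": 0.5, "isolation": 0.6}
BIAS = -2.0

def risk_probability(patient):
    """Map a patient's factor values to a probability via a logistic link."""
    z = BIAS + sum(WEIGHTS[k] * patient.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def tier(p):
    """Bucket a probability into a coarse risk tier."""
    return "high" if p >= 0.7 else "moderate" if p >= 0.3 else "low"

patient = {"prior_episodes": 3, "sleep_disruption": 1, "isolation": 1}
print(tier(risk_probability(patient)))
```

The tier labels make the stigmatization concern concrete: once individuals are bucketed as "high risk" by an opaque score, the label itself can acquire social consequences independent of its accuracy.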

However, AI-assisted assessments are not without their challenges; the transition to these forms of psychiatric care assistance is likely to be the most difficult part. Concerns about data privacy, the potential for bias in mathematical algorithms, and the need for transparency in AI decision-making processes are critical issues that need to be addressed(26,27). This new way of using AI systems may lead to a form of dependency, similar to that observed with the advent of any device that makes work easier. A further need arises in the sphere of medical education, one that will go beyond the formal framework we are used to today. It is becoming clear that physicians need to receive adequate training to effectively interpret the insights generated by AI and to maintain the therapeutic relationship with patients(3), without which the effectiveness of any form of intervention plummets.

The integration of artificial intelligence into psychiatric assessment and diagnosis can lead to significant progress in mental healthcare. By improving the accuracy and efficiency of assessments, AI technologies can provide valuable support to clinicians, can significantly shorten the time to a more accurate diagnosis, and can ultimately improve outcomes for patients overall, or at least for certain categories of patients. Overcoming the ethical and practical challenges associated with this integration, so that AI serves as a complement to patient well-being, will be a multifaceted task; traditional psychiatric practices will be amended in various ways.

Potential changes in psychiatric treatment and interventions mediated by AI technologies

The introduction of artificial intelligence technology in psychiatry is expected to significantly alter treatment and therapeutic interventions, increasing the accuracy and personalization of care. AI’s ability to analyze large datasets and identify patterns may lead to more accurate diagnoses and personalized treatment plans, addressing the complexity of mental health disorders that often require individualized approaches(28,29). In general, traditional psychiatric treatments have been limited by diagnostic uncertainties and suboptimal responses to therapies. It is envisaged that AI may facilitate a shift towards precision psychiatry, in which interventions are personalized based on a patient’s unique clinical, genetic, and lifestyle factors(30,31).

Among the most promising applications of artificial intelligence in psychiatric treatments is its potential role in predicting treatment outcomes. Machine learning algorithms can analyze historical treatment data, in broad population contexts, to predict how individual patients might respond to various interventions, thereby guiding clinicians in selecting the most effective therapeutic options(32,33). This predictive capacity not only improves treatment efficacy but also reduces the trial-and-error approach that often characterizes psychiatric care(34). Behavioral and expressed language analyses can help to identify the most appropriate psychotherapeutic modalities for patients with depression or anxiety, by analyzing their responses to previous interventions(35).
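The prediction of treatment outcomes from historical data can be illustrated with a minimal nearest-neighbour sketch (hypothetical Python; the records, features and treatments are synthetic, and a real system would use far richer data and validated models):

```python
# Synthetic historical records: (age, baseline_severity, treatment, responded).
HISTORY = [
    (25, 6, "CBT", True), (31, 7, "CBT", True), (58, 8, "CBT", False),
    (27, 5, "medication", True), (60, 7, "medication", True),
    (22, 9, "medication", False),
]

def predicted_response(age, severity, treatment, k=2):
    """Mean response rate among the k most similar past patients
    who received the given treatment."""
    pool = [(a, s, r) for a, s, t, r in HISTORY if t == treatment]
    pool.sort(key=lambda rec: (rec[0] - age) ** 2 + (rec[1] - severity) ** 2)
    nearest = pool[:k]
    return sum(r for _, _, r in nearest) / len(nearest)

# Compare expected response rates across options for one new patient.
for tx in ("CBT", "medication"):
    print(tx, predicted_response(age=28, severity=6, treatment=tx))
```

Ranking the candidate interventions by predicted response is precisely the alternative to trial-and-error prescribing that the paragraph above describes, with all the caveats about data quality that apply.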

There are growing indications that AI technologies are increasingly being integrated into therapeutic interventions themselves. Chatbots and virtual therapists, powered by natural language processing, are beginning to provide immediate support as well as coping strategies for patients, particularly those who may have difficulty accessing traditional mental health services(6). From an immediate, utilitarian perspective, we may be looking at a dramatic increase in individuals’ access to mental health services. AI-powered platforms can provide cues from the sphere of psychoeducation, monitor patient progress, and deliver real-time interventions, thereby increasing patient engagement and adherence to treatment plans(36). The use of artificial intelligence in this context also allows for continuous monitoring of patients’ emotional and behavioral changes, enabling timely adjustments to treatment strategies(29,37). Compared to traditional interventions, which involve traveling to a therapist at intervals of days, AI-assisted therapies promise increased adaptability and rapid response to patients’ requests and needs. The effects of the lack of direct human interaction remain to be seen.

Similar to genetic phenotyping, which differentially and specifically describes certain cell populations, AI technologies are poised to realize the digital phenotyping of individuals; it remains for society and its structures to direct the process towards improving therapeutic interventions. One such process involves using data collected from smartphones and other wearable devices to assess mental health in real time(22,38). This approach can enable a more dynamic understanding of a patient’s mental state, facilitating personalized interventions that can be tailored on the basis of continuous data analysis. The constant exchange of data inherently raises issues of privacy and of the scope of the informed consent signed by the individual when first accessing a platform or downloading an app. People’s interaction with mobile devices can provide robust cues to mood or behavioral fluctuations and, in turn, trigger specific therapeutic responses, creating a more responsive treatment environment(38). At least at first glance, the “therapist” assisted by AI technologies promises to be much more responsive to patients’ fluctuating needs.
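Digital phenotyping can be illustrated with a toy feature extractor (hypothetical Python; the event data and the "late-night activity" cue are invented assumptions, not established clinical markers):

```python
from statistics import mean

def phenotype_features(event_hours):
    """Summarize one day of phone-interaction timestamps (hours, 0-23)."""
    late_night = [h for h in event_hours if 1 <= h < 5]
    return {
        "n_events": len(event_hours),
        "late_night_share": len(late_night) / len(event_hours),
        "mean_hour": mean(event_hours),
    }

# Two synthetic days: a typical pattern vs. one with disrupted sleep.
baseline = phenotype_features([8, 9, 12, 13, 18, 20, 22])
disrupted = phenotype_features([2, 3, 3, 4, 14, 23])

# A rising late-night share could prompt a closer look by the care team.
print(baseline["late_night_share"], disrupted["late_night_share"])
```

Note that even these innocuous-looking features are derived from continuous surveillance of device use, which is exactly why the consent and privacy questions raised above are unavoidable.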

It seems increasingly certain that the integration of AI into psychiatric treatment will raise ethical issues and concerns; there will be heated debates and major changes in many components of the mental healthcare system. The issues we can already intuit lie in the area of privacy and the potential for bias in AI algorithms(36,39); numerous others will need to be analyzed from an ethical perspective. Ensuring that AI systems are designed with ethical considerations in mind is crucial for maintaining patient trust and protecting sensitive information. Over the past decades, it has become clear that medical and research ethics have come to complement the work of a highly trained professional body. Medicine has become such a highly technologized field that we can recognize a degree of dependence of doctors on technology. This will become more pronounced with the introduction of AI technologies; therefore, mental health professionals will need to be adequately trained in order to effectively interpret the knowledge generated by AI and to maintain the essential human element in therapeutic relationships(6,30).

It is expected that the introduction of AI technology in psychiatry will lead to transformations in treatment and therapeutic interventions, all under the promise of more precise, personalized and responsive care. Through predictive analytics, AI-powered therapeutic platforms and digital phenotyping will assist clinicians with the goal of improving treatment outcomes and increasing patient engagement. However, addressing the ethical implications and ensuring adequate training for practitioners will be vital to realizing the full potential of artificial intelligence in mental healthcare.

Possible public health developments

The perception of mental illness from the perspective of social problems derived from public health issues will increasingly be shaped by the integration of artificial intelligence technologies into mental healthcare; we can only imagine this by analogy to the way in which each stage or tool of information technology has permeated the life of every individual or institution. AI has the potential to improve early detection, improve treatment outcomes, and facilitate public health interventions, thereby transforming the way mental health problems are addressed at the population level. Health monitoring and regulatory institutions, global or national, will need to use faster and more comprehensive means of societal-level assessment.

One of the most significant contributions of artificial intelligence could come from its ability to facilitate the early detection of mental health problems. By analyzing large datasets from a variety of sources, including electronic health records, people’s posts, texts and comments on social networks, and information from end-users’ wearable devices, AI can identify patterns that indicate mental health deterioration before it develops into more severe or clinically significant conditions(2,34). For example, machine learning algorithms can process data to detect subtle changes in behavior or mood that may signal the onset of depression or anxiety, enabling timely interventions(29,40). This proactive approach not only aids individual patient care, but also contributes to public health by reducing the overall burden of mental health disorders in the community(41,42).
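The early-warning logic can be sketched as a simple drift detector (hypothetical Python; the daily mood scores, window sizes and threshold are illustrative choices, not validated parameters):

```python
def warning(scores, short=3, drop=2.0):
    """True if the mean of the last `short` days falls at least `drop`
    points below the mean of all earlier days (the person's baseline)."""
    if len(scores) <= short:
        return False
    baseline = sum(scores[:-short]) / (len(scores) - short)
    recent = sum(scores[-short:]) / short
    return baseline - recent >= drop

# Synthetic daily self-ratings (higher = better mood).
stable  = [7, 6, 7, 7, 6, 7, 6, 7]
decline = [7, 6, 7, 7, 6, 4, 3, 3]
print(warning(stable), warning(decline))
```

Scaled up to population-level data streams, this is the kind of signal that could trigger the timely interventions described above, before a deterioration becomes clinically severe.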

AI technologies can improve the personalization of treatment plans, which is crucial for effective, patient-centered mental healthcare. By leveraging data about individual patient characteristics, including genetic, psychological and social factors, AI can recommend personalized interventions that are more likely to be successful for specific individuals(18,34). Personalized approaches are central to all domains of care, but may be all the more important in psychiatry, where responses to treatment can vary significantly between patients; the determinants of mental illness, including the social ones, are as important as they are difficult to identify and quantify. The ability of AI to analyze complex interactions between different factors may lead to more efficient and effective treatment strategies, ultimately improving public health outcomes(10,29). Tailored, unified and rapid intervention strategies that reach the largest possible number of individuals appear capable of having an impact from a public health perspective.

The way in which artificial intelligence can make its presence felt in the social information field, and in raising public awareness of a particular social or medical problem, may give AI technologies a vital role in public health education and awareness campaigns. By using AI-based platforms, mental health organizations can disseminate information about mental health problems, promote resilience, and encourage help-seeking behaviors among populations(2,41). The acceptance of an illness-related situation, concrete help-seeking actions, and compliance with medical interventions have posed challenges since the early emergence of medicine as a science. For example, AI can analyze community health data to identify at-risk populations and tailor educational materials to their specific needs, thereby fostering a better-informed public(36,40). This type of intervention not only stratifies risk within a population, but can also reveal and reinforce, in the public consciousness, the existence of social determinants of disease. At the same time, this targeted approach can help reduce the stigma associated with mental health problems and promote a culture of openness and support.

The next stage of any medical intervention is surveillance; AI can help monitor and evaluate the effectiveness of public health interventions targeting mental health. Increased capacity for data analysis allows us to think about monitoring beyond the individual or small group of patients (as we are used to in traditional medicine). By continuously analyzing data from various sources, AI can provide real-time feedback on the impact of specific programs or policies, allowing health authorities to adjust strategies as needed(41,42); interventions could be adapted over the monitored period, with the overall intervention plan undergoing changes from the original architecture according to the evolving dynamics of the process. This dynamic evaluation process is essential to ensure that mental health initiatives respond to the evolving needs of the community and can lead to a more efficient allocation of resources(34,43).

The advantages of using AI in mental health interventions should not blind us to its social risks; medical and information technology professionals should always keep in view a number of ethical concerns, particularly with regard to data privacy and the potential for algorithmic bias(10,41). Ensuring that AI systems are designed with ethical considerations in mind is essential for maintaining public trust and protecting sensitive information; a logical consequence is that a body of experts in the ethics of using AI technologies in medicine, and even more so in psychiatry, will need to be trained. In addition, interdisciplinary collaboration is needed to develop AI-based approaches that are equitable and accessible to all segments of the population(2,41); a whole range of new professions, derived from traditional caring, technical and engineering professions, seems to be on the verge of being born.

At the societal level, the perception of mental illness as a public health problem can be profoundly altered or merely modulated by the wide range of possible AI-assisted interventions; from how certain forms of distress or dysfunction will be recognized or accepted to the emergence of new entities of psychiatric or social pathology, all of these scenarios are possible. Through early detection, individualized treatment, public health education, and ongoing monitoring, artificial intelligence has the potential to improve mental healthcare at both the individual and community levels. However, addressing ethical concerns and ensuring equitable access to AI technologies will be essential to maximize their benefits in public health. Medical professionals, as well as the public and communities, will go through a new awareness process and will need to strive for an integration that benefits individuals while limiting possible negative influences.

A new professionalization of mental healthcare

The introduction of artificial intelligence technologies across the broad spectrum of the mental healthcare system is likely to give rise to new professions and roles in the field; the need may also arise as a consequence of the ongoing process of overspecialization in medicine. The increasing complexity of clinical work is accompanied by the use of new technological resources, deployed at a speed unlikely to be matched by traditional medical training. As artificial intelligence continues to evolve and become integrated into various aspects of mental health services, the landscape of mental healthcare will change, requiring professionals who can effectively navigate this new terrain; professions that effectively bridge the gap between medicine and information technology seem to be increasingly needed.

One of the emerging roles we can envision is that of an AI specialist in mental health, who would focus on implementing and managing AI tools in clinical settings. This specialist would be responsible for ensuring that artificial intelligence applications are used ethically and effectively, bridging the gap between technology and clinical practice. It is very likely that, in addition to their operational, technical role, these specialists would also provide primary oversight of how these new tools are used in the clinical sphere. Their expertise would be crucial in interpreting the insights generated by AI and integrating them into treatment plans, thereby improving patient care(3). As AI tools become increasingly prevalent, the demand for professionals who can oversee these technologies and ensure their proper use will increase significantly.

Looking at how IT platforms (including social networking platforms) already deliver information and support services, we can imagine the emergence of a new role: the digital mental health coach. There are already more or less regulated “professions” providing support for a variety of mental health problems; these “coaches” could focus on providing AI-assisted interventions and support through digital platforms, though how they will be integrated into the health system remains to be seen. These coaches would use AI-based applications to provide real-time assistance to patients, helping them navigate their mental health challenges while offering personalized feedback based on artificial intelligence analytics(44). This role would be particularly important in increasing access to mental healthcare, especially in underserved areas where traditional mental health services may be limited(45). The scope of responsibility of these new professions remains to be determined, as does how to connect them appropriately with medical services so that specialized help is not delayed.

The current literature identifies a number of ethical vulnerabilities that society’s response, through classical forms of social control, will seek to address. From this angle, another potential profession is that of an ethics consultant on the use of AI in mental health. As artificial intelligence technologies raise ethical concerns about privacy, bias arising from how data are used, and the potential for dehumanizing care, there will be a growing need for professionals who can address these issues in a systematic, predictable and policy-generating way. These consultants could work with mental health organizations to develop ethical guidelines for AI use, ensuring that patients’ rights are protected and that AI applications are used responsibly(36). This role will be critical in fostering trust between patients and AI systems, which is essential for the successful integration of technology in mental healthcare. The integration of new technologies into existing social systems of care remains closely linked to the trust of end-users and professionals, and this determinant will need to be carefully tracked in order to leverage technology to achieve the well-being of every vulnerable individual.

Turning to the technical side, it is easy to imagine the need for data analysts specializing in mental health, whose role may become increasingly important. These professionals will analyze data generated by AI systems to identify general trends in various populations, refine AI-generated treatment protocols, and enhance overall quality of care. Their work will involve collaborating with clinicians to interpret data insights and apply them to clinical practice, thereby contributing to evidence-based decision making(46); the data obtained are likely to differ especially in quantity, and processing them involves knowledge and skills beyond the usual scope of the traditional medical professional. As mental healthcare becomes more data-driven, the need for skilled analysts who understand both mental health and data science will increase.

Medical education cannot be left behind, and profound changes are needed; integrating AI into mental healthcare will require changes in educational programs for future mental health professionals. Training curricula will need to incorporate AI literacy (an initiatory step, similar to the initial digitization at the advent of the internet), equipping students with the skills to work alongside AI technologies effectively(16). This shift will not only prepare future clinicians to use artificial intelligence tools, but will also foster the development of interdisciplinary professionals who can navigate both the technological and therapeutic aspects of mental healthcare.

The emergence of AI technologies in mental health care is likely to create new professions that focus on the ethical, practical and analytic dimensions of AI integration. Roles such as AI mental health specialists, digital mental health coaches, AI ethics consultants, and data analysts will be essential to ensure that artificial intelligence enhances rather than weakens the quality of mental healthcare; the spectrum of new professions is likely to be much broader than we might imagine at the dawn of this new era, and awareness of these trends can be useful and should probably follow the entire trajectory of evolution. As the field evolves, educational institutions will also play a crucial role in preparing the next generation of mental health professionals to thrive in this new landscape and to continue to work towards social well-being.

Societal changes in the context of AI support for mental disorders

The integration of artificial intelligence technologies in mental healthcare is expected to bring significant societal changes, reshaping the way mental health disorders are perceived, treated and managed; even current classifications are likely to change, as future categorizations, and the designation of nosological entities as diseases, disorders or dysfunctions, may shift once access to big data alters our present perspectives. As artificial intelligence becomes more prevalent in this field, several key transformations can be anticipated.

Increased accessibility to mental health services may emerge as one of the most profound changes. Artificial intelligence technologies can facilitate the delivery of mental healthcare through digital platforms, making services more available to people who might otherwise face barriers to access, such as geographic limitations or stigma associated with seeking help(2,47). For example, AI-powered apps can provide immediate assistance and resources, allowing users to access a wider range of mental health services. This shift may lead to a narrowing of the treatment gap, particularly in underserved populations, thereby promoting mental wellbeing more broadly(41,48). The increasing use of diverse computing platforms, the growing availability of internet access, and the widespread use of mobile devices support this assumption.

Increased accessibility, reaching deeper and deeper layers of society, is likely to lead to the normalization of discussions about mental health as AI tools become more integrated into everyday life. The use of artificial intelligence in mental health, such as chatbots and mobile apps, can help destigmatize mental health issues by encouraging open conversations and providing psychoeducation(7,41). It is difficult to judge whether these developments will unfold naturally or whether professionals will need to make a concerted effort to steer their use towards stigma reduction, but the way previous technologies have been integrated gives hope in this area. As these technologies become mainstream, societal attitudes towards mental health may change, fostering an environment in which individuals feel more comfortable discussing their mental health challenges and seeking assistance without fear of judgment(4).

The personalization of mental healthcare may represent another opportunity to improve psychiatric patient care through artificial intelligence, an opportunity whose effects may translate into more effective treatment outcomes. AI systems can analyze individual data to tailor interventions that fit specific needs, preferences and circumstances(49,50). The level of personalization that arises from rapid analysis of rich data can lead to greater patient engagement and higher adherence to treatment plans, ultimately improving mental health outcomes. Consequently, society may witness a shift towards a more proactive approach to mental healthcare, in which individuals are empowered to take control of their mental health through personalized strategies(14). This personalization model of care can be conceptualized within the framework of patient-centered care.

Building on the justified anticipation of new care professions and new roles in the mental health sector, as AI technologies evolve there will be a growing need for professionals who can manage, interpret and integrate these tools into clinical practice(13,51). Roles such as mental health AI specialists, digital mental health coaches and AI ethics consultants may become essential, reflecting a shift in the skill sets required of the mental health workforce(10,52). This development not only brings the hope of increasing the quality of care, but may also contribute to the overall professionalization of mental health services, with continued general vigilance about the vulnerabilities that new technologies may bring.

The depth with which information technologies can penetrate the social realm suggests an improvement in public mental health “literacy”; however, more attention needs to be paid to how this vast new psychiatric “literature” will spill over into the fluidity of the internet. AI technologies can provide educational resources and self-assessment tools that help individuals better understand mental health disorders and recognize symptoms early(41,48). Increased mental health literacy may empower individuals to seek help earlier and engage in preventive measures, thereby reducing the overall societal burden of mental health disorders(53).

It is crucial to address the ethical implications that accompany the integration of artificial intelligence in mental healthcare; society as a whole and mental health professionals will need to take all of these challenges seriously. Concerns about data privacy, algorithmic bias, and the potential for dehumanizing care need to be carefully managed to ensure that AI technologies improve rather than compromise the quality of mental health services(4,7); the boundary between these two possible directions will be difficult to draw, and the repositioning of current systems may engender virulent controversy. Establishing ethical guidelines and promoting transparency in AI applications will be vital for maintaining public trust and ensuring that these technologies serve the best interests of individuals and communities(10,52).

The introduction of AI technologies in mental health care is poised to bring about significant societal changes, including increased accessibility, normalization of mental health discussions, personalized care, emergence of new professions, and improved mental health knowledge and outreach. While these changes present exciting opportunities for improving mental health outcomes, it is essential to navigate the ethical challenges that arise to ensure that AI serves as a beneficial tool in promoting the mental well-being of the greatest number of people.

Hoping to reduce the stigma of mental health problems

The introduction of artificial intelligence technologies in mental healthcare has the potential to significantly modulate the stigmatization of mental health problems. By improving accessibility and mental health literacy and by encouraging positive interactions with mental health services, AI can contribute to a cultural shift in the way mental health problems are perceived and addressed.

One of the main ways in which artificial intelligence can reduce stigma is through increased accessibility to mental health resources, a fairly credible promise of information systems. AI-powered platforms, such as chatbots, assistive apps and even remote therapy, can provide immediate assistance and information to people seeking help. This accessibility allows people to interact with mental health services in a private and non-threatening environment, reducing the fear of judgment that often accompanies traditional help-seeking behaviors(54,55). As individuals become more comfortable with this way of accessing such resources, help-seeking for mental health problems may become normalized, thereby decreasing the stigma associated with mental illness(56). This process may seem a logical one; however, a whole host of issues will need to be intuited, identified and subsequently addressed.

AI technologies can improve general mental health literacy through their potential to reach broad categories of people in the general population. By providing personalized educational content and resources, AI can help individuals better understand mental health disorders, their symptoms, and the importance of seeking help(57). A positive impact on the awareness of mental health problems can be expected with the appropriate introduction of new technologies. Increasing mental health literacy mitigates stigma, as people who are more informed about mental health problems are less likely to hold negative stereotypes and prejudices(58,59); effectively amending some of the behaviors that maintain the stigma of mental illness is an old desideratum, but one that now finds new resources. Artificial intelligence can facilitate targeted educational campaigns that address specific misconceptions about mental health, thereby encouraging more informed and compassionate public attitudes(60).

Artificial intelligence can promote positive interactions with mental health services, which are key to reducing stigma. Research indicates that repeated positive encounters with people experiencing mental health problems can effectively reduce stigma(59,61). AI technologies can facilitate these interactions by creating supportive virtual communities where individuals can share their experiences and receive encouragement from peers(62). Such platforms can help to re-humanize mental health problems and foster empathy, even if the environment is virtual and the interactions are technologically mediated, thus interrogating the negative stereotypes that contribute to stigmatization.

The use of AI can positively influence the therapeutic pathway by monitoring and evaluating mental health interventions; in this way, by using large datasets, valuable insights can be generated about the effectiveness of stigma reduction strategies. By analyzing patterns in help-seeking behavior and treatment outcomes, AI can help identify the interventions that are most successful in reducing stigma and improving access to care(63,64). Conceptualizing stigma has been a common theme across the social sciences and medicine, and the next step seems to be to rewrite its defining principles in a way that can be operationalized using new technologies. This data-driven approach can increase the quality of future policies and programs aimed at combating stigma by ensuring that efforts are evidence-based and effectively targeted to beneficiaries.

The full range of potentially positive implications cannot be effectively operationalized without addressing the ethical implications associated with artificial intelligence in mental health care. Privacy concerns, data security and the potential for algorithmic bias must be carefully managed to maintain public trust(65,66). Ensuring that AI technologies are developed and deployed with ethical considerations in mind will be crucial for their acceptance and effectiveness in reducing stigmatization.

The introduction of AI technologies in mental healthcare has the potential to significantly modulate the stigmatization of mental health problems. By increasing accessibility, improving mental health literacy, promoting positive interactions and providing evidence-based information, artificial intelligence can contribute to a cultural shift that promotes understanding and acceptance of mental health problems. As society becomes more informed and compassionate, the stigma around mental health is likely to diminish, leading to improved outcomes for people facing mental health challenges. It remains within the remit of mental health professionals to work together to realize the positive potential of new technologies.

Conclusions

Following the sections of this paper, we can formulate a number of conclusions, which we summarize below. The general picture obtained through this qualitative analysis shows that the intersection between psychiatry and artificial intelligence technologies is an emerging and growing topic that will require attention from professionals in order to integrate these technologies effectively, with the aim of increasing human well-being and making health systems more efficient.

Artificial intelligence has the potential to improve psychiatric assessment, but does not replace human expertise. Artificial intelligence offers new ways of analyzing mental health, in particular through language processing and behavioral data analysis. While it can contribute to faster and more accurate diagnosis, AI cannot completely replace human interaction, but is rather a complementary tool for professionals in the field. Precision psychiatry is becoming an achievable goal through AI. By analyzing complex datasets and developing predictive models, artificial intelligence can enable more accurate risk stratification and more effective personalization of treatments. However, the success of these technologies depends on properly integrating them into practice and maintaining a patient-centered approach.

Artificial intelligence can facilitate more precise and personalized psychiatric treatments. The integration of AI in psychiatry promises to significantly improve diagnosis and treatment, allowing interventions to be personalized based on genetic, clinical and behavioral as well as social factors. Machine learning algorithms can optimize treatment selection, reducing empirical trial-and-error approaches. AI technologies can expand accessibility and continuous patient monitoring. Chatbots and AI-based therapeutic platforms can provide real-time support, creating greater accessibility to mental health services. Also, the use of smart devices to monitor emotions and behavior allows for quick adjustments in therapeutic strategies, increasing treatment effectiveness.

The use of artificial intelligence has the potential to profoundly reshape public health, facilitating early detection of mental disorders, personalization of treatments, and implementation of more effective interventions at the community level. Increased accessibility can be followed by increased efficiency. By analyzing large volumes of data and developing personalized strategies, AI can improve patient care and reduce the stigma of mental illness by promoting a culture of awareness and support.

The integration of artificial intelligence into mental health care is driving the emergence of new roles and even the emergence of new mental health professions, such as AI mental health specialists, digital mental health coaches, ethics consultants and data analysts. These professions will contribute to better implementation and use of AI technologies for the benefit of patients.

The need for reform in medical education seems increasingly necessary. To cope with new demands, medical training systems need to integrate AI education, preparing future professionals to navigate both the technological and therapeutic aspects of mental health care.

Accessibility and normalization can be major advantages of using new technologies. The integration of artificial intelligence in mental health can reduce barriers to accessing services, facilitating digital support and helping to reduce the stigmatization of mental disorders by promoting open discussions with a large number of individuals. Artificial intelligence can enable care that is more tailored to individual needs, which can improve patient adherence to treatment. In parallel, new professions specializing in the use and management of these technologies in mental health are emerging. These dynamics may contribute to processes of personalization of intervention and new forms of professionalization in mental health care.

Artificial intelligence can facilitate access to mental health resources in a private and non-stigmatizing way, contributing to a better understanding of mental health problems and reducing the stereotypes associated with them; we can speak of a new form of psychiatric literacy. AI platforms can create safe spaces for sharing experiences and support, helping to change public perceptions and normalize help-seeking for mental health problems, thereby promoting positive interactions in the health system.

The adoption of artificial intelligence brings benefits, but also ethical and practical challenges that need to be interrogated with a view to beneficial implementations for patients. Although artificial intelligence can reduce subjectivity in psychiatric assessments and facilitate more accurate diagnoses, there are significant risks related to data privacy, algorithm bias and decision transparency. The transition to widespread use requires regulatory measures and appropriate education for professionals. Interdisciplinary collaboration is essential to ensure responsible and ethical use of technology in mental health. AI ethics consultants and clear regulations will be essential to maintain trust in AI systems applied in mental health. Managing ethical challenges appears to become important for artificial intelligence to be effective in reducing stigma; responsible implementation, with attention to privacy, data security and algorithmic fairness, is essential so that public trust is maintained.    

 

Corresponding author: Radu-Mihai Dumitrescu E-mail: dum_mihu@yahoo.com

Conflict of interest: none declared.

Financial support: none declared.

This work is permanently accessible online free of charge and published under the CC-BY licence.


Bibliography


 

  1. Starke G, Schmidt B, De Clercq E, Elger BS. Explainability as fig leaf? An exploration of experts’ ethical expectations towards machine learning in psychiatry. AI Ethics. 2023;3(1):303-314. 

  2. Salcedo ZB V. Artificial Intelligence and Mental Health Issues: A Narrative Review. J Public Health Sci. 2023;2(2):58-65. 

  3. Graham S, Depp CA, Lee E, et al. Artificial Intelligence for Mental Health and Mental Illnesses: An Overview. Curr Psychiatry Rep. 2019;21(11):116. 

  4. Burr C, Morley J, Taddeo M, Floridi L. Digital Psychiatry: Ethical Risks and Opportunities for Public Health and Well-Being. SSRN Electron J. 2019;1(1):21-33. 

  5. Lovejoy CA, Buch V, Maruthappu M. Technology and mental health: The role of artificial intelligence. Eur Psychiatry. 2019;55:1-3. 

  6. Pham KT, Nabizadeh A, Selek S. Artificial Intelligence and Chatbots in Psychiatry. Psychiatr Q. 2022;93(1):249-253. 

  7. Carr S. ‘AI Gone Mental’: Engagement and Ethics in Data-Driven Technology for Mental Health. J Ment Health. 2020;29(2):125-130. 

  8. Chen Z, Kulkarni P, Galatzer‐Levy IR, Bigio B, Nasca C, Yu Z. Modern Views of Machine Learning for Precision Psychiatry. Patterns. 2022;3(11):100602. 

  9. Ray A, Bhardwaj A, Malik YK, Singh S, Gupta R. Artificial Intelligence and Psychiatry: An Overview. Asian J Psychiatr. 2022;70:103021. 

  10. Doraiswamy PM, Blease C, Bodner KA. Artificial Intelligence and the Future of Psychiatry: Insights from a Global Physician Survey. Artif Intell Med. 2020;102:101753. 

  11. Stern N. The Role of Artificial Intelligence Algorithms in Psychiatry: Advancing Diagnosis and Treatment (Preprint). 2023. doi:10.2196/preprints.49343

  12. McCradden MD, Hui K, Buchman DZ. Evidence, Ethics and the Promise of Artificial Intelligence in Psychiatry. J Med Ethics. 2022;49(8):573-579. 

  13. Blease C, Locher C, Leon-Carlyle M, Doraiswamy PM. Artificial Intelligence and the Future of Psychiatry: Qualitative Findings from a Global Physician Survey. Digit Health. 2020;6:2055207620968355. 

  14. Petrovic S, Maric NP. Improvement of the Psychiatric Care Through Outsourcing Artificial Intelligence Technologies: Where Are We Now?. Med Istraz. 2022;55(2):19-29. 

  15. Ejaz H, McGrath H, Wong BLH, Guise A, Vercauteren T, Shapey J. Artificial Intelligence and Medical Education: A Global Mixed-Methods Study of Medical Students’ Perspectives. Digit Health. 2022;8:205520762210890. 

  16. Teng M, Singla R, Yau O, et al. Health Care Students’ Perspectives on Artificial Intelligence: Countrywide Survey in Canada. JMIR Med Educ. 2022;8(1):e33390. 

  17. Philipp K, Chandler J, Cabrera L, et al. Neuroethics at 15: The Current and Future Environment for Neuroethics. AJOB Neurosci. 2019;10(3):104-110. 

  18. Rocheteau E. On the Role of Artificial Intelligence in Psychiatry. Br J Psychiatry. 2022;222(2):54-57. 

  19. Oltmanns JR. Incremental Validity of Language-Based Assessments of Personality in World Trade Center Responders. 2023. doi:10.31234/osf.io/7graf

  20. Arowosegbe A, Oyelade T. Application of Natural Language Processing (NLP) in Detecting and Preventing Suicide Ideation: A Systematic Review. Int J Environ Res Public Health. 2023;20(2):1514. 

  21. Elyoseph Z, Levkovich I. Beyond Human Expertise: The Promise and Limitations of ChatGPT in Suicide Risk Assessment. Front Psychiatry. 2023;14:1213141. 

  22. Welch V, Wy TJ, Ligęzka A, et al. Use of Mobile and Wearable Artificial Intelligence in Child and Adolescent Psychiatry: Scoping Review. J Med Internet Res. 2022;24(3):e33560. 

  23. Xie YT, Yang YJ. Research fronts and researchers of World Journal of Psychiatry in 2023: A visualization and analysis of mapping knowledge domains. World J Psychiatry. 2024;14(7):1118-1126.

  24. Meehan AJ, Lewis SJ, Fazel S, et al. Clinical Prediction Models in Psychiatry: A Systematic Review of Two Decades of Progress and Challenges. Mol Psychiatry. 2022;27(6):2700-2708. 

  25. Lin E, Lin CC, Lane HY. Precision Psychiatry Applications with Pharmacogenomics: Artificial Intelligence and Machine Learning Approaches. Int J Mol Sci. 2020;21(3):969. 

  26. Wiest IC, Verhees FG, Ferber D, et al. Detection of suicidality from medical text using privacy-preserving large language models. Br J Psychiatry. 2024;225(6):532-537.

  27. Chen L, Zhang J, Zhu Y, Shan J, Zeng L. Exploration and Practice of Humanistic Education for Medical Students Based on Volunteerism. Med Educ Online. 2023;28(1):2182691. 

  28. Stein DJ, Shoptaw S, Vigo D, et al. Psychiatric Diagnosis and Treatment in the 21st Century: Paradigm Shifts Versus Incremental Integration. World Psychiatry. 2022;21(3):393-414. 

  29. Gao W. Editorial: AI Approach to the Psychiatric Diagnosis and Prediction. Front Psychiatry. 2024;15. doi:10.3389/fpsyt.2024.1387370.

  30. Janardan V. The Transformative Role of Artificial Intelligence in Psychiatry: Enhancing Diagnosis and Treatment. Arch Psychiatry. 2024;2(1):20-22. 

  31. Okpete UE. Challenges and Prospects in Bridging Precision Medicine and Artificial Intelligence in Genomic Psychiatric Treatment. World J Psychiatry. 2024;14(8):1148-1164. 

  32. Tran BX, McIntyre RS, Latkin CA, et al. The Current Research Landscape on the Artificial Intelligence Application in the Management of Depressive Disorders: A Bibliometric Analysis. Int J Environ Res Public Health. 2019;16(12):2150. 

  33. Steppan M. Machine Learning Facial Emotion Classifiers in Psychotherapy Research: A Proof-of-Concept Study. Psychopathology. 2023;57(3):159-168. 

  34. Abd‐Alrazaq A, Alhuwail D, Schneider J, et al. The Performance of Artificial Intelligence-Driven Technologies in Diagnosing Mental Disorders: An Umbrella Review. NPJ Digit Med. 2022;5(1). 

  35. Maslej MM, Kloiber S, Ghassemi M, Yu J, Hill S. Out With AI, in With the Psychiatrist: A Preference for Human-Derived Clinical Decision Support in Depression Care. Transl Psychiatry. 2023;13(1):210. 

  36. Fiske A, Henningsen P, Buyx A. Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy. J Med Internet Res. 2019;21(5):e13216. 

  37. Glaz AL, Haralambous Y, Kim-Dufor DH, et al. Machine Learning and Natural Language Processing in Mental Health: Systematic Review. J Med Internet Res. 2021;23(5):e15708. 

  38. Huckvale K, Venkatesh S, Christensen H. Toward Clinical Digital Phenotyping: A Timely Opportunity to Consider Purpose, Quality, and Safety. NPJ Digit Med. 2019;2(1):88. 

  39. Khare M. Utilising Artificial Intelligence (AI) in the Diagnosis of Psychiatric Disorders: A Narrative Review. J Clin Diagn Res. 2024;18(4):OE01-OE05. 

  40. Rana U. The Role of Artificial Intelligence in Mental Health Care. 2023. doi:10.31235/osf.io/r4umy.

  41. Oladimeji KE. Impact of Artificial Intelligence (AI) on Psychological and Mental Health Promotion: An Opinion Piece. New Voices Psychol. 2023;13. doi:10.25159/2958-3918/14548.

  42. Su C, Xu Z, Pathak J, Wang F. Deep Learning in Mental Health Outcome Research: A Scoping Review. Transl Psychiatry. 2020;10(1):116. 

  43. Cahyo LM, Astuti SD. Early Detection of Health Problems Through Artificial Intelligence (AI) Technology in Hospital Information Management: A Literature Review Study. J Med Health Stud. 2023;4(3):37-42. 

  44. Omar M. Applications of Large Language Models in Psychiatry: A Systematic Review. Front Psychiatry. 2024;15:1422807. 

  45. Nilsén P, Svedberg P, Nygren JM, Frideros M, Johansson J, Schueller SM. Accelerating the Impact of Artificial Intelligence in Mental Healthcare Through Implementation Science. Implement Res Pract. 2022;3:26334895221112033.

  46. Benjamens S, Dhunnoo P, Meskó B. The State of Artificial Intelligence-Based FDA-approved Medical Devices and Algorithms: An Online Database. NPJ Digit Med. 2020;3(1):118. 

  47. Anita AS. The Role of Artificial Intelligence as a Tool to Help Counselors in Improving Mental Health. BICC_Proceedings. 2024;2:119-124. 

  48. Packness A, Halling A, Simonsen E, Waldorff FB, Hastrup LH. Are Perceived Barriers to Accessing Mental Healthcare Associated with Socioeconomic Position Among Individuals with Symptoms of Depression? Questionnaire-Results from the Lolland-Falster Health Study, a Rural Danish Population Study. BMJ Open. 2019;9(3):e023844. 

  49. Shatte A, Hutchinson D, Teague S. Machine Learning in Mental Health: A Scoping Review of Methods and Applications. Psychol Med. 2019;49(9):1426-1448. 

  50. Chen ZS, Kulkarni PP, Galatzer-Levy IR, Bigio B, Nasca C, Zhang Y. Modern views of machine learning for precision psychiatry. Patterns (N Y). 2022;3(11):100602. 

  51. Dutta D. Bots for Mental Health: The Boundaries of Human and Technology Agencies for Enabling Mental Well-Being Within Organizations. Pers Rev. 2023;53(5):1129-1156. 

  52. Monteith S, Glenn T, Geddes J, Whybrow PC, Achtyes ED, Bauer M. Expectations for Artificial Intelligence (AI) in Psychiatry. Curr Psychiatry Rep. 2022;24(11):709-721. 

  53. Ter Harmsel JF, Smulders LM, Noordzij ML, et al. Forensic Psychiatric Outpatients’ and Therapists’ Perspectives on a Wearable Biocueing App (Sense-IT) as an Addition to Aggression Regulation Therapy: Qualitative Focus Group and Interview Study. JMIR Form Res. 2023;7:e40237.

  54. Miner AS, Shah NH, Bullock K, Arnow BA, Bailenson JN, Hancock J. Key Considerations for Incorporating Conversational AI in Psychotherapy. Front Psychiatry. 2019;10:746.

  55. Farsi Z, Taghva A, Butler S, Tabesh H, Javanmard Y, Atashi A. Stigmatization Toward Patients with Mental Health Diagnoses: Tehran’s Stakeholders’ Perspectives. Iran J Psychiatry Behav Sci. 2020;14(3):e93851. 

  56. Wright S, Henderson C, Thornicroft G, Sharac J, McCrone P. Measuring the Economic Costs of Discrimination Experienced by People with Mental Health Problems: Development of the Costs of Discrimination Assessment (CODA). Soc Psychiatry Psychiatr Epidemiol. 2014;50(5):787-795. 

  57. Brown C, Conner KO, Copeland VC, et al. Depression Stigma, Race, and Treatment Seeking Behavior and Attitudes. J Community Psychol. 2010;38(3):350-368. 

  58. Bracke P, Delaruelle K, Verhaeghe M. Dominant Cultural and Personal Stigma Beliefs and the Utilization of Mental Health Services: A Cross-National Comparison. Front Sociol. 2019;4:40.

  59. Parcesepe AM, Cabassa LJ. Public Stigma of Mental Illness in the United States: A Systematic Literature Review. Adm Policy Ment Health. 2012;40(5):384-399. 

  60. Wagstaff C, Graham HL, Salkeld R. Qualitative Experiences of Disengagement in Assertive Outreach Teams, in Particular for “Black” Men: Clinicians’ Perspectives. J Psychiatr Ment Health Nurs. 2017;25(2):88-95. 

  61. Liu X. Stigma and Emotional Distress in Chinese Mental Health Professionals: Moderating Role of Cognitive Fusion. Stigma Health. 2024;9(2):201-211. 

  62. Clément S, Schauman O, Graham T, et al. What Is the Impact of Mental Health-Related Stigma on Help-Seeking? A Systematic Review of Quantitative and Qualitative Studies. Psychol Med. 2014;45(1):11-27. 

  63. Williston SK, Bramande E, Vogt D, Iverson KM, Fox AB. An Examination of the Roles of Mental Health Literacy, Treatment-Seeking Stigma, and Perceived Need for Care in Female Veterans’ Service Use. Psychiatr Serv. 2020;71(2):144-150. 

  64. López V, Sanchez K, Killian M, Eghaneyan BH. Depression Screening and Education: An Examination of Mental Health Literacy and Stigma in a Sample of Hispanic Women. BMC Public Health. 2018;18(1):646. 

  65. Roh S, Burnette CE, Lee KH, Lee YS, Martin JI, Lawler MJ. Predicting Help-Seeking Attitudes Toward Mental Health Services Among American Indian Older Adults. J Appl Gerontol. 2016;36(1):94-115. 

  66. Chen Z, Liu X, Yang Q, et al. Evaluation of Risk of Bias in Neuroimaging-Based Artificial Intelligence Models for Psychiatric Diagnosis. JAMA Netw Open. 2023;6(3):e231671. 
