Legal implications of physician liability in the use of artificial intelligence for diagnosis and treatment
Abstract
Artificial intelligence (AI), with its multiple advantages, such as improved and more accurate diagnosis and a reduced workload for physicians, is becoming more widespread, studied and applied in medicine. However, a review of the literature shows that its use raises ethical and legal questions to which there is still no unanimous answer. There is the question of medical liability (malpractice) in the case of AI-related errors and harm to patients. The application of AI diagnostic algorithms raises questions about the risks of their use in the diagnosis and treatment of cancer (especially in rare cases) and about the information provided to the patient. There are also discussions about the impact on the empathic doctor-patient relationship. The use of AI in the medical field has produced a revolution in the doctor-patient relationship, but it also has possible medico-legal consequences. At the moment, the legal regulatory framework on medical liability when applying artificial intelligence is inadequate and requires urgent action, as there is no specific and unitary legislation regulating the liability of the different parties involved in the application of AI, nor of the end-users. Thus, more attention needs to be paid to the risks of applying artificial intelligence, the requirement to regulate its safe use, and the maintenance of patient safety standards by constantly adapting and updating the system.
Keywords
artificial intelligence (AI), diagnostic algorithm, malpractice, legal regulation
Introduction
The application of artificial intelligence (AI) algorithms in healthcare is a new feature of today’s medical reality: their use helps doctors in both the diagnostic and the treatment phases, and they are increasingly employed in hospitals. AI systems enable healthcare professionals to obtain more accurate and precise diagnoses, as well as more effective and less invasive medical and surgical treatments. At the moment, artificial intelligence is being used in radiology and oncology for its potential to recognize complex patterns that provide quantitative and qualitative assessments, improving diagnostic prediction(1). In addition to its clinical and therapeutic advantages, artificial intelligence can, through risk stratification models, optimize resource allocation, being useful for patient management, but also for research and clinical trials(1). Currently, the advantages of AI in medicine are widely exploited, while the legal implications and regulations are still being discussed. Human-machine interaction is debatable, especially when AI systems make autonomous choices, which raises issues of legal liability in case of harm to patients. New technological approaches bring about a new reality that is unlikely to fit within the confines of current legislation. In this context, this paper aims both to address the critical aspects of the use of artificial intelligence algorithms in cancer diagnosis and treatment and to discuss the legal liability related to the use of the method and possible solutions(1).
But how can AI revolutionize medicine? Computer systems can already independently analyze the data that exist online in the healthcare system, learn from them based on algorithms and metrics provided by the treating physician, and even provide therapy recommendations. What opportunities arise for patients, and what studies are needed to apply artificial intelligence in practice? The healthcare system constantly generates parameters: blood pressure values, laboratory blood results, oxygen saturation, ultrasound interpretations, and results of CT, MRI or PET-CT scans. These data are evaluated and used for diagnosis, and then archived(2). Could computer systems independently learn from the data collected from patients and even develop the diagnosis and treatment system? The development and application of AI in medicine will be one of the important topics in healthcare research in the future.
Where are we, and what are the prospects? Can the computer learn in a “controlled” way? Artificial intelligence represents the ability of computer programs to learn. There are two possibilities for learning; the first is “supervised learning”, in which a large number of similar, labeled items are presented to the computer: for example, many imaging examinations (computed tomography [CT], mammography etc.) together with their descriptions, teaching it which images are normal and which are suspicious for neoplasm. If the computer receives enough information, it will be able to distinguish between the images/data provided. Thus, it can not only make doctors’ work easier, but can also improve the quality of diagnosis and of personalized treatment. It is worth noting that computer-assisted detection software can increase the accuracy and speed of diagnoses established by radiologists, but it acts as a “second set of eyes”, and the effectiveness of algorithms increases when they are subject to human supervision(2). Artificial intelligence can reduce the time specialists spend on diagnosis and increase the number of patients who can be examined(1). For breast cancer, for example, AI can lead to a rapid diagnosis of the disease(1). Artificial intelligence technologies should be considered complementary to imaging diagnostics, increasing doctors’ performance, reducing the risk of human error, and serving as clinical decision support. Moreover, AI can filter out simple cases and allow specialists to focus on the more difficult ones. The advantages of artificial intelligence also include its use in cancer screening (breast, cervical, colorectal etc.) and the early identification of people at high risk of developing the disease(1-5). These applications are tools that help with diagnosis, not substitutes for clinicians.
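The “supervised learning” idea described above can be sketched with a toy classifier. This is a purely didactic simplification, not code from any clinical system: the feature vectors stand in for image descriptors, and the nearest-centroid rule stands in for the far more complex deep networks used in real diagnostic AI.

```python
# Toy illustration of supervised learning: a nearest-centroid classifier.
# Feature vectors stand in for image descriptors; labels for "normal"/"suspicious".
# Didactic sketch only -- real diagnostic AI uses deep networks trained on
# millions of annotated scans under human supervision.

def train(examples):
    """Compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label]))

# Labeled training data: (features, label) pairs supplied by radiologists.
training = [
    ([0.1, 0.2], "normal"), ([0.2, 0.1], "normal"),
    ([0.9, 0.8], "suspicious"), ([0.8, 0.9], "suspicious"),
]
model = train(training)
print(predict(model, [0.15, 0.15]))  # a case close to the "normal" examples
```

The key property, which is exactly what the text describes, is that the more labeled examples the system receives, the better its centroids separate the two classes; the labels themselves always come from human experts.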
In addition to clinical diagnosis, the use of AI could help surgeons during the intervention and lead to more precise minimally invasive surgical techniques(1).
Some questions thus arise: what is the impact on the doctor-patient relationship? Can the use of AI lead to a reduced ability to empathize with patients? Can the application of AI lead to a lower interest in the patient’s history? Is the clinical medical examination still useful? Is a decrease in the quality of medical care possible? Questions also arise regarding legal liability in the event of diagnostic errors due to the lack of a physical examination. Will AI and machine learning ever be able to replace “a skilled and empathetic clinician at the patient’s bedside”? Yes, that would be the answer for now, and probably for the future as well, because artificial intelligence can repetitively do the same thing, namely see patients and establish the diagnosis and treatment options, without getting tired and without losing patience, for a large number of patients. Does this mean that doctors will lose some of their jobs? Will young doctors still try to specialize in fields where AI can establish the diagnosis on its own (radiology)? This also has a downside: full trust in artificial intelligence. How, then, is medical negligence established, and what about malpractice?
Doctors will need to inform patients about the benefits, risks and limitations of using AI, and ask for their consent. Can the data provided be used in lawsuits? What and how much of the medical information can be disclosed to patients? The critical issues related to the use of AI in medicine are represented by privacy and cybersecurity: the effectiveness of the software relies on the use of large amounts of data, some of which may impact patients’ privacy(1).
Liability for AI errors is an important issue and, despite numerous articles, there is currently no unanimous and definitive answer to it. Artificial intelligence is a new technology, and the legal framework of sanctions applicable to it is not yet well developed. In a malpractice lawsuit, it will be necessary to establish whether there was a breach of duty and a deviation from the standard of care owed to the patient. In this context, AI can fail in its algorithms, in its programming, or in how it guides the actions of doctors(5). The following dilemma is emerging: if, for medical devices, the manufacturer is liable in case of failure and patient harm, then, when artificial intelligence is used, is the doctor, who approves the AI results, liable for them? Moreover, when AI starts to operate autonomously (machine learning, ML), without specialist doctors, based on a self-learning algorithm that can make a diagnosis without the need for clinician approval, shouldn’t the system be liable for incorrect results, and shouldn’t the AI programmers or the software company assume at least partial medical liability? They would have to inform patients about the limits of the system’s use and restrict the responsibility of clinicians. If robotic surgery is performed autonomously, events caused by defects in the software configuration would probably be the developer’s responsibility. At the same time, programmers should not be considered fully liable, because AI is not capable of preventing injuries in all cases. Legislation on artificial intelligence will have to be based on the degree of autonomy of the software; when AI is used only as decision support, the doctor who approves its output bears the risk of liability, even though it is “indirect”(1,5).
Another discussion concerns liability for AI products offered to doctors that involve a defect for which the manufacturer is responsible. The situation is complicated by the fact that the algorithm as initially developed may no longer be identical to the one that caused the damage, since the software improves over time. Another question then arises: can patients directly sue the medical device manufacturer, the doctor who recommended the use of AI, and the hospital for malpractice? Is joint liability of the three actors before the patient preferable to individual liability? Therefore, the information provided to patients about the risks, benefits and limits of artificial intelligence use is of paramount importance; it should ensure that patients make their choices fully and consciously, and should also propose possible alternative paths in case of opposition to new technologies. Together, these elements make up informed consent, whose legal basis, the obligation to inform the patient, is given by: art. 22 of the Romanian Constitution, “Right to life and physical and mental integrity”; art. 4-12 of the Law on Patients’ Rights no. 46/2003, “Patient’s right to medical information”; art. 649 et seq. of Law no. 95/2006, Title XV, “Civil liability of medical personnel and the supplier of medical, sanitary and pharmaceutical products and services”; Order no. 1411/2016 amending and supplementing the Order of the Minister of Public Health no. 482/2007 approving the Methodological Norms for the application of Title XV of Law no. 95/2006 on healthcare reform; art. 5 of the Convention on Human Rights and Biomedicine (the Oviedo Convention, 1997); and art. 3-5 of the European Charter of Patients’ Rights, signed in Rome in 2002.
The topic of consent is essential, considering the application of personalized treatment, including the use of AI in medicine, which must ensure confidentiality, avoid discrimination based on ethnicity or gender, and guarantee equal access and a fair allocation of resources.
It is obvious that the accelerated application of artificial intelligence in medicine will lead to complaints of medical negligence. For this reason, the legal system should, on the one hand, provide clear answers as to which entity bears liability, while, on the other hand, the medical malpractice insurance system should specify the terms of coverage when healthcare decisions are made partially or entirely by artificial intelligence.
A great deal of personal information circulates in the media, but when it comes to access to medical data for AI research and development, numerous obstacles must be overcome. Personal data protection (under the GDPR, the General Data Protection Regulation) is extremely important but, on its own, it will not make progress in AI possible. It is not the name and personal identifiers that matter; what is important is the medical information obtained from patients, and research is difficult when that data is not available. However, in many countries around the world, politicians and lawyers are involved in solving this problem, with a real will for AI progress(6-8). The following questions arise: what are the limits of the use of AI? Is there a legal regulation governing its use in the diagnosis and treatment of cancer? In this regard, the European Parliament adopted the Artificial Intelligence Act, which guarantees safety and respect for fundamental human rights while stimulating innovation. The act aims to “protect fundamental rights, democracy, the rule of law and environmental sustainability against high-risk AI systems, encourage innovation and ensure a leading role for Europe in the field”(6,7). The regulation imposes obligations on the use of artificial intelligence, depending on its potential risks and expected impact. It responds to the proposals for “a safe and trustworthy society, including by combating disinformation and ensuring that people are ultimately in control, ensuring human oversight, but also the reliable and responsible use of AI”. The Framework Convention will be open to ratification at the international level, allowing countries around the world to adhere to and comply with the established ethical and legal norms, in line with the UN General Assembly resolution of 21 March 2024(6).
The Government of Romania aligned itself with this legislation by publishing, in the Official Gazette of 25.07.2024, Government Decision no. 832/2024 approving the National Strategy in the field of artificial intelligence(8).
From a legislative point of view, one of the critical weaknesses of legal regulation is the difficulty of adapting to new realities. We believe that a regulation can be considered well developed to the extent that it captures situations different from the typical ones it was written for, and anticipates future events that can be subsumed under it. This can only be achieved if it combines broad definitions with clear, precise and comprehensive recommendations.
Discussion
AI in cancer diagnosis and treatment (clinical decision support systems – CDS)
Several policy and legislative options could ensure a more balanced liability system, including modifying the standard of care, medical insurance, compensation schemes, and legal regulations. With an adequate legislative framework, doctors could facilitate the safe and rapid implementation of AI, including its autonomous learning (machine learning, ML), ensuring modern and efficient patient care(9). In fact, since 2017, the FDA (Food and Drug Administration) in the USA has been approving uses of artificial intelligence in medicine (radiographs, biopsies for prostate cancer etc.)(10), but the question arises as to who bears responsibility for incidents that harm patients. The answer will depend on how much AI/ML innovation is considered opportune, and the balanced introduction of these systems into the healthcare system will also have to take into account compensation for injured patients. The level of liability directly influences the level of development and implementation of clinical algorithms: increased liability can discourage doctors, healthcare systems and AI designers, preventing or delaying the development and implementation of these algorithms, and research in this field is further discouraged by rising costs(9). AI/ML systems are improving and advancing at a rapid pace, but the legal system, which moves more slowly, must adapt to the new reality. A new form of liability is thus emerging that must keep pace with medical progress. The legal system must balance progress and liability so as to promote innovation, safety and the adoption of these new diagnostic and treatment methods(9).
Artificial intelligence processes massive data sets and uses various algorithms to find complex correlations in them (analytics). Machine learning systems use algorithms to learn from large amounts of data and make predictions without explicit programming; they are trained by correcting minor algorithmic errors (training), thus boosting the prediction accuracy of the model. Ethically, the question arises whether the application of AI/ML still meets the medical standards of the Hippocratic Oath(11). Nevertheless, science is advancing, and systems such as the Artificial Intelligent System (AIS) from IBM Watson for oncology (IBM Watson for Oncology SaaS – 5725-W51) have the mission to support oncologists and thus directly influence clinical decision-making. With over 30 billion images available for analysis, AIS can evaluate the information provided for the diagnosis of neoplastic disease and recommend personalized treatment for the patient. Its latest capabilities include expanded coverage of breast, lung, colorectal, gastric and cervical cancer(12-14). Zou et al. (2020) showed that recommendations for the diagnosis and treatment of cervical cancer using WFO (Watson for Oncology) were 72.8% consistent with real clinical practice. Factors such as cancer location, but also individual factors such as the type of chemotherapy used, the chosen surgical procedure, and adjuvant/neoadjuvant treatment, limit its wider application. The conclusion was that WFO cannot replace oncologists (the tumor board) in supporting the diagnosis and recommending personalized treatment for patients with cervical cancer, but it could be an effective tool for decision-making and for standardizing therapy(13). Regarding the concordance between the diagnosis and treatment established through WFO and clinical practice, it was highest for breast cancer (88.99%) and lowest for gastric cancer (57.94%), and the concordance rate in stages I-III is higher than in stage IV(14).
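The “training by correcting minor algorithmic errors” described above can be sketched as a minimal error-correction loop. This is an illustrative toy under invented numbers, not the Watson system or any clinical model: a single parameter is nudged repeatedly against its prediction error until the predictions fit the data.

```python
# Toy "training" loop: fit y = w*x by repeatedly nudging w against the error.
# Illustrative only -- real clinical models adjust millions of parameters
# using the same basic principle of iterative error correction.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target); true w is 2

w = 0.0            # initial guess for the model parameter
lr = 0.05          # learning rate: the size of each correction
for _ in range(200):
    for x, y in data:
        error = w * x - y        # prediction error on one example
        w -= lr * error * x      # correct w in proportion to the error

print(round(w, 3))  # the parameter converges toward 2.0
```

Each pass shrinks the remaining error, which is what “boosting the prediction accuracy of the model” means operationally: accuracy improves not by reprogramming, but by accumulating many small corrections driven by the data.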
It is worth noting that, while an oncology committee (tumor board) takes between 12 and 20 minutes to establish a diagnosis and recommend a personalized treatment, the WFO system needs 40 seconds to do the same, and it can provide consultations to an unlimited number of patients(15). Some authors believe that AI should soon be included in diagnostic and treatment guidelines (kidney cancer, prostate cancer etc.), thus revolutionizing decision-making(16). In fact, by 2030, artificial intelligence is estimated to influence 14% of medical activity, intervening in the analysis of electronic medical records, the processing of laboratory data and the analysis of medical imaging, thus providing rapid diagnoses and therapeutic solutions at a favorable cost-efficiency ratio. The data obtained can also be used in medical research and in the development of new drugs(17,18).
AI and robotic surgery
As for surgical interventions, forecasts suggest that robots equipped with artificial intelligence could soon replace doctors in treating patients(17). The use of AI can be divided into two branches: a physical component, such as surgical robots, and the virtual component described above, namely the electronic patient file and support for diagnosis and treatment (clinical decision support systems, CDS)(17,18). The advantages of robotic surgery are already well known, and it has been successfully applied in oncological surgery: high precision and increased control of the intervention, improved (3D) visualization, reduced intraoperative blood loss, fewer postoperative complications, faster patient recovery, and a shorter hospitalization period.
Robots orient the surgeon in three-dimensional space by tracking the movements of instruments during the surgical procedure, using optical or electromagnetic sensors. The computer and the robot alert the surgeon to the precise location and spatial orientation of the surgical instruments, providing feedback on the danger to surrounding anatomical structures. Many robots have “haptic” sensors, which provide increased resistance to tissue movement, to give the surgeon feedback during surgery. The robots define haptic boundaries for the surgical instruments and, when the surgeon deviates from the safety zone created during preoperative surgical planning, provide haptic feedback in the form of tactile or auditory and visual alerts, preventing injury to organs outside the operative field(18). Today’s robots are semi-automated: they assist the surgeon in performing extensive minimally invasive maneuvers in three-dimensional space and in positioning the instruments, but they maintain a “passive” attitude, although, theoretically, they could perform some maneuvers themselves. The top five robotic surgery systems are: (1) da Vinci by Intuitive Surgical; (2) Ion by Intuitive Surgical; (3) Mako by Stryker; (4) NAVIO by Smith & Nephew; and (5) Monarch by Auris Health(18,19). These are successfully used in colorectal cancer surgery, gynecological oncology, prostate and renal cancer, lung cancer etc.(18)
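The haptic-boundary logic described above can be reduced to a simple geometric check. The following sketch is purely illustrative, with an invented function name and invented thresholds, and a spherical safety zone standing in for the complex 3D anatomical models and force feedback used by real surgical robots: the system measures the instrument tip's distance from the planned safe zone and escalates an alert as the boundary is approached or crossed.

```python
# Illustrative sketch of a "haptic boundary" check: alert when the instrument
# tip nears or leaves a preoperatively planned spherical safety zone.
# Toy geometry demo only; real systems use detailed anatomical models.
import math

def boundary_alert(tip, center, radius, margin=0.2):
    """Return 'ok', 'warning' (near the boundary) or 'breach' (outside)."""
    d = math.dist(tip, center)      # distance of the tip from the zone center
    if d > radius:
        return "breach"             # outside the safety zone: stop/alert
    if d > radius - margin:
        return "warning"            # close to the boundary: haptic resistance
    return "ok"

# Safety zone planned preoperatively: a sphere of radius 1.0 at the origin.
print(boundary_alert((0.1, 0.2, 0.1), (0, 0, 0), 1.0))  # well inside the zone
print(boundary_alert((0.9, 0.3, 0.0), (0, 0, 0), 1.0))  # near the boundary
print(boundary_alert((1.2, 0.0, 0.0), (0, 0, 0), 1.0))  # outside the zone
```

The graded response (ok, warning, breach) mirrors the text's description of escalating tactile, auditory and visual alerts as the surgeon deviates from the planned safety zone.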
Conclusions
In medicine, through the use of appropriate algorithms, artificial intelligence will be increasingly employed and, therefore, it must be used with moral responsibility. Artificial intelligence cannot completely replace clinical reasoning, but it can help doctors make the best decisions. Although there are moral dilemmas in the use of AI, science must progress for the benefit of people, of course within an appropriate legislative framework. Artificial intelligence is revolutionizing medical care, but it can create new responsibilities for manufacturers and users. It is already used for the diagnosis and treatment of cancer, with AI-equipped surgical robots increasingly present in treatment procedures (prostatectomies etc.).
Of course, as the use of artificial intelligence grows, so do the risks and the liability associated with it. In order not to discourage its use and to make the most of AI systems, risks and liability must be well defined, so that all parties involved (patient, doctor, device manufacturer) understand their responsibilities and the legal liability implied when the technology inevitably causes harm. Current and future health insurance must be prepared for new types of malpractice, keeping pace with the speed of development of AI systems.
Essential elements of patient care (especially in oncology), such as the clinical examination, compassion, clinical intuition and empathy, make medicine a special science, and they will probably mean that, in many cases, predominantly human medical care will surpass AI-dominated care. Nevertheless, in the future, artificial intelligence will play an increasingly important role in medical decision-making. Artificial intelligence also changes liability for medical malpractice by adding new causes of action and new types of damage. From this point of view, judges will have to examine new malpractice cases very carefully and determine who is at fault: the doctor, the AI manufacturer, and/or the hospital.
A doctor’s decision to follow or not to follow AI predictions and to use the system will have legal consequences. New problems will arise: doctors trained to work with artificial intelligence technology (e.g., AI-assisted robotic surgery) who have never learned to practice without it will have difficulties when computers fail, which could lead to malpractice liability issues if the doctor is not prepared to complete an operation or resolve a complication without a robot. Cause-effect relationships will play an important role, as it will be difficult for a judge to determine who is responsible when the doctor’s responsibilities are shared with AI systems. This is where expert opinion in the field will probably have to intervene.
Doctors must remain the human interface between technology and patient through the responsible use of artificial intelligence in the diagnosis and treatment of disease, especially neoplastic disease, so that important elements such as compassion and empathy do not disappear from medical practice. As we have sought to emphasize in this paper, the legal framework must be prepared to give the parties involved (physician, technology manufacturer, hospital) the reasonable ability to develop and apply AI and ML systems and to anticipate legal liability issues, so that these technologies can continue to develop rapidly and revolutionize healthcare.
Corresponding authors: Virgiliu-Mihail Prunoiu E-mail: virgiliuprunoiu@yahoo.com; Mircea-Nicolae Brătucu E-mail: bratucu_mircea@yahoo.com
CONFLICT OF INTEREST: none declared.
FINANCIAL SUPPORT: none declared.
This work is permanently accessible online free of charge and published under a CC-BY license.
References
1. Cestonaro C, Delicati A, Marcante B, Caenazzo L, Tozzo P. Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review. Front Med (Lausanne). 2023;10:1305756.
2. Helmholtz Munich. AI in medicine. https://www.helmholtz-munich.de/en/newsroom/research-highlights/ai-in-medicine. Accessed: 06.03.2024.
3. Tripathi S, Tabari A, Mansur A, Dabbara H, Bridge CP, Daye D. From Machine Learning to Patient Outcomes: A Comprehensive Review of AI in Pancreatic Cancer. Diagnostics (Basel). 2024;14(2):174.
4. Harvard Medical School. AI Predicts Future Pancreatic Cancer: AI model spots those at highest risk for up to three years before diagnosis. https://hms.harvard.edu/news/ai-predicts-future-pancreatic-cancer. Accessed: 06.03.2024.
5. Terranova C, Cestonaro C, Fava L, Cinquetti A. AI and professional liability assessment in healthcare. A revolution in legal medicine? Front Med (Lausanne). 2024;10:1337335.
6. European Parliament. Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI. https://www.europarl.europa.eu/news/ro/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai. Accessed: 13.08.2024.
7. Capital.ro. https://www.capital.ro/regulamentul-ia-inteligenta-artificiala-uniunea-europeana.html. Accessed: 13.08.2024.
8. Government Decision no. 832/2024 on the approval of the National Strategy in the field of artificial intelligence 2024-2027. Official Gazette of Romania, Part I, no. 730 bis of July 25, 2024.
9. Maliha G, Gerke S, Cohen IG, Parikh RB. Artificial Intelligence and Liability in Medicine: Balancing Safety and Innovation. Milbank Q. 2021;99(3):629-647.
10. Ström P, Kartasalo K, Olsson H, et al. Artificial intelligence for diagnosis and grading of prostate cancer in biopsies: a population-based, diagnostic study. Lancet Oncol. 2020;21(2):222-232.
11. Naik N, Hameed BMZ, Shetty DK, et al. Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Front Surg. 2022;9:862322.
12. IBM. IBM Watson for Oncology. https://www.ibm.com/docs/en/announcements/watson-oncology?region=CAN; https://www.ibm.com/us-en/marketplace/ibm-watson-for-oncology. Accessed: 05.09.2024.
13. Zou FW, Tang YF, Liu CY, Ma JA, Hu CH. Concordance Study Between IBM Watson for Oncology and Real Clinical Practice for Cervical Cancer Patients in China: A Retrospective Analysis. Front Genet. 2020;11:200.
14. Jie Z, Zhiying Z, Li L. A meta-analysis of Watson for Oncology in clinical application. Sci Rep. 2021;11(1):5792.
15. Printz C. Artificial intelligence platform for oncology could assist in treatment decisions. Cancer. 2017;123(6):905.
16. Shah M, Naik N, Somani BK, Hameed BMZ. Artificial intelligence (AI) in urology-Current use and future directions: An iTRUE study. Turk J Urol. 2020;46(Suppl. 1):S27-S39.
17. Griffin F. Artificial Intelligence and Liability in Health Care. Health Matrix. 2021;31(1):Article 5. https://scholarlycommons.law.case.edu/healthmatrix/vol31/iss1/5
18. Waddell BS, Padgett DE. Computer Navigation and Robotics in Total Hip Arthroplasty. Orthopaedic Knowledge Update: Hip and Knee Reconstruction. 2018;5:423-427.
19. Pandav K, Te AG, Tomer N, Nair SS, Tewari AK. Leveraging 5G technology for robotic surgery and cancer care. Cancer Rep (Hoboken). 2022;5(8):e1595.