Before a child goes into an operating room, a large screen displays a risk score. The score predicts potential complications, estimates recovery time, and recommends a course of action. The numbers look precise, but the process that produces them is much harder to see.
Artificial intelligence (AI) is rapidly emerging in pediatric surgery, offering greater diagnostic accuracy, enhanced surgical planning, and less paperwork for clinicians. However, the same technologies that help doctors streamline care have also introduced a new source of uncertainty for parents and caregivers making decisions on behalf of children who cannot speak for themselves.
Johns Hopkins All Children’s Hospital’s Division of Pediatric Surgery recently published an article in the World Journal of Pediatric Surgery examining how AI technologies intersect with the traditional ethical principles of medicine. The authors argue that the ultimate adoption of AI in surgery will depend less on the technical capabilities of these tools than on how they are monitored and regulated.
The advantages of AI in the operating room (OR) are obvious. Machine learning programs have been developed that can flag surgical risks and complications. Other systems can interpret imaging studies and predict the likelihood of complications after a complex procedure. There are also applications that listen during patient visits and draft clinical notes, enabling surgeons to concentrate on the family rather than on a computer screen.

In practice, AI technologies have already changed how surgical care is provided. The American College of Surgeons’ National Surgical Quality Improvement Program (NSQIP), introduced in 2016, provided surgeons with risk estimates based on specific patient characteristics using traditional statistical models. Over the past few years, the model has been converted into a machine learning program, which better captures how risk factors interact to shape outcomes.
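To illustrate what "capturing the interactive nature of variables" means, here is a minimal sketch. It is not the NSQIP model; the risk factors, weights, and thresholds are hypothetical. A traditional additive score sums each factor independently, while an interaction-aware rule (the kind a machine learning model can learn) recognizes that risk compounds when certain factors co-occur.

```python
# Illustrative sketch only -- NOT the NSQIP algorithm.
# All factor names, weights, and thresholds below are hypothetical.

def additive_risk(age_months, weight_kg, asa_class):
    """Traditional-style score: each factor contributes independently."""
    score = 0.0
    score += 2.0 if age_months < 12 else 0.0   # infant
    score += 1.5 if weight_kg < 10 else 0.0    # low weight
    score += 1.0 * asa_class                   # baseline health status
    return score

def interaction_aware_risk(age_months, weight_kg, asa_class):
    """ML-style rule: risk compounds when factors co-occur."""
    score = additive_risk(age_months, weight_kg, asa_class)
    # An infant who is ALSO underweight is riskier than the sum of the
    # two factors taken separately -- an interaction an additive score misses.
    if age_months < 12 and weight_kg < 10:
        score += 3.0
    return score

print(additive_risk(6, 8, 2))           # additive score for a small infant
print(interaction_aware_risk(6, 8, 2))  # same patient, interaction included
```

The point of the sketch is only the structural difference: the second function can assign extra risk to a combination of inputs, which is what machine learning models do automatically across many variables at once.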
As a result, difficult conversations with families about risk and uncertainty have become easier. Families weighing treatment options for conditions such as biliary atresia or neuroblastoma may benefit from more accurate predictions. For surgeons, automated documentation and summary tools cut down on chart review, an activity that has been linked to burnout.
Pediatric care serves a different patient population from adult medicine. Children make up a smaller, more heterogeneous group, and their data are underrepresented in large databases. Those gaps can undermine the predictive ability of the technology.
The ethical issue of autonomy begins with the idea that families should make informed decisions about a child’s medical care. In pediatric surgery, parents or guardians make those decisions, often under great stress. AI tools may ease consent discussions by translating complex medical language into plain terms.
Several proposals aim to enhance understanding, including systems that detect signs of distress or gauge family involvement and then automatically alert physicians when families need additional assistance. However, the authors caution against letting technology replace direct communication.
Families should understand how AI technology may shape their child’s diagnosis and treatment. They should also have a say in whether their information is used to train future AI systems, and declining should not create barriers to care.

The growing use of surgical robots adds another layer of complexity. Many current surgical systems are gaining partially autonomous capabilities. Lee and colleagues (2018) describe a classification system for robotic systems that ranges from 1 to 6, based on how much human support is needed.
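One way to picture such a scale is as an ordered enumeration. The level labels below are illustrative assumptions loosely modeled on commonly cited levels-of-autonomy frameworks, not the published definitions from the cited classification.

```python
# Hypothetical sketch of a 1-6 levels-of-autonomy scale for surgical robots.
# The labels are assumptions for illustration, not the cited classification.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    NO_AUTONOMY = 1           # surgeon performs every action directly
    ROBOT_ASSISTANCE = 2      # robot steadies or constrains the surgeon's motion
    TASK_AUTONOMY = 3         # robot completes discrete tasks under supervision
    CONDITIONAL_AUTONOMY = 4  # robot proposes plans; surgeon approves each step
    HIGH_AUTONOMY = 5         # robot acts; surgeon monitors and can intervene
    FULL_AUTONOMY = 6         # no human involvement required

def requires_human_supervision(level: AutonomyLevel) -> bool:
    # Under this sketch, only full autonomy removes the supervising surgeon.
    return level < AutonomyLevel.FULL_AUTONOMY
```

Encoding the scale as an ordered type makes the article's liability point concrete: every level below the top still has a human in the loop who bears responsibility.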
To date, most robotic systems used in surgery remain under human supervision, meaning the surgeon retains responsibility for every aspect of the operation. If fully autonomous devices were approved, it would raise the question of who is liable when something goes wrong.
AI also bears on the duty to promote good outcomes and avoid harm. Tools already exist that help AI reach diagnoses quickly. For example, deep-learning algorithms developed for use during surgery can help surgeons identify Hirschsprung disease faster than current practice allows, and faster diagnoses mean shorter operating room times, which benefits the patient.
However, over-reliance on AI-generated results raises the risk of misdiagnosis, which can mean unnecessary surgery or incomplete treatment. In the case of Hirschsprung disease, the wrong amount of intestine may be removed, with potential long-term consequences.
With the advent of AI, the concept of liability or accountability becomes less straightforward. If an AI-assisted robotic device plays a role in a patient complication, the potential for liability may extend beyond the surgeon to include the hospital and manufacturer. The authors suggest that pathways for addressing harm caused by these technologies must be defined prior to widespread adoption.
Another consideration is the principle of justice, which calls for fairness. For pediatric patients, fairness depends heavily on adequate data. Pediatric data are currently underrepresented in many of the datasets used to develop medical imaging products, making it less likely that algorithms will perform well for children. For a condition such as appendicitis, that can mean delayed diagnosis and more complications.
Geographic variability is another equity concern. Many machine learning algorithms are trained on data from only a handful of states, limiting their relevance elsewhere. Collecting representative samples from underrepresented groups and adjusting model outputs to better reflect geographic distribution have both been explored as solutions, but each carries its own drawbacks.
Expanding datasets by tapping previously unused sources creates new data-privacy challenges. Larger, more centralized datasets have been associated with more cybercrime and data breaches, and because children’s medical records follow them for decades, pediatric patients may face greater long-term risk.
One way to establish accountability is to make AI systems more transparent about how they reach their conclusions. Systems long viewed as “black boxes” present a challenge to healthcare professionals: clinicians can see the input and the output, but not the reasoning behind a recommendation.
The lack of transparency makes it difficult for healthcare workers to explain AI-driven decisions to patients and their caregivers. It also makes it harder to identify and resolve potential errors. Explainable AI may address this issue by improving transparency in decision-making.
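A minimal sketch of what "explainable" output can look like in practice, using a deliberately simple linear score (the feature names and weights are hypothetical, and real explainable-AI methods are far more sophisticated): instead of reporting only a total, the system reports each input's contribution.

```python
# Toy example of explainable output for a linear risk score.
# Feature names and weights are hypothetical, for illustration only.

WEIGHTS = {"prematurity": 2.0, "low_weight": 1.5, "asa_class": 1.0}

def explain_risk(features):
    """Return the total score plus a per-feature breakdown."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

total, breakdown = explain_risk({"prematurity": 1, "low_weight": 0, "asa_class": 3})
# `breakdown` shows the clinician WHICH inputs drove the score,
# rather than presenting the total as an unexplained number.
```

For a true black-box model the contributions cannot simply be read off the weights, which is exactly the gap that explainable-AI techniques aim to close.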
However, the authors express concern that over-reliance on automated recommendations could erode clinicians’ critical thinking skills. Access is a further challenge: as care comes to depend on digital tools, patients without reliable internet or digital literacy risk being left behind.
The continued expansion of AI in healthcare may therefore worsen existing disparities for these populations.
Ultimately, the authors conclude that AI-assisted devices in pediatric surgical practice should support human decision-making rather than replace it. Surgeons must remain actively engaged in the development and evaluation of these technologies.
Pediatric surgery is at a critical point as AI capabilities continue to evolve rapidly. At the same time, systems for regulation and oversight are still being developed.
Research findings are available online in the journal World Journal of Pediatric Surgery.
The original story “AI is transforming pediatric surgery, but with strong ethical concerns” is published in The Brighter Side of News.