Joseph H. Schwab, MD, MS, cautioned about security and privacy risks associated with generative artificial intelligence tools during the symposium titled “Generative Artificial Intelligence—A Technology Set to Transform Musculoskeletal Care?” at the AAOS 2024 Annual Meeting.


Published 3/8/2024
Leah Lawrence

Generative AI Could Transform Healthcare

At a time when technological advancements continue to shape the landscape of healthcare, artificial intelligence (AI) is poised to have a profound effect.

“AI can truly revolutionize the way we deliver care to patients and help achieve the goal of improving musculoskeletal health of our population,” said AAOS Past President Kevin J. Bozic, MD, MBA, FAAOS.

Dr. Bozic, inaugural chair of the Department of Surgery and Perioperative Care at Dell Medical School at the University of Texas at Austin, moderated the President’s Symposium at the 2024 AAOS Annual Meeting alongside Prakash Jayakumar, MD, PhD, assistant professor of surgery and perioperative care and director of value-based healthcare and outcome measurement innovations at Dell Medical School.

The symposium, titled “Generative Artificial Intelligence—A Technology Set to Transform Musculoskeletal Care?” was an exploration of generative AI and its risks, challenges, and opportunities in the healthcare arena. It included insights from several experts to help attendees navigate these topics.

AI introduction
Attendees received an introduction to the basics and paradigms of AI by Justin Krogue, MD, a clinical scientist with Google and an orthopaedic surgeon at the University of California, San Francisco. AI can be defined as the effort to automate intellectual tasks normally performed by humans—a definition that Dr. Krogue acknowledged is intentionally broad because there are many approaches to achieve AI. They include traditional machine learning (ML), which is shallow and has only a few computations between inputs and predictions, and deep learning (DL) and neural networks, which have many intermediate steps of computation between inputs and predictions. DL models are often more applicable in the healthcare sector because in medicine, not all associations are linear.
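That point about nonlinear associations can be illustrated with a toy sketch (not from the talk; the weights below are hand-chosen for illustration, not learned): a single linear computation cannot reproduce an XOR-style relationship between two inputs, while stacking just two computations can.

```python
# XOR truth table: the output depends nonlinearly on the two inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def relu(z):
    """Rectified linear unit, the nonlinearity between stacked layers."""
    return max(z, 0.0)

def deep_xor(x1, x2):
    """Two stacked computations: a ReLU hidden step, then a linear
    readout. With these hand-chosen weights it reproduces XOR exactly."""
    s = x1 + x2
    return relu(s) - 2.0 * relu(s - 1.0)

def shallow(x1, x2):
    """A single linear computation w1*x1 + w2*x2 + b. The best
    least-squares fit to the XOR table is w1 = w2 = 0, b = 0.5,
    i.e. it predicts 0.5 for every input and never matches the labels."""
    return 0.0 * x1 + 0.0 * x2 + 0.5

for (x1, x2), label in data:
    print((x1, x2), "deep:", deep_xor(x1, x2),
          "shallow:", shallow(x1, x2), "label:", label)
```

The extra intermediate step is what lets the deeper model bend the input space; a shallow model, however well tuned, stays linear.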

Dr. Krogue explained important differences between narrow AI and generative AI using an example. With narrow AI, someone could show a model 1,000 images of dogs and hot dogs and then ask the model to discern one from the other in a new series of photos.

“For generative AI, after giving it lots of things to read that are not about hot dogs, I could ask it to tell me about hot dogs and dogs and to do it in German,” Dr. Krogue said. “It has a flexible output.”

There have already been some applications of narrow AI using DL and neural networks in orthopaedics, and future applications are broad. Generative AI, often built on large language models (LLMs), has the potential for one model to be utilized to perform a large variety of medical tasks, even ones it was not trained on. Unfortunately, both narrow AI and generative AI still have relevant limitations.

Approach with caution
“All of us use AI all of the time,” said Joseph H. Schwab, MD, MS, director of the Center for Surgical AI in the Department of Computational Biomedicine and director of spinal oncology in the Department of Orthopaedic Surgery at Cedars-Sinai Medical Center, Los Angeles. “AI sounds good, but in healthcare you can imagine some challenges. Things have gone wrong and will continue to go wrong.”

First among the possible challenges for AI are concerns about patient privacy. Dr. Schwab used ChatGPT, the chatbot developed by OpenAI based on an LLM, as an example.

“When you click ‘I Agree’ on [apps], that is the same as signing a contract,” Dr. Schwab said. What will those companies do with user data? ChatGPT’s privacy policy states that user data may be used to conduct research, develop new programs, and carry out business transfers, among other things.

Dr. Schwab expressed concern that patients may already be taking their personal health data and entering it into programs such as ChatGPT to answer questions or receive guidance.

The possibility of bias is another well-known limitation of generative AI. Big data are often used to train algorithms, but most data are imbued with historical patterns and norms that may encode existing problems. Generative AI has also been known to provide answers that seem plausible but are not true.

“LLMs will be a part of healthcare in the future,” Dr. Schwab said. “But we can’t embrace LLMs at scale until we figure out some of these issues.”

Technology has already changed many aspects of how people work and live, from communication and collaboration, to shopping and retail, to transportation and mobility, according to Joyce J. Shen, MBA, a venture capital/private equity investor and business builder with technical expertise regarding emerging technologies, ML/AI, data economy, and cybersecurity.

“Because of some of the challenges that have been discussed, healthcare is the last bastion that AI/ML have not fully penetrated, but there are huge opportunities,” Ms. Shen said. “There is huge spending and lots of consolidation from the private sector to try to disrupt the industry.”

During the past few years, as she was helping her mother and taking her to many healthcare appointments, Ms. Shen saw for herself the potential applications for AI within healthcare.

“I have seen doctors write notes, delegate notes with an assistant, some dictate notes into ear pods,” Ms. Shen recalled. “With generative AI agents using LLM to build virtual agents, I suspect a lot of these tasks will go away in the next few years.”

Generative AI agents could take over the role of dictating or writing notes and could be incorporated into practices to help schedule appointments and possibly imaging. She hoped the symposium would stimulate attendees to think about innovative opportunities to use AI/ML in their own organizations.

Hype or hope?
The final speaker of the event was Jacobien Oosterhoff, MD, PhD, assistant professor in AI for healthcare systems at Delft University of Technology, Netherlands. Dr. Oosterhoff discussed the integration of AI in orthopaedics from a systems perspective. She acknowledged that there is fear around the safety of AI, and that fear is not necessarily a bad thing.

Dr. Oosterhoff recalled a quote from Elon Musk: “If things are not failing, you are not innovating enough.” In some ways, that saying may be true, Dr. Oosterhoff noted, but unlike Musk, who can crash rockets in the ocean, physicians cannot do the equivalent of crashing a rocket inside a hospital or a patient.

There are professionals within orthopaedics who know how to develop AI models and validate them in independent populations, but the field still does not know how to move toward implementation, Dr. Oosterhoff said. This premise was supported by a recent systematic review of ML models in orthopaedic trauma, which identified 45 models worldwide, none of which had reached implementation.

Hope remains that AI will support healthcare in the future in both clinical decision support and operational tasks, according to Dr. Oosterhoff, but the hype is tempered by concerns that AI lacks trustworthiness, contains bias, and faces education gaps.

Leah Lawrence is a freelance medical writer for AAOS Now.


  1. Dijkstra H, van de Kuit A, de Groot T, et al: Systematic review of machine-learning models in orthopaedic trauma. Bone Jt Open. 2024;5(1):9-19.