
Editor’s note: This is the first in a series of articles about artificial intelligence and its potential in orthopaedics.
It seems we hear daily about advances in artificial intelligence (AI)—from self-driving cars, face recognition, and hacker protection to stock portfolios and more. This article provides a primer for those who have limited knowledge of AI, its history, and the newest possibilities.
The idea that a machine can think like a human and make decisions based on self-made rules rather than instructions has been the subject of many science fiction novels. In a 1942 short story, later collected in the 1950 book I, Robot, Isaac Asimov proposed the following three universal rules to protect humans from free-thinking AI robots:
- “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
- “A robot must obey orders given by a human being unless those orders conflict with the First Law.”
- “A robot must protect its own existence as long as such protection won’t conflict with the First or Second Laws.”
These rules were intended to help humans feel more comfortable with decision-making robots. They are not too far removed from today’s concerns about what a self-driving car would do when faced with a choice of hitting a school bus or risking injury to the car’s owner. The car may be deciding in a thousandth of a second who will be our next patient. Although AI clearly has huge potential, it also poses ethical and physical dilemmas.
Where did it start?
In the 1940s and 1950s, mathematician Alan Turing (famously portrayed in the 2014 movie “The Imitation Game”) led the Bletchley Park team that cracked the “unbreakable” code of the German Enigma machine during World War II, and he later proposed a test of a machine’s apparent intelligence when interacting with a human. The “Turing Test” asks whether a person can tell a machine’s responses from a human’s. If a blinded person could not tell the difference between the two, the machine passed the Turing Test. Turing predicted machines would be able to do this in time.
AI did not become a discipline until the late 1950s, when Arthur Samuel coined the term machine learning. He developed a checkers-playing program for the IBM 701, the company’s first commercial computer; it was one of the first programs to learn from its own play and from games against humans. AI then fell out of favor due to a lack of funding, but machines have since taken on humans in chess, Go, and Jeopardy!, and each time the computer has beaten the best human players.
How does it work?
When he was in high school in the mid-1970s, Alan Reznik (now MD, MBA, FAAOS) had access to a good computer for its day. It had a whopping 8K of memory in total (less than 1/100 of a megabyte), could be programmed in only one language (BASIC), had no graphics, and displayed 20 lines of 40-character text on a black-and-white monitor. At that time, the challenge was to create a program to play tic-tac-toe.
One option was to program in the well-known expert strategy and let the computer play. It would be very good, even unbeatable. That solution is a programmed expert system: smart, but not really AI. Instead, the computer was programmed to understand only what it needed to do to play: pick one of the boxes still open after the last turn, know that three in a row wins, remember the mistakes it made, and do not make the same moves again.
The computer’s owner used the second strategy and then asked the computer to play itself over and over, using the same program as its only opponent. After he let it play 10,000 games, the computer developed its own pattern of mistakes to avoid. It was smarter after each game. Next, he let a human play the computer; the human did not know what the computer knew. The computer won or tied every game as an expert human would. That learning by experience is the basic idea behind one part of AI.
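What follows is a minimal Python sketch of that learn-from-your-own-mistakes idea. It is not Dr. Reznik’s original BASIC program; the specific rule used here (record every move the losing side made as a “mistake” and avoid it in later games) and all names in the code are assumptions made for illustration.

```python
# A minimal sketch of learning tic-tac-toe by self-play, assuming a very
# simple rule: any move made by the losing side is recorded as a "mistake"
# and avoided in later games. Not the original BASIC program described above.
import random

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

class MistakeLearner:
    """Plays random legal moves but avoids (position, move) pairs that lost before."""
    def __init__(self):
        self.mistakes = set()          # {(board_as_string, move_index)}

    def choose(self, board):
        key = ''.join(board)
        moves = legal_moves(board)
        good = [m for m in moves if (key, m) not in self.mistakes]
        return random.choice(good or moves)

    def record_loss(self, history, loser):
        # Mark every move the losing side made in this game as a mistake.
        for key, move, mark in history:
            if mark == loser:
                self.mistakes.add((key, move))

def play_game(learner):
    """One game of the learner against itself; returns 'X', 'O', or None (draw)."""
    board, history, mark = [' '] * 9, [], 'X'
    while legal_moves(board):
        move = learner.choose(board)
        history.append((''.join(board), move, mark))
        board[move] = mark
        if winner(board):
            learner.record_loss(history, 'O' if mark == 'X' else 'X')
            return mark
        mark = 'O' if mark == 'X' else 'X'
    return None

if __name__ == '__main__':
    learner = MistakeLearner()
    results = [play_game(learner) for _ in range(10000)]
    print('Mistakes recorded:', len(learner.mistakes))
    print('Draws in the last 1,000 games:', results[-1000:].count(None))
```

After thousands of games, the accumulated mistake list plays the role of the “pattern of mistakes to avoid” described above.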
Another part may be considered a grouping problem: two groups of data need to be separated, like the Sesame Street game in which one of these things is not like the others. Sometimes the differences are big; other times, they are small. The data can be as simple as blue Xs and red Os (Fig. 1) or as complex as male and female faces to be separated by image recognition alone (Fig. 2). Fig. 1 represents a well-defined set of differences, whereas Fig. 2 approaches the far harder end of the problem.
Figs. 1 and 2 courtesy of Kenneth Urish, MD, PhD
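As a concrete, if simplified, illustration of the grouping problem, the sketch below trains a perceptron, one of the oldest linear classifiers, to separate two synthetic clusters of 2-D points standing in for the blue Xs and red Os of Fig. 1. The data, the cluster centers, and the choice of a perceptron are assumptions made for this example, not material taken from the article’s figures.

```python
# A hedged sketch of the grouping problem: a perceptron learns a straight
# line separating two made-up clusters of 2-D points ("X"s vs. "O"s).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: class +1 ("X") around (2, 2), class -1 ("O") around (-2, -2).
xs = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(50, 2))
os_ = rng.normal(loc=[-2.0, -2.0], scale=1.0, size=(50, 2))
points = np.vstack([xs, os_])
labels = np.array([1] * 50 + [-1] * 50)

# Perceptron rule: nudge the weights whenever a point falls on the wrong side.
w, b = np.zeros(2), 0.0
for _ in range(20):                      # a few passes over the data
    for p, y in zip(points, labels):
        if y * (w @ p + b) <= 0:         # misclassified (or on the boundary)
            w += y * p
            b += y

predictions = np.sign(points @ w + b)
print('Separating line: w =', w, 'b =', b)
print('Training accuracy:', (predictions == labels).mean())
```

Well-separated clusters like these are the easy case; real problems, such as telling faces apart, need far richer features and far more flexible models.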
The same is true for many AI systems: a computer analyzes and records patterns, makes inferences from its own experience, and solves problems in ways it decides are best. Much more sophisticated programs learn to look ahead several moves in a game and to resolve conflicts in the data. AI machines can do this equally well with images, data, written words, medical publications, and car movements.
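Looking ahead several moves is often implemented with a game-tree search such as minimax, a technique the article does not name. The sketch below shows the idea on tic-tac-toe, under that assumption: the program tries every move, assumes the opponent replies optimally, and keeps the move with the best guaranteed outcome.

```python
# A hedged sketch of "looking ahead several moves": minimax search on
# tic-tac-toe. Cells hold 'X', 'O', or ' '.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s point of view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                       # board full: a draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)  # opponent then plays optimally
        board[m] = ' '
        score = -score                       # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == '__main__':
    # X has two in a row on top; the search finds the winning square (index 2).
    board = list('XX OO    ')
    print(minimax(board, 'X'))               # expected output: (1, 2)
```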
Why are we more concerned with medical AI?
In medicine, we cannot let a system make mistakes on living people in order to learn from them. However, AI can learn from prior data, draw knowledge from past cases, and use those outcomes to help healthcare professionals analyze future cases. This is what IBM Watson does with cancer teams at the Mayo Clinic. Watson suggests treatment options after digesting and analyzing the literature; in 2018, it began matching patients with breast cancer to clinical trials. At times, Watson has presented options or clinical trials that the team had not previously considered. Other systems are combining genomic information with data from 130,000 cases per year at Memorial Sloan Kettering Cancer Center. Such information is one way to improve physician IQ in medical decision-making.
AI and orthopaedics
There is no doubt AI will find its way into our specialty, if it hasn’t already. We have countless treatment algorithms, classification systems, outcomes databases, and myriad complications to reduce. We are currently developing big data through registries like the American Joint Replacement Registry (AJRR), and as we look for a useful home for AI in orthopaedics, AJRR could be one place to start. AI could be used to screen radiographs for subtle abnormalities, back up an emergency department physician’s nighttime fracture readings with a machine-learning-based second opinion, or follow a bone tumor’s response to chemotherapy. Kenneth Urish, MD, PhD, is currently working on using AI to evaluate MRI data to detect osteoarthritis and track cartilage loss over time. This approach may have many implications as we evaluate the usefulness of treatments such as lubricants, platelet-rich plasma, and stem cells, as well as medical treatments for inflammatory arthropathies.
Future articles in this series will dive more deeply into the most common AI capabilities, including data mining, pattern recognition, statistical modeling, neural networks, and new developments in data science. We will look at human data; medical records; digital information on radiographs, CT, or MRI; and how new strategies are being developed today. Humans are bound by language and current knowledge. AI, in contrast, learns without prejudice.
Alan M. Reznik, MD, MBA, FAAOS, specializes in sports medicine and arthroscopic surgery and serves on the AAOS Now Editorial Board, AAOS Communications Cabinet, and Committee on Research and Quality. Dr. Reznik is chief medical officer of Connecticut Orthopaedic Specialists, associate professor of orthopaedics at Yale University School of Medicine, and a consultant.
Kenneth Urish, MD, PhD, is an assistant professor in the Department of Orthopaedic Surgery at the University of Pittsburgh and associate medical director at the Bone and Joint Center at Magee-Womens Hospital of the University of Pittsburgh Medical Center.