Artificial intelligence (AI) may impact orthopedics in many ways: reading X-rays, planning surgery, making diagnoses, and "reading" charts and the literature using natural language processing.
As new as AI appears, its roots reach back to da Vinci's self-propelled cart and to Babbage's calculating machines of the 1800s. In 1950, Alan Turing wrote a groundbreaking article, "Computing Machinery and Intelligence." Turing imagined that computers need not be electronic and that they might eventually have the equivalent of infinite memory, and he predicted that future machines would "imitate" human thinking. In the late 1950s the perceptron, an electronic representation of a human neuron, was introduced; by combining perceptrons, a truly intelligent machine seemed possible. The ideas of Artificial General Intelligence (AGI), a machine that can solve any problem without being programmed for it, and Artificial Super Intelligence (ASI), a machine with superhuman intelligence, followed. There was little concern for ethical issues as AI conquered the challenges of chess, Go, and Jeopardy. Now that AI has wide applications in medicine, however, the Four Basic Principles of Medical Ethics cannot be ignored.
Autonomy is the patient's freedom to choose a treatment and to decide free of coercion.
Justice is the equal application of treatments across populations and the equal distribution of the burdens and benefits of medical research. For example, we cannot conduct research only on prisoners, and advanced care cannot be offered only to a privileged group (see the Nuremberg Code).
Beneficence is the idea that the treatment provided benefit the patient and society.
Nonmaleficence is the age-old principle of doing no harm to the patient or to society.
AI users (such as governments or insurers) may direct us to treatments from outside the doctor-patient relationship, restricting patient autonomy. We see this in many pre-approval processes in which care is delayed or denied by an algorithm, which may prevent an outlier from getting the best care. AI-generated hospital clinical pathways may likewise remove autonomy and disrupt the doctor-patient relationship.
Justice can be distorted by demographic data. An underserved population may stay underserved because zip-code data confirm that the population does poorly; the group's general care may need to improve before the data will follow. The AI will be statistically correct yet functionally and ethically wrong.
Beneficence may collide with aggregate societal data that impinge on an individual's right to a beneficial option. A new treatment may be undervalued because cost data are skewed or because data for that specific treatment in this patient type are lacking. Using prior AI data to judge current benefit may inhibit the adoption of new procedures and stifle innovation. Artificially inflated hospital pricing can likewise skew a beneficence calculation (think of the $10 Tylenol, or the $2,500 hospital MRI when the outpatient charge is a third of the price).
Maleficence may occur when the principles above collide. Justice may raise costs beyond the societal benefit. Autonomy may demand a marginal MRI when there is a risk of a lawsuit for a missed diagnosis, no matter what the data say, especially when the personal cost to the physician is an upset patient or time in court defending a sound medical judgment that turned out to be wrong, and even more often when the local standard of care is to over-test for those same reasons.
In the end, we cannot apply AI while blind to its ethical implications. There must be a human in the loop when we set up the application. Biased data will give us biased results, and AI will frequently amplify that bias. Cost data must be scrutinized to separate real costs from artificial ones. Zip codes are useful for tracking epidemics but may carry built-in socioeconomic biases. No-show studies may say more about access to health care than about the relative risks of surgery. As AI expands in health care, each application must include a safety check for ethical use and the equitable allocation of resources.