I recently asked ChatGPT, the artificial intelligence (AI) chatbot launched late last year, a question. I got a well-worded response that made a lot of sense. It also agreed with my initial guess at what the answer would be.
I might even have used the response to make a decision that affected my family’s finances. Just to be on the safe side, I looked up the applicable laws in the State of California, and since the issue involved real estate, I also reached out to a real-estate lawyer there.
What was the final outcome? That well-written response from ChatGPT was completely wrong. Had I relied on it to resolve my family’s dispute, the decision would have hurt them and made them look uninformed.
So, what happens when people start using AI to make medical decisions?
Many companies and physicians would like to start billing for “Software as a Service” (SaaS) in healthcare.
The services would range from telehealth to, potentially, diagnosis and treatment delivered entirely by AI, with no “attending physician.”
Aside from SaaS, I am concerned that, just as a CPA could turn to AI to answer a question, a physician could do the same and get a completely erroneous answer.
I do recognize that, even though self-driving cars can get into accidents, they may on the whole be safer than cars driven by humans.
It could likewise be argued that even if AI occasionally gives a wrong answer that injures a patient, the number of times it steers a physician away from an incorrect diagnosis may far outweigh the number of times it is wrong.
I am also concerned that AI could help unscrupulous individuals impersonate physicians, offering reasonable-sounding diagnoses and treatment plans to an unsuspecting public.
Almost certainly, patients will use AI to self-diagnose their medical conditions.
Welcome to a new world.