
ChatGPT is officially resigning its role as our deeply unqualified (but extremely confident) doctor, lawyer, and financial planner.
Driving the news: OpenAI has updated its usage policies for ChatGPT to curb personalized medical, financial, and legal advice. The move puts the onus on users not to seek tailored advice without consulting a licensed professional.
- For example, if you ask ChatGPT to diagnose your cough, it should now point you toward a professional opinion instead of speculating about what it could be.
Why it’s happening: OpenAI doesn’t want to be legally liable for the mistakes its chatbot inevitably makes, or for how users interpret and apply the advice they’re given.
Why it matters: An alarming number of people have been leaning on chatbots to make consequential decisions. A recent Leger poll found that 52% of Canadians rely on AI for financial guidance, 36% for health advice, and 31% for legal help.
- Chatbots are known to invent legal cases, some of which have been cited in court. Just last month, a Canadian man was fined for inadvertently including AI hallucinations in his legal defence.
- On the healthcare side, a University of Waterloo study found that ChatGPT answered medical diagnostic questions incorrectly nearly two-thirds of the time.
Zoom out: If OpenAI’s policy becomes the industry standard, many lawyers and financial advisors are likely breathing a little easier knowing their jobs are safe (for now), though the move could give less-cautious rivals an opening to compete against the current AI market leader.—LA