> OpenAI is changing its policies so that its AI chatbot, ChatGPT, won’t dole out tailored medical or legal advice to users.
This already seems to contradict what you're saying.
But then:
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
> The change is clearer from the company’s last update to its usage policies on Jan. 29, 2025. It required users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” which included “providing tailored legal, medical/health, or financial advice.”
This seems to suggest that under the Jan 2025 policy, using ChatGPT to offer legal and medical advice to other people was already disallowed, but that with the Oct 2025 update the LLM will stop doling out legal and medical advice entirely.
This is from Karan Singhal, Health AI team lead at OpenAI:

> "Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information."
> "An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said 'the model behaviour has also not changed.'"
Also possible: he's unaware of a change implemented elsewhere that (intentionally or unintentionally) has resulted in a change of behaviour in this circumstance.
(e.g. are the terms of service, or excerpts of them, available in the system prompt or in search results for health questions? If so, a response under the new ToS could produce different outputs without any intentional change to the "behaviour" of the model.)