ChatGPT-generated health advice becomes less reliable when it is infused with evidence from the internet, according to a world-first Australian study that highlights the risks of relying on large language models for answers.

Research led by the CSIRO and the University of Queensland (UQ) found that while the chatbot handles simple, question-only prompts relatively well,...
The post Not what the doctor ordered: ChatGPT health advice left wanting appeared first on InnovationAus.com.