We expect medical professionals to give us reliable information about ourselves and potential treatments so that we can make informed decisions about which (if any) medicine or other intervention we need.
If your doctor instead “bullshits” you (yes – this term has been used in academic publications to refer to persuasion without regard for truth, and not as a swear word) under the guise of authoritative medical advice, the decisions you make could be based on faulty evidence and may result in harm or even death.
Bullshitting is distinct from lying – liars do care about the truth and actively try to conceal it. Indeed, bullshitting can be more dangerous than an outright lie.
Fortunately, of course, doctors don’t tend to bullshit – and if they did there would be, one hopes, consequences through ethics bodies or the law. But what if the misleading medical advice didn’t come from a doctor?
By now, most people have heard of ChatGPT, a very powerful chatbot. A chatbot is an algorithm-powered interface that can mimic human interaction. The use of chatbots is becoming increasingly widespread, including for medical advice.
In a recent paper, we looked at ethical perspectives on the use of chatbots for medical advice.
Now, while ChatGPT, or similar platforms, might be useful and reliable for finding out the best places to see in Dakar, to learn about wildlife, or to get quick potted summaries of other topics of interest, putting your health in its hands may be playing Russian roulette: you might get lucky, but you might not.
This is because chatbots like ChatGPT try to persuade you without regard for truth. Their rhetoric is so persuasive that gaps in logic and facts are obscured. This, in effect, means that what ChatGPT generates includes bullshit.
The gaps
The issue is that ChatGPT is not really artificial intelligence in the sense of actually recognising what you’re asking, thinking about it, checking the available evidence, and giving a justified response. Rather, it looks at the words you’re providing, predicts a response that will sound plausible and provides that response.
This is somewhat similar to the predictive text function you may have used on mobile phones, but much more powerful. Indeed, it can provide very persuasive bullshit: often accurate, but sometimes not. That’s fine if you get bad advice about a restaurant, but it’s very bad indeed if you’re assured that your odd-looking mole is not cancerous when it is.
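To make the analogy concrete, here is a deliberately simplified sketch – a toy “predictive text” written in Python, nothing like the neural network behind ChatGPT – of what prediction without truth-checking looks like. The program simply picks whichever word most often followed the previous one in its tiny training text, with no notion of whether the resulting sentence is true.

```python
# Toy illustration only (not ChatGPT's actual model): a bigram "predictive text"
# that picks the most frequent next word seen in its training text,
# with no notion of whether the result is true.
from collections import Counter, defaultdict

training_text = (
    "the mole is harmless . the mole is benign . "
    "the mole is cancerous . the rash is harmless ."
)

# Count which word tends to follow each word.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "?"

print(predict_next("mole"))  # -> "is"
print(predict_next("is"))    # -> "harmless": sounds plausible, truth never checked
```

A real chatbot predicts using billions of learned parameters rather than a word-count table, so its answers are far more fluent and often accurate – but the basic point stands: plausibility, not truth, drives the output.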
Another way of looking at this is from the perspective of logic and rhetoric. We want our medical advice to be scientific and logical, proceeding from the evidence to personalised recommendations regarding our health. In contrast, ChatGPT wants to sound persuasive even if it’s talking bullshit.
For example, when asked to provide citations for its claims, ChatGPT often makes up references to literature that doesn’t exist – even though the provided text looks perfectly legitimate. Would you trust a doctor who did that?
Dr ChatGPT vs Dr Google
Now, you might think that Dr ChatGPT is at least better than Dr Google, which people also use to try to self-diagnose.
In contrast to the reams of information provided by Dr Google, chatbots like ChatGPT give concise answers very quickly. Of course, Dr Google can fall prey to misinformation too, but it does not try to sound convincing.
Using Google or other search engines to identify verified and trustworthy health information (for instance, from the World Health Organization) can be very beneficial for citizens. And while Google is known for capturing and recording user data, such as terms used in searches, using chatbots may be worse.
Beyond potentially being misleading, chatbots may record data on your medical conditions and actively request more personal information, leading to more personalised – and possibly more accurate – bullshit.
Therein lies the dilemma. Providing more information to chatbots may lead to more accurate answers, but also gives away more personal health-related information. However, not all chatbots are like ChatGPT. Some may be more specifically designed for use in medical settings, and advantages from their use may outweigh potential disadvantages.
What to do
So what should you do if you’re tempted to use ChatGPT for medical advice despite all this bullshit? The first rule is: don’t use it.
But if you do, the second rule is that you should check the accuracy of the chatbot’s response – the medical advice provided may or may not be true. Dr Google can, for instance, point you in the direction of reliable sources. But, if you’re going to do that anyway, why risk receiving bullshit in the first place?
The third rule is to provide chatbots with information sparingly. Of course, the more personalised data you offer, the better the advice may be. And it can be difficult to withhold information, as most of us willingly give up information on mobile phones and various websites anyway.
Adding to this, chatbots can also ask for more. But more data for chatbots like ChatGPT could also lead to more persuasive and even personalised inaccurate medical advice. Talking bullshit and misusing personal data are certainly not our idea of a good doctor. - The Conversation