Hacker Newsnew | past | comments | ask | show | jobs | submitlogin
A Calif. teen trusted ChatGPT's drug advice. He died from an overdose (sfgate.com)
22 points by freediver 2 hours ago | hide | past | favorite | 23 comments




I skimmed the article, and I had a hard time finding anything that ChatGPT wrote that was all that..bad? It tried to talk him out of what he was doing, told him that it was potentially very fatal, etc. I'm not so sure that it outright refusing to answer and the teen looking at random forum posts would have been better, because they very well might not have told him he was potentially going to kill himself. Worse yet, he could have just taken the planned substances without any advice.

Keep in mind this reaction is from someone that doesn't drink and has never touched marijuana.


I don't yet see how this case is any different from trusting stuff you see on the web in general. What's unique about the ChatGPT angle that is notably different from any number of forums, dark-net forums, reddit etc? I don't mean that there isn't potentially something unique here, but my initial thought is that this is a case of "an unfortunate kid typed questions into a web browser, and got horrible advice."

This seems like a web problem, not a ChatGPT issue specifically.

I feel that some may respond that ChatGPTS/LLMs available for chat on the web are specifically worse by virtue of expressing things with some degree of highly inaccurate authority. But again, I feel this represents the Web in general, not uniquely ChatGPTS/LLMs.

Is there an angle here I am not picking up on, do you think?


if it doesn’t know medical advice, then it should say “why tf would i know?” instead it confidently responds “oh, you can absolutely do x mg of y mixed with z.”

these companies are simultaneously telling us it’s the greatest thing ever and also never trust it. which is it?

give us all of the money, but also never trust our product.

our product will replace humans in your company, also, our product is dumb af.

subscribe to us because our product has all the answers, fast. also, never trust those answers.


AI companies are actively marketing their products as highly intelligent superhuman assistants that are on the cusp of replacing humans in every field of knowledge work, including medicine. People who have not read deeply into how LLMs work do not typically understand that this is not true, and is merely marketing.

So when ChatGPT gives you a confident, highly personalized answer to your question and speaks directly to you as a medical professional would, that is going to carry far more weight and authority to uninformed people than a Reddit comment or a blog post.


> highly inaccurate authority.

The presentation style of most LLMs is confident and authoritative, even when totally wrong. That's the problem.

Systems that ingest social media and then return it as authoritative information are doomed to do things like this. We're seeing this in other contexts. Systems believing all their prompt history equally, leading to security holes.


Those other technologies didn't come with hype about superintelligence that causes people to put too much trust in it.

The different is that OpenAI have much deeper pockets.

I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs".


To sue, do you mean? I don't quite understand what you intend to convey. Reddit has moderately deep pockets. A random forum related to drugs doesn't.

Random forums aren't worth suing. Legally, reddit is not treated as responsible for content that users post under section 230, i.e, this battle has already been fought.

On the other hand, if I post bad advice on my own website and someone follows it and is harmed, I can be found liable.

OpenAI _might plausibly_ be responsible for certain outputs.


Ah, I see you added an edit of "I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs"."

I thought perhaps that's what you meant. A bit mercenary of a take, and maybe not applicable to this case. On the other hand, given the legal topic is up for grabs, as you note, I'm sure there will be instances of this tactical approach when it comes to lawsuits happening in the future.


The difference is that those other mediums enable a conversation - if someone gives bad advice, you'll often have someone else saying so.

The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear.

Ask any model why something is bad, then separately ask why the same thing is good. These tools aren't fit for any purpose other than regurgitating stale reddit conversations.


>"The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear."

I get what you mean in principle, but the problem I'm struggling with is that this just sounds like the web in general. The kid hits up a subreddit or some obscure forum, and similarly gets group appeasement or what they want to hear from people who are self selected for the forum for being all-in on the topic and Want To Believe, so to speak.

What's the actual difference, in that sense, between that forum or subreddit, and an LLM do you feel?

<edit> And let me add that I don't mean this argumentatively. I am trying to square the idea of ChatGPT, in this case, as being, in the end, fundamentally different from going to a forum full of fans of the topic who are also completely biased and likely full of very poor knowledge.


> What's the actual difference, in that sense, between that forum or subreddit, and an LLM do you feel?

In a forum, it is the actual people who post who are responsible for sharing the recommendation.

In a chatbot, it is the owner (e.g. OpenAI).

But in neither case are they responsible for a random person who takes the recommendation to heart, who could have applied judgement and critical thinking. They had autonomy and chose not to use their brain.


Nah, OpenAI can’t have it both ways. If they’re going to assert that their model is intelligent and is capable of replacing human work and authority they can’t also claim that it (and they) don’t have to take the same responsibility a human would for giving dangerous advice and incitement.

This brings to mind some of the “darker” subreddits that circle around drug abuse. I’m sure there are some terrible stories about young people going down tragic paths due to information they found on those subreddits, or even worse, encouragement. There’s even the commonly-discussed account that (allegedly) documented their first experiences with heroin, and then the hole of despair they fell into shortly afterwards due to addiction.

But the question here is one of liability. Is Reddit liable for the content available on its website, if that content encourages young impressionable people to abuse drugs irresponsibly? Is ChatGPT liable for the content available through its web interface? Is anyone liable for anything anymore in a post-AI world?


This is a useful question to ask in the context of carriers having specific defence. Also, publishers in times past had specific obligations. Common carrier and safe harbour laws.

I have heard it said that many online systems repudiate any obligation to act, lest they be required to act, and thus both acquire cost, and risk, when their enforcement of editorial standards fail: that which they permit, they will be liable for.


The guardrails clearly failed here because the model was trying to be helpful instead of safe. We know that these systems hallucinate facts but regular users have no idea. This is a huge liability issue that needs to be fixed immediately.

Took a while to figure out what the OD was of, but it was a combination of alcohol, kratom (or a stronger kratom-like drug), and xanax.

7-O is like kratom in a similar way that fentanyl is like opium, FWIW. It's much, much more potent. That stuff should be banned.

That said, he claims to have taken 15g of "kratom" -- that has to be the regular stuff, not 7-O -- that's still a huge, huge dose of the regular stuff. That plus a 0.125 BAC and benzos... is a lot.


The article mentions 7-OH also known as feel free, which shockingly hasn't been banned and is sold without checks at many stores. There are quite a few Youtube videos talking about addiction to it and it sounds awful.

https://www.youtube.com/watch?v=TLObpcBR2yw


Sam and Dario "The society can tolerate a few deaths to AI"

  "Don't believe everything you read online".



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: