• lefthandeddude@lemmy.dbzer0.com · 7 days ago

    The elephant in the room that no one talks about is that locked psychiatric facilities treat people horribly and are enormously expensive, and psychologists and psychiatrists have such arbitrary power to detain suicidal people, that suicidal people who understand the system absolutely will not open up to professional help about feeling suicidal. Opening up risks being locked up without a cell phone, without being able to do your job, without access to video games, while being billed tens of thousands of dollars per month that can only be discharged through bankruptcy. There is a reason people online have warned about the risks and expenses of calling suicide hotlines like 988, which regularly attempt to geolocate callers and imprison them in mental health facilities, with psychiatric medications required before someone is allowed to leave.

    The problem isn’t ChatGPT. The problem is a financially exploitative psychiatric industry with horrible financial consequences for suicidal patients and horrible degrading facilities that take away basic human dignity at exorbitant cost. The problem is vague standards that officially encourage suicidal patients to snitch on themselves in order to get treatment, with the consequence that, at a professional’s whim, they can be subjected to misery and financial exploitation. Many people who go to locked facilities come out with additional trauma and financial burdens. There are no studies on whether such facilities traumatize patients and worsen patient outcomes, because no one has a financial interest in funding such studies.

    The real problem is this: why do suicidal people see a need to confide in ChatGPT instead of mental health professionals or 988? The answer is that 988 and mental health professionals inflict even more pain and suffering upon people already hurting, in a variable, randomized manner, leading to patient avoidance. (I say randomized in the sense that it is hard for a patient to predict when this pain will be inflicted, rather than something predictable like being involuntarily held every 10 visits.) Psychiatry and psychology do everything they possibly can to look good to society (while being paid), but that doesn’t help the suicidal people who bear the suffering of their “treatments.” Most suicidal patients fear being locked up and removed from society.

    This is combined with the fact that although lobotomies are no longer commonplace, psychiatrists regularly push unethical treatments like ECT, which almost always leads to permanent memory loss. Psychiatrists still lie to patients and families about how likely memory loss from ECT is, falsely stating that memory loss is often temporary and that not everyone gets it, just as they lied to patients and families about the effects of lobotomies. People in locked facilities can be pressured into ECT as a condition of being allowed to leave, resulting in permanent brain damage. They were charlatans then and they are charlatans now, a so-called “science” designed to extract money while looking good, with no rigorous studies on how they damage patients.

    In fact, if patients could be open about being suicidal with 988 and mental health professionals without fear of being locked up, this person would probably be alive today. ChatGPT didn’t do anything other than be a friend to this person. The failure is due to the mental health industry.

    • brygphilomena@lemmy.dbzer0.com · 7 days ago

      While I agree with much of what you said, there is another issue with psychology and psychiatry: they often can’t treat environmental causes or triggers. When I was suicidal, it was also the feeling of being trapped in a job where I wasn’t appreciated and couldn’t advance.

      If I had been placed in an inpatient facility, it would only have exacerbated the issues: I would have had so much to deal with trying to get on medical leave before I got fired for not showing up.

      That said, for SOME mental illnesses, ECT can be a valid treatment. We don’t know how the brain works, but we’ve seen correlations where ECT seems to temporarily reset the way the brain perceives the world. All medical decisions need to be weighed against the side effects to determine whether the benefits outweigh the risks.

      The other issue with inpatient facilities is that it can be incredibly hard to convince the staff that you are doing better. All actions are viewed through the lens that you are ill, and showing the staff you are better is seen as just trying to trick them to get out.

      • lefthandeddude@lemmy.dbzer0.com · 7 days ago

        You’re wrong about ECT. It nearly always results in permanent memory loss, and even if some patients occasionally seem “better” because they remember less of their lives, that does not negate the evil of the treatment. Worse, psychiatrists universally deceive patients about the risk of memory loss, saying it is temporary, when most patients who have had ECT report that the memory loss is permanent. There were people who extolled the virtues of lobotomies decades ago, and the procedure even won a Nobel Prize. The reason it won a Nobel Prize is that patient experiences mean nothing compared to the avarice of a pseudoscientific discipline that is always looking for the next scam, with the worst, cruelest, and most expensive scams always inflicted on the most vulnerable. It is hard and traumatic for patients who have been exploited by their supposed “healers” to come forward with the truth. It is incredibly psychologically agonizing to admit to being duped. Patients were not believed then and are not believed now. You are completely wrong.

    • andros_rex@lemmy.world · 7 days ago

      God, this. Before I was stupid enough to reach out to a crisis line, I had a job with health insurance. Now I have worsened PTSD and no health insurance (the psych hospital couldn’t be assed to provide me with discharge papers). I get to have nightmares for the rest of my life about three men shoving me around, and I can’t sleep for fear of being assaulted again.

    • lmmarsano@lemmynsfw.com · 7 days ago

      Systematic reviews bear out the ineffectiveness of crisis hotlines, so the reason they’re popularly touted in media isn’t for effectiveness. It’s so people can feel “virtuous” & “caring” with their superficial gestures, then think no further of it. Plenty of people who’ve attempted suicide scorn the heightened “awareness” & “sensitivity” of recent years as hollow virtue signaling.

      Despite the expertly honed superficiality on here, ChatGPT is not about to dissuade anyone from their plans to commit suicide. It’s not human, and if it tried, it’d probably just piss people off, who would then turn to more old-fashioned web searches & research. People are entitled to look up information: we live in a free society.

      If someone really wants to kill themselves, I think that’s ultimately their choice, and we should respect it & be grateful.

      The problem is a financially exploitative psychiatric industry with horrible financial consequences for suicidal patients and horrible degrading facilities that take away basic human dignity at exorbitant cost.

      You’re staying at an involuntary hotel with room & board, medication, & 24-hour professional monitoring: shit’s going to cost. It’s absolutely not worth it unless it’s a true emergency. Once the emergency passes, they try to release you to outpatient services.

      The psychiatric professionals I’ve met take their jobs quite seriously & aren’t trying to cheat anyone. Electroconvulsive therapy is a last resort for patients who don’t respond to medication or anything else.

      • QueenHawlSera@sh.itjust.works · 6 days ago

        If someone really wants to kill themselves, I think that’s ultimately their choice, and we should respect it & be grateful.

        I used to be suicidal. I am grateful I never succeeded. You are a monster if you think we should just let people kill themselves.

    • Possibly linux@lemmy.zip · 7 days ago

      I’d also like to point out that people these days are far more isolated than we have ever been. Cell phones make it far too easy to avoid social interaction.

  • falseWhite@lemmy.world · 6 days ago

    arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

    “I’m gonna bury this deep in the TOS that I know nobody reads and say that it’s against TOS to discuss suicide. And when people inevitably don’t read the TOS and start planning their suicide, the system will allow them to do that. And when they kill themselves, I will just point at the TOS and say ‘haha, it’s your own fault!’ I AM A GENIUS” - Sam Altman

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · 7 days ago

    Plenty of judges won’t enforce a TOS, especially if some of the clauses are egregious (e.g. “we own and have unlimited use of your photos”).

    The legal presumption is that the administrative burden of reading a contract longer than King Lear is too much to demand from the common end-user.

  • sudoer777@lemmy.ml · 6 days ago

    As shitty as AI is for counseling, the alternative resources are so few, unreliable, and taboo that I can’t blame people for wanting to use it. People will judge and remember you; AI affirms and forgets. People have mandatory reporting for “self harm” (which can include things like drug use) that gets you incarcerated and fucks up your life even more; AI does not. People vary and give differing advice, while AI uses the same models in different contexts. Counselors are expensive; AI is $20/mo. And lastly, people tend to react fearfully to taboo topics in ways that AI doesn’t. I see a lot of outrage toward AI, but it looks like the same sort of outrage that produced the half-assed, liability-driven “call this number and all of your problems will be solved” incarceration-and-abandonment hotlines that got us here to begin with.

  • Dr. Moose@lemmy.world · 6 days ago

    Fun fact: you can literally go to prison in the US for breaking a ToS, thanks to laws like the CFAA (Computer Fraud and Abuse Act). So if the teen broke the ToS in any way that harms OpenAI (like killing himself), OpenAI actually has a legal path to criminally prosecute him lmao

    The entire law stack is just broken.

  • Smoogs@lemmy.world · 7 days ago

    Didn’t we just shed the stigma of “committing” suicide in favor of “death by suicide,” to stop blaming dead people already?

  • TheObviousSolution@lemmy.ca · 5 days ago

    “Person violated the TOS when they used the magic lamp to make the genie do bad things.”

    You still made the magic lamp and the genie capable of doing those bad things. That’s the thing with intelligence, even the artificial variety. A chainsaw isn’t going to get up and begin a chainsaw massacre just because you throw the right prompt injection at it. It may just reply with words, but words have power.

  • wavebeam@lemmy.world · 7 days ago

    Gun company says you “broke the TOS” when you pointed the gun at a person. It’s not their fault you used it to do a murder.

      • espentan@lemmy.world · 7 days ago

        Well, such a knife’s primary purpose is to help with preparing food, while the gun’s primary purpose is to injure/kill. So one would be used for something for which it was not designed, while the other would’ve been used exactly as designed.

        • Manifish_Destiny@lemmy.world · 7 days ago

          A gun’s primary purpose is to shoot bullets. I can kill just as well with a chemical bomb as with a gun, and I could make both of those from store-bought components that weren’t ‘designed’ for it.

          In this case ‘terms of service’ is just ‘the law’.

          People killing each other is just a side effect of humans interacting with dangerous things. Granted humans just kinda suck in general.

          • freddydunningkruger@lemmy.world · 6 days ago

            Someone programmed/trained/created a chatbot that talked a kid into killing himself. It’s no different than a chatbot that answers questions on how to create explosive devices, or make a toxic poison.

            If that doesn’t make sense to you, you might want to question whether it’s the chatbot that is mindless.

  • Az_1@lemmy.world · 6 days ago

    Well yeah, he did, and the AI is designed to block stuff like this, but he manipulated it into doing it anyway. I’m pretty sure the parents want a nice lump sum from OpenAI for their son’s death.

  • Credibly_Human@lemmy.world · 6 days ago

    The sentiment that the AI bears any noteworthy responsibility for this is purely anti-AI rage that should be aimed at legitimate problems.

    Imagine suing a notebook company for their paper being the paper of choice for self-harming teens?

    Imagine suing home depot for selling rope and a stool to someone who has had enough?

    Imagine suing Nickelback for making music of the quality that encouraged this?

    I’m saying, we’re all aware this is some bits on a server, right? This is clearly not a person, it doesn’t have the impact of a person, and unless they’ve specifically tuned it to manipulate the impressionable into killing people, these sentiments just don’t make sense.

    • CovfefeKills@lemmy.world · 6 days ago

      Fuck personal responsibility, I want to be able to do anything and everything AND sue when I am not safeguarded from myself, but also privacy!

    • Dr. Moose@lemmy.world · 6 days ago

      I agree, the AI hate is becoming a satire of itself. What could be an interesting, meaningful discussion is impossible to have because anti-AI people just yell with their ears covered.

  • Realspecialguy@lemmy.world (banned) · 6 days ago

    He violated the “I’m under 20 and an adult” clause.

    Mainly because 18 and 19 (and 20) aren’t real adults yet.

    Personal anecdote: I was 19 at a house party, my house. I got too drunk and had to go pass out. This 17-year-old wanted my beautiful handsome boy body. She snuck into where I went to sleep and put the moves on me. I tried telling her no and pushing her away, but also, I was only a drunk, horny teen.

    Honestly, by standard measure, she raped me. And that’s not the only time… but as a guy, who do I tell I was supposedly raped?

    • FryHyde@lemmy.zip · 6 days ago

      I’m really struggling to find the connecting thread between this article and your weird statutory story.

    • lmmarsano@lemmynsfw.com · 7 days ago

    Teen wanted out. They got the information they wanted online. Planet better off.

    There’s no problem here, only parental failure & buttmad pearl clutching.