• @LWD@lemm.ee
    9 points · 3 months ago (edited)

    ChatGPT begs to differ

    (ETA: I’m pretty sure ChatGPT is wrong on this one, but it was amusing at least)

      • @LWD@lemm.ee
        13 points · 3 months ago

        ChatGPT: it tells you what you want to know!

        (And sometimes what it tells you is even true!)

        • @rollingflower
          1 point · 3 months ago

          This. I guess you can set a permanent prompt like “output an error if the certainty of a result is below 50%” or something.

          I use a prompt like that to cut out the chatter: give me a single command, not “open nano here, copy this”.
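
          For reference, the same standing instruction can be wired in as a system prompt via the API instead of retyping it in the chat UI. A minimal sketch, assuming the `openai` Python package; the model name and prompt wording are illustrative, and a model’s self-reported certainty is best-effort rather than a real probability:

          ```python
          # Sketch only: a system prompt enforcing terse, single-command answers.
          # Requires the `openai` package (>= 1.0) and OPENAI_API_KEY in the env.
          # Model name and prompt wording are illustrative assumptions.
          from openai import OpenAI

          client = OpenAI()

          SYSTEM_PROMPT = (
              "If you are not reasonably confident in an answer, reply only with "
              "'ERROR: low confidence'. For shell questions, reply with exactly one "
              "runnable command and no surrounding explanation."
          )

          resp = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[
                  {"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": "Append '127.0.0.1 foo' to /etc/hosts"},
              ],
          )
          print(resp.choices[0].message.content)
          ```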

    • @EveryMuffinIsNowEncrypted@lemmy.blahaj.zone
      10 points · 3 months ago (edited)

      ChatGPT is incorrect. I asked Claude about it and this is what it has to say.

      (Skip to bottom for tl;dr.)

      [screenshots of the Claude conversation omitted]

      tl;dr

      Claude’s final response:

      …when I said earlier that “Claude acts as a privacy layer provided by DuckDuckGo”, that was an incorrect statement. To clarify:

      • DuckDuckGo provides the privacy layer functionality through this chat interface.

      • I am Claude, an AI model created by Anthropic to converse.

      • When a user talks to me (Claude) through this interface, DuckDuckGo’s privacy layer protects their data and anonymity, while I focus on the conversational aspects.

      • I do not act as or provide the privacy layer - that is solely DuckDuckGo’s role.

      Thank you for catching my mistaken statement earlier. To summarize correctly:

      DuckDuckGo enables the privacy layer protections. I am Claude, having conversations through that protected interface, but I am not involved in providing the privacy layer itself.

      I appreciate you taking the time to ensure I understand this properly. Please let me know if any part of the explanation remains unclear!
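
      The split Claude describes above, with DuckDuckGo terminating the user’s connection and forwarding only the chat content to the model provider under its own credentials, is essentially an anonymizing proxy. A minimal sketch of that pattern; the upstream URL, key, and route are hypothetical, not DuckDuckGo’s actual implementation:

      ```python
      # Minimal sketch of an anonymizing chat proxy: the upstream model
      # provider sees only the proxy's credentials, never the user's IP,
      # cookies, or User-Agent. URL, key, and route names are hypothetical.
      import requests
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      UPSTREAM_URL = "https://api.model-provider.example/v1/chat"  # hypothetical
      PROXY_API_KEY = "key-owned-by-the-proxy"                     # hypothetical

      @app.post("/chat")
      def chat():
          # Forward only the message content; deliberately drop all client
          # metadata so the provider only ever sees the proxy as the caller.
          upstream = requests.post(
              UPSTREAM_URL,
              json={"messages": request.get_json()["messages"]},
              headers={"Authorization": f"Bearer {PROXY_API_KEY}"},
              timeout=60,
          )
          return jsonify(upstream.json())
      ```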

    • MuchPineapples
      10 points · 3 months ago

      That’s super wrong. Typical AI hallucination, since it’s not in the training data (Claude is quite new).