• @bacon_pdp@lemmy.world
    -4 points · 6 days ago

    I agree current technology is extremely unlikely to achieve general intelligence, but my point was that we should never try to achieve AGI; it is not worth the risk until after we solve the alignment problem.

    • @chobeat@lemmy.ml (OP)
      3 points · 6 days ago

      “Alignment problem” is what CEOs use as a distraction, to deflect responsibility from their grift and frame the issue as a technical problem. That’s another term that makes you lose any credibility.

      • @bacon_pdp@lemmy.world
        -1 point · 6 days ago

        I think we are talking past each other. Alignment with human values is important; otherwise we end up with a paper-clip optimizer that wants humans only as a feedstock of atoms, or one that decides to pull a “With Folded Hands” situation.

        None of the “AI” companies are even remotely interested in or working on this legitimate concern.

    • Balder
      1 point · 6 days ago

      Unfortunately, game theory says we’re gonna do it as soon as it’s technologically possible: any lab that holds back just hands the lead to one that doesn’t (see the sketch below).
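
      A minimal sketch of that argument, treating the race as a one-shot prisoner’s dilemma between two labs. All the payoff numbers here are made up for illustration; the only assumption that matters is that being the sole lab to hold back is the worst outcome:

      ```python
      # Hypothetical "race to AGI" payoffs, framed as a one-shot prisoner's dilemma.
      # Numbers are illustrative, chosen so that racing strictly dominates
      # holding back for each lab no matter what the other lab does.
      from itertools import product

      ACTIONS = ("hold_back", "race")

      # payoffs[(a, b)] = (payoff to lab A, payoff to lab B)
      payoffs = {
          ("hold_back", "hold_back"): (3, 3),  # both wait for alignment: safest outcome
          ("hold_back", "race"):      (0, 4),  # the cautious lab is simply outcompeted
          ("race",      "hold_back"): (4, 0),
          ("race",      "race"):      (1, 1),  # everyone races: risky, but nobody is left behind
      }

      def best_response(player, opponent_action):
          """Action maximizing this player's payoff against a fixed opponent action."""
          if player == "A":
              return max(ACTIONS, key=lambda a: payoffs[(a, opponent_action)][0])
          return max(ACTIONS, key=lambda b: payoffs[(opponent_action, b)][1])

      # A profile is a Nash equilibrium when each action is a best response to the other.
      for a, b in product(ACTIONS, repeat=2):
          if best_response("A", b) == a and best_response("B", a) == b:
              print(f"equilibrium: A={a}, B={b}, payoffs={payoffs[(a, b)]}")

      # Only (race, race) prints. Mutual restraint pays better for both labs,
      # but it is not an equilibrium: each lab gains by defecting from it.
      ```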