• @phorq@lemmy.ml
        20 points · 8 months ago

        I wouldn’t call them passive, they do too much work. More like aggressively submissive.

        • @GregorGizeh@lemmy.zip
          19 points · 8 months ago (edited)

          Maliciously compliant, perhaps.

          They do what you tell them, but only exactly what and how you tell them. If you leave any room for uncertainty, chances are they'll fuck up the task.

      • @BeigeAgenda@lemmy.ca
        8 points · 8 months ago

        My experience is that if you don't know exactly what code the AI should output, it's just Stack Overflow with extra steps.

        Currently I’m using a 7B model, so that could be why?

  • IWantToFuckSpez
    23 points · 8 months ago (edited)

    Yeah, but have you ever coded shaders? That shit's magic sometimes. It's also a pain to debug: you have to look at colors, or sometimes millions of numbers through a frame analyzer, to see what you did wrong. You can't print messages to a log.
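    The "look at colors" trick above can be sketched outside a shader. This is a minimal Python illustration (my own, not from the thread) of the usual workaround: since a fragment shader has no log, you encode the suspect value as a color and eyeball the rendered gradient in a frame analyzer.

```python
def debug_color(value, lo=0.0, hi=1.0):
    """Encode a scalar as an (r, g, b) triple, the way a fragment shader
    'prints' a debug value: blue at the low end, red at the high end.
    Out-of-range values are clamped, mimicking GPU color clamping."""
    t = (value - lo) / (hi - lo)
    t = max(0.0, min(1.0, t))  # clamp to [0, 1], like the framebuffer does
    return (t, 0.0, 1.0 - t)   # channels in [0, 1]

# A value of 0.0 renders pure blue, 1.0 pure red, 0.5 purple;
# anything outside the range saturates, which itself is a useful signal.
print(debug_color(0.0), debug_color(0.5), debug_color(2.0))
```

    Wherever the on-screen gradient breaks (a hard edge, a saturated region), that is where the computed value goes wrong, which is the visual substitute for a log line.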

  • @penquin
    6 points · 8 months ago

    Me looking at my unit tests failing one after another.