• Silver Needle@lemmy.ca · 16 points · 23 hours ago

    😹 How are we concerned with statistical systems being vulnerable (which is shitty, sure) when they don’t even lead to productivity increases, that is they cannot even do the jobs they’re made to do? Get real. What a clownshow

    • Grandwolf319@sh.itjust.works · 12 points · 22 hours ago

      Yeah this is what bugs me.

      There are no trade-offs, only disadvantages.

      It’s like a drug that’s not only bad for you, it’s also no fun to take.

      • Silver Needle@lemmy.ca · 7 points (1 down) · edited · 22 hours ago

        “AIs” can’t even operate vending machines, let alone recognize handwriting reliably or translate text. I know a few people who work in archives with (pre-)medieval manuscripts, and I myself have broken my teeth on Google Translate™ and DeepL™. That’s how I know. There was also a study done on that vending machine thing. Come to think of it, you could make a simple vending machine that collects usage statistics and sends reports via radio that just works, using a few scripts. Emphasis on “works”.
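The “few scripts” claim above is easy to make concrete. A minimal Python sketch, with all names invented for illustration: a controller that tracks stock and sales per slot and builds a usage report string (a real unit would push that string out over the radio link instead of returning it):

```python
# Hypothetical sketch of a vending machine controller that collects
# usage statistics. Class and method names are invented for this example.
from collections import Counter

class VendingMachine:
    def __init__(self, stock):
        self.stock = dict(stock)   # slot -> items remaining
        self.sales = Counter()     # slot -> items sold so far

    def vend(self, slot):
        """Dispense one item from slot; return False if sold out/unknown."""
        if self.stock.get(slot, 0) <= 0:
            return False
        self.stock[slot] -= 1
        self.sales[slot] += 1
        return True

    def report(self):
        # In a real machine, this string would be sent over the radio link.
        return ", ".join(f"{slot}: {n} sold" for slot, n in sorted(self.sales.items()))

m = VendingMachine({"A1": 2, "B2": 1})
m.vend("A1"); m.vend("A1")
m.vend("A1")               # fails silently here: slot A1 is sold out
print(m.report())          # A1: 2 sold
```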

        My my my

          • Eheran@lemmy.world · 1 point · 4 hours ago

          What does that have to do with anything? DeepL is fucking amazing, and so is OCR. Because there are areas where it does not work, or has not been optimized, you think there is no productivity increase at all?

            • Silver Needle@lemmy.ca · 1 point · 2 hours ago

            Google Translate feels more natural than DeepL, even if it’s not as “precise”. I wouldn’t rely on either for communication, or on any machine translation for that matter.

            As someone who speaks more than two languages, I am often dumbfounded by the sheer acceptance of these (I hesitate to call them this) tools.

            Use of this stuff always leads to misunderstandings and inefficiencies down the line, because you actually need to comprehend a sequence of words’ meaning in order to translate it. But ANNs for translation do not understand anything. They map a source to a target of some sort purely by way of statistics. That is basically rolling weighted dice: how you shake the dice is your input/source, and the pips that land on top are the output.
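The weighted-dice picture can be made literal. A toy Python sketch with a completely made-up probability table: each “translation” is just a sample drawn from a distribution conditioned on the source word, with no notion of meaning involved anywhere:

```python
# Toy illustration of the "weighted dice" view of statistical translation.
# The probability table is invented for this example; real systems learn
# such distributions from data, but the sampling step is the same idea.
import random

table = {
    # Source word -> candidate targets with weights. "bank" is ambiguous
    # (river bank vs. financial bank), and the dice decide, not meaning.
    "bank": {"Ufer": 0.4, "Bank": 0.6},
}

def translate_word(word, rng):
    """Pick a target word by sampling from the weighted distribution."""
    targets, weights = zip(*table[word].items())
    return rng.choices(targets, weights=weights, k=1)[0]

rng = random.Random(0)
# The same input, repeated: the output varies with the dice roll,
# because nothing in the process knows which sense was meant.
print([translate_word("bank", rng) for _ in range(5)])
```

The point of the sketch is that the ambiguity is never resolved, only sampled over; a human translator would resolve it from context before producing anything.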

            Now for a short lesson in biology. While it is true that synapses are badly approximated by most ANNs, they are the only thing ANNs really derive from biology that has interesting, reproducible properties which can be marketed to people who need to offload responsibilities. There is a complete disregard for the internal dynamics of cells, and for dynamics that happen at a scale larger than the synaptic makeup of an organism. We do not really have the means to treat the interactions between organism and environment as objects that shape perception. We still don’t know how a thought forms or how meaning is generated, from any perspective that is not purely philosophical, which means we definitely do not know how this happens at a biological level. Anyone who tells you otherwise is either lying or misinformed. As long as the biological bases aren’t crystal clear, these systems will never translate effectively.

            A great man of history once said that all science would be superfluous if the outward appearance and the essence of things directly coincided. Of the tens of millions of strings of words I’ve heard in my lifetime, this easily ranks among the most elegant. Let’s apply it to neuro-“science” in its computerized application. We know very little about the brain. Do you think that whatever devices we build with our current state of knowledge can even come close to what we do as aware beings?

            Again, translation is an involved process that uses every single function of the nervous system. Using statistical methods to very badly approximate our process of reading > contextualizing > imagining > [any step that could be necessary] > output, where reading is followed by vibes and then nothing before outputting, will inevitably degrade information. A short paragraph can be handled when you’re aware that Google Translate, etc. is being used, but a book, something that lives in a very specific and exact environment like a README file or a manual, or, god forbid, political philosophy, leads to consequences that can’t be foreseen when put through DeepL. I think of all the times I had difficulty reading item descriptions on AliExpress because of their translator use. This is not a productivity gain; this is a degradation of quality that will have to be fixed one day, eating up precious time.