• Echo Dot@feddit.uk · 2 months ago

    You’re using AI to mean AGI, and LLMs to mean AI. That’s on you, though; everyone else knows what we’re talking about.

      • nonfuinoncuro@lemm.ee · 2 months ago

        I’ve given up trying to enforce the traditional definitions of “moot”, “to beg the question”, “nonplussed”, and “literally”; it’s helped my mental health. A little. I suggest you do the same: it’s a losing battle, and the only person who gets hurt is you.

      • Echo Dot@feddit.uk · 2 months ago

        OP is an idiot, though; I hope we can agree on that one.

        Telling everyone else how they should use language is ultimately a moronic move. After all, we’re not French; we don’t have a central authority for how language works.

        • raspberriesareyummy@lemmy.world · 2 months ago

          Telling everyone else how they should use language is just an ultimately moronic move. After all we’re not French, we don’t have a central authority for how language works.

          There’s a difference between objecting to the misuse of language and “telling everyone how they should use language” — you may not have intended it, but that’s a straw man argument.

          What we should all be acutely aware of (though unfortunately many are not) is how language is used to harm humans, animals, and our planet.

          Fascists use language to create “outgroups”, which they then proceed to dehumanize and eventually brutalize or murder. Capitalists speak about investor risk to justify return on investment, then lobby for deregulation of markets, causing human and animal suffering through price gouging and factory farming of livestock. Tech corporations speak about “Artificial Intelligence” and then persuade regulators that — because these systems are “intelligent” — the software may be used in autonomous systems, which go on to cause injury and death when they malfunction.

          Yes, all such harm can be caused by individuals in daily life: individuals can murder, or extort people over something they really need, and a drunk driver can cause an accident that kills people. However, language that normalizes or facilitates such atrocities or dangers on a large scale is itself dangerous, and therefore I will continue calling out those who want to label shitty penny-market LLMs and other deep learning systems as “AI”.

    • intensely_human@lemm.ee · 2 months ago

      Nobody has yet met this challenge:

      Anyone who claims LLMs aren’t AGI should present a text-processing task that an AGI could accomplish but an LLM cannot.

      Or if you disagree with my