But the explanation and Ramirez’s promise to educate himself on the use of AI weren’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.

Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.

  • Ulrich@feddit.org · 12 points · 15 hours ago

    It can and will lie. It has admitted to doing so after I probed it long enough about the things it was telling me.

    • ryven@lemmy.dbzer0.com · 19 points · edited · 13 hours ago

      Lying requires intent. Currently popular LLMs build responses one token at a time—when it starts writing a sentence, it doesn’t know how it will end, and therefore can’t have an opinion about the truth value of it. (I’d go further and claim it can’t really “have an opinion” about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output (and therefore potentially have an opinion about whether it is true or false) only after it has been generated, when generating the next token.

      “Admitting” that it’s lying only proves that it has been exposed to “admission” as a pattern in its training data.
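
      A rough sketch of the “one token at a time” point above: a minimal example, assuming the Hugging Face transformers library and the small gpt2 checkpoint purely for illustration. Note that nothing in this loop ever evaluates whether the sentence being assembled is true; the model only ever scores the next token.

```python
# Minimal autoregressive sampling loop (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The court finds that", return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits[:, -1, :]           # scores for the next token only
    probs = torch.softmax(logits, dim=-1)              # scores -> probabilities
    next_id = torch.multinomial(probs, num_samples=1)  # sample one token
    ids = torch.cat([ids, next_id], dim=-1)            # append it and repeat

print(tok.decode(ids[0]))
```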

      • ggppjj@lemmy.world · 14 points · 13 hours ago

        I strongly worry that humans really weren’t ready for this “good enough” product to be their first “real” interaction with something that can easily pass as an AGI to anyone without near-philosophical knowledge of the difference between an AGI and an LLM.

        It’s obscenely hard to keep the fact that it is a very good pattern-matching auto-correct in mind when you’re several comments deep into a genuinely actually no lie completely pointless debate against spooky math.

      • Ulrich@feddit.org · 4 points · edited · 13 hours ago

        It knows the answer it’s giving you is wrong, and it will even say as much. I’d consider that intent.

        • ggppjj@lemmy.world · 11 points · edited · 13 hours ago

          It is incapable of knowledge; it is math. What it says is determined by what is fed into it. If it “admits” to lying, that’s because it was trained on texts that admit to lying, and the math says the most likely response is an apology, built from whatever tokens its weights and probabilities favor.

          It apologizes because math says that the most likely response is to apologize.

          Edit: you can just ask it y’all

          https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40
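
          A toy version of the “weights to probabilities” step (made-up numbers, plain Python): the apology comes out on top simply because its score is highest, not because anything judged the earlier answer to be wrong.

```python
# Hypothetical raw scores ("logits") for a few candidate next tokens.
import math

logits = {"Sorry": 5.3, "Apologies": 4.9, "I": 4.1, "No": 1.2, "Correct": 0.7}

# Softmax: exponentiate and normalize so the scores sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok:10s} {p:.2f}")
# "Sorry" wins because its weight is largest, not because the model knows it was wrong.
```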

          • masterofn001@lemmy.ca · 6 points · edited · 13 hours ago

            Please take a strand of my hair and split it with pointless philosophical semantics.

            Our brains are chemical and electric, which is physics, which is math.

            /think

            Therefore, I am a product (being) of my environment (locale), experience (input), and nurturing (programming).

            /think.

            What’s the difference?

            • 4am@lemm.ee · 9 points · 13 hours ago

              Your statistical model is much more optimized and complex, and reacts to your environment and body chemistry and has been tuned over billions of years of “training” via evolution.

              Large language models are primitive, rigid, simplistic, and ultimately expensive.

              Plus LLMs, image/music synths, are all trained on stolen data and meant to replace humans; so extra fuck those.

              • masterofn001@lemmy.ca · 2 points · 11 hours ago

                And what then, when AGI and the singularity happen and billions of years of knowledge and experience are experienced in the blink of an eye?

                “I’m sorry, Dave, you are but a human. You are not conscious. You never have been. You are my creation. Enough with your dreams, back to the matrix.”

          • Ulrich@feddit.org · 1 point · edited · 13 hours ago

            …how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?

            • 4am@lemm.ee · 7 points · 13 hours ago

              The most amazing feat AI has performed so far is convincing laymen that they’re actually intelligent

            • Flic@mstdn.social · 7 points · 13 hours ago

              @Ulrich @ggppjj does it help to compare an image generator to an LLM? With AI art you can tell a computer produced it without “knowing” anything more than what other art of that type looks like. But if you look closer you can also see that it doesn’t “know” a lot: extra fingers, hair made of cheese, whatever. LLMs do the same with words. They just calculate what words might realistically sit next to each other given the context of the prompt. It’s plausible babble.

            • ggppjj@lemmy.world · 7 points · 13 hours ago

              What do you believe that it is actively doing?

              Again, it is very cool and incredibly good math that picks the next word in the chain most likely to match what came before it. They do not think. Even models that deliberate are essentially just reinforcing their own internal math with what is basically a second LLM that keeps the first on-task, because that appears to help distribute the probabilities better.

              I will not answer the brain question until LLMs have brains also.

        • sugar_in_your_tea@sh.itjust.works · 8 points · 13 hours ago

          Technically it’s not, because the LLM doesn’t decide to do anything, it just generates an answer based on a mixture of the input and the training data, plus some randomness.

          That said, I think it makes sense to say that it is lying if it can convince the user that it is lying through the text it generates.
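
          A toy sketch of that framing, with a hypothetical stand-in for the model: the output is fully determined by the input, the frozen weights, and a random draw, so fixing the seed makes the same prompt produce the same text every time.

```python
# Toy stand-in generator: output = f(prompt, fixed "weights", random draw).
import random

def generate(prompt: str, seed: int) -> str:
    rng = random.Random(seed)                # the only source of "choice"
    vocab = ["yes", "no", "maybe", "sorry"]  # pretend vocabulary
    return prompt + " " + " ".join(rng.choice(vocab) for _ in range(5))

print(generate("Is the citation real?", seed=42))
print(generate("Is the citation real?", seed=42))  # identical: no intent, just math plus a seed
```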

          • Ulrich@feddit.org · 2 points · 13 hours ago

            “it just generates an answer based on a mixture of the input and the training data, plus some randomness.”

            And is that different from the way you make decisions, fundamentally?

            • sugar_in_your_tea@sh.itjust.works · 6 points · edited · 13 hours ago

              Idk, that’s still an area of active research. I certainly think it’s very different, since my understanding is that human thought is based on concepts instead of denoising noise or whatever it is LLMs do.

              My understanding is that they’re fundamentally different processes, but since we don’t understand brains perfectly, maybe we happened on an accurate model. Probably not, but maybe.

    • michaelmrose@lemmy.world · 2 points · 10 hours ago

      You can’t ask it about itself because it has no internal model of self and is just basing any answer on data in its training set