I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find the emotional barrier especially difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • kromem@lemmy.world · 2 months ago

    So the paper that found that particular bit in Othello was this one: https://arxiv.org/abs/2310.07582

    Which was building off this earlier paper: https://arxiv.org/abs/2210.13382

    And then this was the work replicating it in Chess: https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation
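
    Roughly, those papers train a linear probe on the model’s internal activations to read the board state back out. Here’s a minimal sketch of that idea in PyTorch - the dimensions, names, and training setup are illustrative assumptions on my part, not the papers’ actual code:

    ```python
    # Hypothetical linear-probe sketch: read board state out of a
    # game-playing transformer's hidden states (shapes are assumed).
    import torch
    import torch.nn as nn

    HIDDEN_DIM = 512    # assumed residual-stream width
    NUM_SQUARES = 64    # 8x8 board (Othello or chess)
    NUM_STATES = 3      # e.g. empty / mine / theirs per square

    # One linear map: hidden state -> per-square state logits.
    probe = nn.Linear(HIDDEN_DIM, NUM_SQUARES * NUM_STATES)
    optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(hidden: torch.Tensor, board: torch.Tensor) -> float:
        """hidden: (batch, HIDDEN_DIM) activations from some layer;
        board: (batch, NUM_SQUARES) integer square states in {0,1,2}."""
        logits = probe(hidden).view(-1, NUM_SQUARES, NUM_STATES)
        # CrossEntropyLoss wants the class dim second: (batch, states, squares)
        loss = loss_fn(logits.permute(0, 2, 1), board)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```

    If a plain linear map like that reaches high accuracy, the board state is encoded (roughly) linearly in the activations - that’s the “emergent world representation” claim.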

    It’s not by chance - there are literally interventions where flipping a weight or vector results in the opposite behavior (like acting as if a piece is in a different place, or playing well or badly regardless of the previous moves).
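
    Concretely, an intervention there looks like nudging a hidden state along the probe’s direction for one square, so the model “believes” the opposite piece is there. Continuing the hypothetical sketch above (again, an assumption about the shape of the technique, not the papers’ code):

    ```python
    # Hypothetical intervention: push the hidden state across the probe's
    # decision boundary for one square, flipping what the model "sees".
    def flip_square(hidden: torch.Tensor, square: int,
                    from_state: int, to_state: int,
                    strength: float = 2.0) -> torch.Tensor:
        # Probe weights reshaped into (square, state, hidden) directions.
        W = probe.weight.detach().view(NUM_SQUARES, NUM_STATES, HIDDEN_DIM)
        direction = W[square, to_state] - W[square, from_state]
        direction = direction / direction.norm()
        # Move along the "target state minus current state" direction.
        return hidden + strength * direction
    ```

    Run the edited hidden state through the rest of the network and the model plays as if the board had actually changed - which is hard to square with “it’s just surface statistics”.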

    But it’s more that there’s unlikely to be any actual ‘feeling’ or conscious sentience behind that understanding, beyond the model knowing what the abstracted pattern means in relation to the inputs and outputs. It’s probably simulating some form of ego and self, but not actively experiencing it, if that makes sense.