I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find the emotional barrier especially difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?
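
One way to make the “convincing but clueless” point concrete is a toy next-word predictor. This is a minimal sketch, not how a real LLM works (real models use neural networks over huge corpora, not bigram counts), but the underlying idea is the same: pick a plausible next word given what came before, with no model of the world behind it.

```python
import random
from collections import defaultdict

# Toy "training data" — invented for illustration.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Record which word follows which. This is ALL the model "knows".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate text by repeatedly picking a word that tends to come next.
word = "the"
out = [word]
for _ in range(12):
    word = random.choice(follows[word])
    out.append(word)

print(" ".join(out))
# e.g. "the dog sat on the mat . the cat chased the dog"
# Grammatical-looking output, but there is no cat, no dog, and no
# intent — only statistics about which word tends to follow which.
```

If the output ever sounds malicious, nothing changed inside the program; it just sampled a continuation that happened to read that way. Scale that intuition up and you get the “convincing sentences without intent” point.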

  • kaffiene@lemmy.world
    2 months ago

    You could argue that embeddings constitute some kind of stored knowledge. But I do agree with your larger point: LLMs are getting too much credit because of the language we use to describe them.
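
    To illustrate what “embeddings as stored knowledge” means, here is a minimal sketch. The vectors below are made up for illustration; real embeddings are learned from data and have hundreds or thousands of dimensions. The “knowledge” is just geometry: related words end up near each other.

    ```python
    import math

    # Hypothetical 4-dimensional embeddings (values invented for this example).
    emb = {
        "cat": [0.9, 0.8, 0.1, 0.0],
        "dog": [0.8, 0.9, 0.2, 0.0],
        "car": [0.1, 0.0, 0.9, 0.8],
    }

    def cosine(a, b):
        # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norm

    print(cosine(emb["cat"], emb["dog"]))  # high: "cat" and "dog" are close
    print(cosine(emb["cat"], emb["car"]))  # low: "cat" and "car" are far apart
    ```

    So the model “stores” that cats are more like dogs than like cars, but only as distances between vectors, not as anything it could be said to understand.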