• El Barto@lemmy.world · 20 points · 1 month ago

    Such a clickbaity article.

    Here’s the meat of it:

    Have they finally achieved consciousness and this is how they show it?!

    No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far. These models don’t care about what is and isn’t random. They don’t know what “randomness” is! They answer this question the same way they answer all the rest: by looking at their training data and repeating what was most often written after a question that looked like “pick a random number.” The more often it appears, the more often the model repeats it.
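    The mechanism that quoted paragraph describes can be sketched as a toy frequency-weighted sampler. This is purely illustrative: the counts below are made up, and a real model works over token probabilities, not a literal answer table — but the effect is the same: the answer most over-represented in the training data dominates the output.

```python
import random
from collections import Counter

# Hypothetical toy "training data": answers people wrote after prompts
# like "pick a random number between 1 and 10". The counts are invented
# for illustration; 7 is deliberately over-represented.
training_answers = ["7"] * 45 + ["3"] * 20 + ["4"] * 10
training_answers += [str(n) for n in range(1, 11)] * 2  # a little of everything

counts = Counter(training_answers)
tokens = list(counts)
weights = [counts[t] for t in tokens]

def model_pick(rng: random.Random) -> str:
    """Sample an answer proportional to its training frequency,
    NOT uniformly -- mimicking next-token prediction."""
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = Counter(model_pick(rng) for _ in range(10_000))
most_common = samples.most_common(1)[0][0]
print(most_common)  # the over-represented answer wins, not a uniform draw
```

    Nothing here "knows" what randomness is; the sampler just reproduces the skew of its data, which is the article's actual point once the clickbait is stripped away.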

  • kromem@lemmy.world · 7 points · 30 days ago

    No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far.

    No, you are taking it too far before walking it back to get clicks.

    I wrote in the headline that these models “think they’re people,” but that’s a bit misleading.

    “I wrote something everyone will know is bullshit in the headline to get you to click on it, before denouncing the bullshit at the end of the article as if it were a PSA.”

    I am not sure I could loathe how ‘journalists’ cover AI any more than I already do.

  • geography082@lemm.ee · 6 points · 30 days ago

    “because they think they are people” … hmmmmmmmmmmmmm this quote makes my neurons stop doing synapse