Such a clickbaity article.
Here’s the meat of it:
Have they finally achieved consciousness and this is how they show it?!
No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far. These models don’t care about what is and isn’t random. They don’t know what “randomness” is! They answer this question the same way they answer all the rest: by looking at their training data and repeating what was most often written after a question that looked like “pick a random number.” The more often it appears, the more often the model repeats it.
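The mechanism the article describes, reproducing training-data frequencies rather than sampling uniformly, can be sketched in a few lines of Python. The counts below are invented for illustration (7 really is overrepresented in human "random number" answers, but these exact figures are made up):

```python
import random
from collections import Counter

# Hypothetical counts of answers seen after "pick a random number
# between 1 and 10" in a training corpus (illustrative numbers only).
answer_counts = Counter({7: 450, 3: 210, 4: 120, 8: 90, 5: 60,
                         2: 30, 6: 20, 9: 10, 1: 5, 10: 5})

def pick_like_a_model(counts, rng=random):
    """Sample an answer in proportion to how often it appeared in
    the corpus: frequency reproduction, not actual randomness."""
    numbers = list(counts)
    weights = [counts[n] for n in numbers]
    return rng.choices(numbers, weights=weights, k=1)[0]

samples = Counter(pick_like_a_model(answer_counts) for _ in range(10_000))
# 7 dominates the samples, because it dominated the (made-up) corpus.
```

A uniform random picker would spread its answers evenly; this one skews toward whatever was most common in the data, which is the article's whole point.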
No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far.
No, you are taking it too far before walking it back to get clicks.
I wrote in the headline that these models “think they’re people,” but that’s a bit misleading.
“I wrote something everyone will know is bullshit in the headline to get you to click on it, before denouncing the bullshit at the end of the article as if it was a PSA.”
I am not sure I could loathe how ‘journalists’ cover AI any more than I already do.
“because they think they are people” … hmmmmmmmmmmmmm this quote makes my neurons stop doing synapse
You guys have favorite numbers?
That was my thought. Am I not a person?
they think they’re people
That’s kinda sad if true.
They don’t think at all.
I know, I know…
He knows, he knows!
Except that they don’t think anything at all: they’re just statistics machines, as the author clarified. Clickbaity headline.
Leave me and my anthropomorphizing alone! 😭