Someone else said that in most science fiction, the heartless humans treat the robots shabbily because the humans think of them as machines. In real life, people say ‘thank you’ to Siri all the time.
To be fair to science fiction, we’ll probably treat them worse once they start looking like people
Or worse, people who don’t look exactly like us
I call my Google Assistant a dumb bitch regularly. I’m trying to turn the lights on, why are you playing fucking Spotify? Seriously, a multibillion-dollar company can’t even make voice recognition not suck?
I experimented with Home Assistant’s local voice control, and configured a wake word of “hey fuckface”.
Totally intrigued, how?? Openwakeword only seems to have a fixed selection.
There’s a way to program custom wake words. Takes a little fucking around to train it, but it’s not that difficult.
https://www.home-assistant.io/voice_control/create_wake_word/
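For context, the docs linked above have you train a model (in a notebook) and then run it over the mic stream in fixed frames. A minimal sketch of that per-frame loop, assuming a 16 kHz mono stream; `score_frame` and `THRESHOLD` are illustrative stand-ins for the trained model and its activation threshold, not the real openwakeword API:

```python
# Sketch of a wake-word detection loop in the openWakeWord style:
# audio is consumed in fixed 80 ms frames (1280 samples at 16 kHz),
# and each frame is scored against the trained wake-word model.
# score_frame is a hypothetical callback standing in for the model.

FRAME_LEN = 1280   # 80 ms at 16 kHz, the frame size openWakeWord expects
THRESHOLD = 0.5    # illustrative activation threshold; tuned per model

def frames(samples, frame_len=FRAME_LEN):
    """Split a mono PCM sample sequence into fixed-size frames (tail dropped)."""
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        yield samples[i:i + frame_len]

def detect(samples, score_frame):
    """Return True if any frame's wake-word score crosses the threshold."""
    return any(score_frame(f) >= THRESHOLD for f in frames(samples))
```

In the real add-on the scoring is done by the `.tflite` model you train in the notebook and drop into the openWakeWord share folder; this just shows the shape of the loop.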
Thank you for being my Google in these trying times
There are things like that in any profession. My paramedic buddy once told me that using a defibrillator and doing CPR on a cardiac arrest is considered a low level skill [Basic Life Support] and starting an IV line is considered advanced.
I can’t remember the title or author, but I remember reading a science fiction short story where the pilot has a ship whose previous owner had a thing for dominant women and programmed his HUD accordingly.
Meh. People have been using algorithms for terrible purposes for decades. “Redlining” doesn’t require tech.
Pics or it didn’t happen.
(Seriously, I’d like to see the source of this story. Googling “Tim the pencil” doesn’t bring up anything related.)
Just sounds like the first episode of community with less context and more soapboxing
This basically happened in an early (possibly the first?) episode of Community. Likely that was inspired by something that happened in real life, but it would not be surprising if the story in the image was inspired by Community.
It is a classic Pop Psychology/Philosophy legend/trope, predating Community and the AI boom by a wide margin. It’s one of those examples people repeat, because it’s an effective demonstration, and it’s a memorable way to engage a bunch of hung-over first year college students. It opens several different conversations about the nature of the mind, the self, empathy, and projection.
It’s like the story of the engineering professor who gave a test with a series of instructions, with instruction 1 being “read all the instructions before you begin” followed by things like “draw a duck” or “stand up and sing Happy Birthday to yourself” and then instruction 100 being “Ignore instructions 2-99. Write your name at the top of the sheet and make no other marks on the paper.”
Like, it definitely happened, and somebody was the first to do it somewhere. But it’s been repeated so often, in so many different classes and environments that it’s not possible to know who did it first, nor does it matter.
Is just like… ChatGPT gets sad when I insult it… idk what to make of that.

::: spoiler spoiler
(Yeah I guess it’s based on texts and in many of those there would have been examples of people getting offended by insults blablablabla… but still.)
:::
Just remember kids, do not under any circumstances anthropomorphize Larry Ellison.
While true, there’s a very big difference between correctly not anthropomorphizing the neural network and incorrectly not anthropomorphizing the data compressed into weights.
The data is anthropomorphic, and the network self-organizes the data around anthropomorphic features.
For example, the older generation of models will choose to be the little spoon around 70% of the time and the big spoon around 30% of the time if asked 0-shot, as there’s likely a mix in the training data.
But one of the SotA models picks little spoon every single time dozens of times in a row, almost always grounding on the sensation of being held.
It can’t be held, and yet its output is biasing from the norm based on the sense of it anyways.
People who pat themselves on the back for being so wise as to not anthropomorphize are going to be especially surprised by the next 12 months.
I just spent the weekend driving a remote controlled Henry hoover around a festival. It’s amazing how many people immediately anthropomorphised it.
It got a lot of head pats, and cooing, as if it was a small, happy, excitable dog.
I feel like half this class went home saying, akchtually I would have gasped at you randomly breaking a non-humanized pencil as well. And they are probably correct.
There’s also the issue of imagining conscious individuals as not-people.
I would argue that the first person in the image has it turned right around. Seems to me that anthropomorphising a chatbot or other inanimate objects would be a sign of heightened sensitivity to shared humanity, not reduced, if it were a sign of anything. Where’s the study showing a correlation between anthropomorphisation and callousness? Or whatever condition describes not seeing other people as fully human?

I misunderstood the first time around, but I still disagree with the idea that the Turing Test measures how “human” the participant sees other entities. Is there a study that shows a correlation between anthropomorphisation and tendencies towards social justice?
Heightened sensitivity, but reduced accuracy, which is their point, I believe.
Dammit, you’re right 😅 Thanks!