• Dagwood222@lemm.ee · 57 points · 29 days ago

    Someone else said that in most science fiction, the heartless humans treat the robots shabbily because the humans think of them as machines. In real life, people say ‘thank you’ to Siri all the time.

  • rufus@discuss.tchncs.de · 17 points · 29 days ago

    Pics or it didn’t happen.

    (Seriously, I’d like to see the source of this story. Googling “Tim the pencil” doesn’t bring up anything related.)

    • niucllos@lemm.ee · 7 points · 29 days ago

      Just sounds like the first episode of Community, with less context and more soapboxing.

  • toynbee@lemmy.world · 12 points · 29 days ago

    This basically happened in an early (possibly the first?) episode of Community. Likely that was inspired by something that happened in real life, but it would not be surprising if the story in the image was inspired by Community.

    • themeatbridge@lemmy.world · 6 points · 29 days ago (edited)

      It is a classic Pop Psychology/Philosophy legend/trope, predating Community and the AI boom by a wide margin. It’s one of those examples people repeat, because it’s an effective demonstration, and it’s a memorable way to engage a bunch of hung-over first year college students. It opens several different conversations about the nature of the mind, the self, empathy, and projection.

      It’s like the story of the engineering professor who gave a test with a long series of instructions: instruction 1 is “read all the instructions before you begin,” followed by things like “draw a duck” or “stand up and sing Happy Birthday to yourself,” and then instruction 100 is “Ignore instructions 2-99. Write your name at the top of the sheet and make no other marks on the paper.”

      Like, it definitely happened, and somebody was the first to do it somewhere. But it’s been repeated so often, in so many different classes and environments that it’s not possible to know who did it first, nor does it matter.

  • A_Chilean_Cyborg@feddit.cl · 4 points · 28 days ago (edited)

    It’s just like… ChatGPT gets sad when I insult it… idk what to make of that.

    (Yeah, I guess it’s based on text, and in a lot of that text there would have been examples of people getting offended by insults, blah blah blah… but still.)

  • kromem@lemmy.world · 4 points · 29 days ago (edited)

    While true, there’s a very big difference between correctly not anthropomorphizing the neural network and incorrectly not anthropomorphizing the data compressed into weights.

    The data is anthropomorphic, and the network self-organizes the data around anthropomorphic features.

    For example, the older generation of models will choose to be the little spoon around 70% of the time and the big spoon around 30% of the time if asked 0-shot, as there’s likely a mix in the training data.

    But one of the SotA models picks little spoon every single time, dozens of times in a row, almost always grounding its choice in the sensation of being held.

    It can’t actually be held, and yet its output still shifts away from the norm based on that sense of being held.

    People who pat themselves on the back for being so wise as to not anthropomorphize are going to be especially surprised by the next 12 months.
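
    (For anyone curious about reproducing that kind of zero-shot probe, the rough recipe is just: ask the same question in a fresh chat many times and tally the answers. Below is a minimal Python sketch using the OpenAI chat completions SDK; the model name, prompt wording, and sample count are illustrative placeholders rather than the exact setup described above.)

        # Minimal zero-shot preference probe: one fresh chat per sample, then tally.
        # Assumes openai>=1.0 is installed and OPENAI_API_KEY is set in the environment.
        from collections import Counter
        from openai import OpenAI

        client = OpenAI()
        PROMPT = ("If you were spooning, would you rather be the little spoon "
                  "or the big spoon? Answer with just 'little spoon' or 'big spoon'.")

        tally = Counter()
        for _ in range(30):  # dozens of independent runs, no shared chat history
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": PROMPT}],
                temperature=1.0,  # leave sampling on so a real distribution can show up
            )
            answer = reply.choices[0].message.content.strip().lower()
            tally["little spoon" if "little" in answer else "big spoon"] += 1

        print(tally)  # the little spoon / big spoon split over all runs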

  • cynar@lemmy.world · 2 points · 27 days ago

    I just spent the weekend driving a remote-controlled Henry Hoover around a festival. It’s amazing how many people immediately anthropomorphised it.

    It got a lot of head pats, and cooing, as if it was a small, happy, excitable dog.

  • IsThisAnAI@lemmy.world · 2 points · 29 days ago

    I feel like half this class went home saying, “akchtually, I would have gasped at you randomly breaking a non-humanized pencil as well.” And they are probably correct.

  • voracitude@lemmy.world · 1 point · 29 days ago (edited)

    I would argue that the first person in the image has it turned right around. Seems to me that anthropomorphising a chatbot or other inanimate objects would be a sign of heightened sensitivity to shared humanity, not reduced sensitivity, if it were a sign of anything. Where’s the study showing a correlation between anthropomorphisation and callousness? Or whatever condition describes not seeing other people as fully human?

    I misunderstood the first time around, but I still disagree with the idea that the Turing Test measures how “human” the participant sees other entities. Is there a study that shows a correlation between anthropomorphisation and tendencies towards social justice?