Artificial intelligence is spurring a new type of identity theft — with ordinary people finding their faces and words twisted to push often offensive products and ideas

  • paridoxical@lemmy.world
    3 months ago

    In your example at the end, why can’t the other people’s faces be blurred out before releasing the photo? Just playing devil’s advocate on that point.

    • silence7@slrpnk.netOP
      3 months ago

      Because it’s a pain to do (and was especially so in the film era), and it changes what the photo conveys in a meaningful way.

      Think of, for example, a photo like this, showing anti-civil-rights protesters in 1969:

      Blurring the faces would meaningfully obscure what was going on, and confuse people about who held what kinds of views.

      • paridoxical@lemmy.world
        3 months ago

        Historically, that is correct. However, the technology to automate this is now extremely accessible at low or no cost. And since there was no widespread threat of misuse via AI in the past, I understand why it wasn’t needed then. Going forward, though, I think it’s something we need to think about.

        Today, the same photo you presented could be misused with AI to meaningfully obscure what is going on and confuse people about who held what kinds of views. So there’s a double-edged sword here.

        Just to be clear, I do believe in the right to photograph anyone and anything in public, at least in the United States and any other countries that respect that freedom. I’m just trying to point out that the issue is complicated.