Artificial intelligence is spurring a new type of identity theft — with ordinary people finding their faces and words twisted to push often offensive products and ideas

    • PhlubbaDubba@lemm.ee · 3 months ago

      To a certain extent that’s just impossible, since the degree of enforcement you’d need would inevitably lead to someone genuinely talking about a product they prefer getting penalized. Think of the episode of South Park where one girl turns out to be a living advert.

      There have to be serious limits on advertising bandwidth: first, ban targeted advertising; second, give national, state-level, and local adverts equal air time; and third, cut the amount of broadcast time and page space that can be devoted to advertising.

  • Jesus@lemmy.world · 3 months ago

    Google has decided to build a platform where advertisers are minimally vetted. They’re intentionally taking on the risk and should be liable.

    If you decide to increase attendance in your club by getting rid of the bouncer, expect the fire marshal and cops to issue fines when your place is overcrowded and full of minors.

    • abhibeckert@lemmy.world · 3 months ago

      This. The lack of vetting sucks and it goes both ways. Sometimes the algorithm incorrectly flags perfectly legitimate content as fraudulent with no way to recover from that.

    • elshandra@lemmy.world · 3 months ago

      Which is interesting in itself: what if an AI, by chance, produces a likeness of you unintentionally? Is there an AI with a database of all of us that could even know that? I’m sure they’re trying, for whatever reason.

      Now, if you’re someone famous, like a pop star or president, chances are there are a lot more images of you in those databases, which could also skew the resulting images.

      So I guess what we really need is some way to trust the image; otherwise … I really don’t know how this can be avoided. Maybe a smarter entity does.

    • silence7@slrpnk.netOP · 3 months ago

      In the US, kinda sorta.

      Advertisers are liable if they use your likeness to promote a product, imply endorsement, or otherwise make commercial use of it without your consent. This gives you the right to sue, which is worth absolutely nothing when you’re dealing with a shady overseas shell company hawking fake Viagra.

      News organizations, artists, and random private individuals can publish a photo or other image of you taken in a place where you do not have a reasonable expectation of privacy, without having to contact you or obtain your consent. This is important: think of trying to share a photograph of a public event and having to track down everyone in the background, or of trying to create public awareness when you photograph a politician committing a crime.

      • paridoxical@lemmy.world · 3 months ago

        In your example at the end, why can’t the other people’s faces be blurred out before releasing the photo? Just playing devil’s advocate on that point.

        • silence7@slrpnk.netOP · 3 months ago

          Because it’s a pain to do (and was especially so in the film era), and it changes what the photo conveys in a meaningful way.

          Think of for example a photo like this, showing anti-civil-rights protesters in 1969:

          Blurring the faces would meaningfully obscure what was going on, and confuse people about who held what kinds of views.

          • paridoxical@lemmy.world · 3 months ago

            Historically, that is correct. However, the technology to automate this is now extremely accessible and low- or no-cost. Also, there was no widespread threat of misuse via AI in the past, so I get that there was no need then. Going forward, I think it’s something we need to think about.
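            To illustrate how little machinery the automated part needs: below is a minimal, pure-Python sketch of blurring a rectangular region of a grayscale image. In a real pipeline the bounding boxes would come from a face detector (e.g. OpenCV’s Haar cascades or a DNN model); here the box is supplied by hand and the “image” is just a grid of numbers, so the example is self-contained.

            ```python
            def blur_region(img, box, radius=1):
                """Return a copy of img with a box blur applied inside box.

                img: 2D list of grayscale values (rows of ints 0-255)
                box: (top, left, bottom, right) -- inclusive bounds of the region
                radius: neighborhood half-width for the mean filter
                """
                top, left, bottom, right = box
                h, w = len(img), len(img[0])
                out = [row[:] for row in img]  # copy so pixels outside the box stay intact
                for y in range(top, bottom + 1):
                    for x in range(left, right + 1):
                        # Average each pixel's neighborhood, clamped to the image edges.
                        vals = [
                            img[yy][xx]
                            for yy in range(max(0, y - radius), min(h, y + radius + 1))
                            for xx in range(max(0, x - radius), min(w, x + radius + 1))
                        ]
                        out[y][x] = sum(vals) // len(vals)
                return out

            # Tiny demo: a sharp bright square on a dark background...
            img = [[0] * 6 for _ in range(6)]
            for y in range(2, 4):
                for x in range(2, 4):
                    img[y][x] = 255
            blurred = blur_region(img, (1, 1, 4, 4))
            # ...is smeared into its surroundings after blurring.
            print(blurred[2][2] < 255, blurred[1][1] > 0)  # prints: True True
            ```

            A real tool would swap the hand-written mean filter for a library blur (Gaussian or pixelation) and loop over detector-supplied boxes, but the core operation is exactly this simple.
            
            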

            Today, the same photo you presented could be misused with AI to meaningfully obscure what is going on and confuse people about who held what kind of views. So there’s a double-edged sword here.

            Just to be clear, I do believe in the right to photograph anyone and anything in public, at least in the United States and any other countries that respect that freedom. I’m just trying to point out that the issue is complicated.

  • NeoNachtwaechter@lemmy.world · 3 months ago

    No federal deepfake law exists

    Does it need a deepfake law??

    “Federal” tells us that this has happened in the US and A.

    So, don’t you US Americans have any basic human rights that make this illegal right from the start?

    • silence7@slrpnk.netOP · 3 months ago

      It’s considered a civil dispute. You can sue those using your face in an ad for monetary damages, which in practice means you’re trying to sue an overseas shell corporation with no assets, so you can’t recover anything, and no lawyer will take the case.