Today, the prominent child safety organization Thorn, in partnership with Hive, a cloud-based AI solutions provider, announced the release of an AI model designed to flag unknown CSAM at upload. It is the first AI technology aimed at surfacing unreported CSAM at scale.

  • floofloof@lemmy.ca · 4 days ago

    This actually seems like a potentially good use of AI. Can’t have been much fun to train it, though.

    And is there any risk of people turning these kinds of models around and using them to generate images?

    • Jimbabwe@lemmy.world · 4 days ago

      If AI were reliable, maybe. MAYBE. But guess what? It turns out that “advanced autocomplete” does a shitty job of most things, and I bet false positives will be numerous.

      • AwesomeLowlander@sh.itjust.works · 4 days ago

        “detect new or previously unreported CSAM and child sexual exploitation behavior (CSE), generating a risk score to make human decisions easier and faster.”

        False positives don’t matter if they stick to the stated intended purpose of making it easier to detect CSAM manually.
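        To illustrate that workflow (a minimal sketch; the threshold, score range, and function names here are hypothetical, not Thorn’s or Hive’s actual API), a risk score would only prioritize uploads in a human review queue rather than removing anything automatically:

        ```python
        import heapq

        def triage(uploads, classify, review_threshold=0.7):
            """Queue uploads whose model risk score crosses a threshold for human review.

            `classify` is assumed to return a risk score in [0, 1]; nothing is removed
            automatically -- moderators review the highest-scoring items first.
            """
            queue = []
            for upload_id, content in uploads:
                score = classify(content)
                if score >= review_threshold:
                    # Negate the score so the highest-risk item pops from the min-heap first.
                    heapq.heappush(queue, (-score, upload_id))
            return [heapq.heappop(queue)[1] for _ in range(len(queue))]
        ```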

        • Voroxpete@sh.itjust.works · 4 days ago (edited)

          The problem is that they won’t.

          Yes, AI tools, in the hands of skilled people, can be very helpful.

          But “AI” in capitalism doesn’t mean “more effective workers”, it means “fewer workers.” The issue isn’t technological so much as cultural. You fundamentally cannot convince an MBA not to try to automate away jobs.

          (It’s not even a money thing; it’s about getting rid of all those pesky “workers’ rights” that we workers like to bring with us.)