For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse materials (CSAM) to stop kids from being retraumatized online. However, rapidly detecting new or unknown CSAM has remained a bigger challenge for platforms, even as abuse of new victims continued. Now, AI may be ready to change that.
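Hash matching works by comparing a fingerprint of each upload against fingerprints of already-verified material. As a rough illustration only (not any vendor's actual implementation), here is a minimal Python sketch using an exact cryptographic hash; production systems instead use perceptual hashes, such as Microsoft's PhotoDNA, so that resizing or re-encoding doesn't break the match:

```python
import hashlib

# Hypothetical hash list: real deployments query shared industry
# databases of hashes of verified material rather than a local set.
KNOWN_HASHES: set[str] = set()

def sha256_of_file(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_match(path: str) -> bool:
    """Exact-match lookup: a single changed byte defeats it, which is
    why real systems use perceptual rather than cryptographic hashes."""
    return sha256_of_file(path) in KNOWN_HASHES
```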

Today, the child safety organization Thorn, in partnership with the cloud-based AI solutions provider Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It's among the first uses of AI aimed at exposing unreported CSAM at scale.

An expansion of Thorn’s CSAM detection tool, Safer, the new “Predict” feature uses “advanced machine learning (ML) classification models” to “detect new or previously unreported CSAM and child sexual exploitation behavior (CSE), generating a risk score to make human decisions easier and faster.”

The model was trained in part using data from the National Center for Missing and Exploited Children (NCMEC) CyberTipline, relying on real CSAM data to detect patterns in harmful images and videos. Once suspected CSAM is flagged, a human reviewer remains in the loop to ensure oversight. It could potentially be used to probe suspected CSAM rings proliferating online.
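Neither the Predict model nor its scoring interface is public, so the following is only a schematic Python sketch of the pipeline as described, with an assumed threshold and invented names: the classifier emits a risk score, and anything at or above the threshold is queued for a human reviewer rather than acted on automatically.

```python
from dataclasses import dataclass

# Illustrative threshold; a real deployment would tune this against
# its precision/recall requirements.
REVIEW_THRESHOLD = 0.7

@dataclass
class TriageDecision:
    risk_score: float
    needs_human_review: bool

def triage(risk_score: float) -> TriageDecision:
    """Route a classifier's risk score: scores at or above the threshold
    are queued for a human reviewer instead of triggering automatic
    action; this is the 'human in the loop' described above."""
    return TriageDecision(risk_score, risk_score >= REVIEW_THRESHOLD)

# Example with a hypothetical score from the classifier:
print(triage(0.92))  # TriageDecision(risk_score=0.92, needs_human_review=True)
```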

  • ben@lemmy.zip · 4 days ago

    This sounds like a bad idea; there are already cases of people getting flagged for CSAM after sending photos of their children to doctors.

  • FourPacketsOfPeanuts@lemmy.world · 4 days ago (edited)

    This seems like a lot of risky effort for something that would be defeated by even rudimentary encryption before sending?

    Mind you, if there were people insane enough to be sharing CSAM “in the clear”, then it would be better to catch them than not. I just suspect most of what’s going to be flagged by this will be kids making inappropriate images of their classmates.

  • fl42v@lemmy.ml · 4 days ago

    It could be a very useful tool, indeed, but I wouldn’t trust dipshits who use “proprietary” as if it’s something to be proud of. If they really wanted to “protect the children”, they should’ve at least released the weights, IMO (given that releasing the training data is illegal as fuck).