• WatDabney@sopuli.xyz · 17 points · 1 month ago

    Probably an odd take, but this is actually something I sort of like about this timeline.

    I keep getting this amusing visual image of actual people tiptoeing away, giggling and shushing each other, as somewhere in the background, the site they used to be on is nothing but corporations showing ads to bots posting to bots.

  • Etterra@lemmy.world · 14 points · 1 month ago

    Yeah it’s a room full of bots talking to bots while other bots try to scam bots into paying other bots. It’s recursive botshit.

    • pivot_root@lemmy.world · 9 points · 1 month ago

      Don’t forget that it’s also being used to train more bots. Lotsa bot inbreeding—or inbotting, if you will.

  • Lvxferre@mander.xyz · 10 points · 1 month ago

    Theory of Reddit in 2024 is becoming surreal to watch:

    The bot problem is not even in its final form. It’s probably way worse already, as they aren’t detecting all the bot content, only the blatantly obvious pieces. And eventually the mods are going to say “you know what… fuck it, too much effort” and simply leave the bots alone, leading to a further increase in bot activity, in a vicious circle.

  • TheDannysaur@lemmy.world · 7 points · 1 month ago

    This post is both insightful and troubling. Using generative AI services to simulate conversations without explicit disclosure can be seen as unethical. Some might argue that this damages the connection that users can feel towards each other, even in an online community. Such matters should be addressed in order to restore consumer trust in the platform.

    (I wrote that to sound like a GenAI response, how did I do?)

  • Funkwonker@lemmy.world · 3 points · 1 month ago

    I’m rather certain that a number of the ‘top level’ subreddits are not only aware of and OK with the bots on their subs, but are intentionally keeping them around to boost activity.

    I kept an account around for a good while after the API changes, until it was permabanned for “report abuse.” I only reported bots that were stealing comments word for word.

  • I_Has_A_Hat@lemmy.world · 3 points · edited · 1 month ago

    Even before the API change, reposts were becoming rampant. For a while, this wasn’t too bad, as new people got to see and enjoy them. The real issue started when the top-level comments, and the replies to those comments, all started to be word-for-word identical to the comments from the last time the post went up. A copy of a copy of a copy.

    I am convinced that most of the big subreddits no longer have any real human engagement in the top level comments. It’s all just bots talking to each other.