ChatGPT has meltdown and starts sending alarming messages to users: AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

  • snooggums@midwest.social · 4 months ago

    To be honest this is the kind of outcome I expected.

    Garbage in, garbage out. Making the system more complex doesn’t solve that problem.

    • Ekky@sopuli.xyz · 4 months ago

      It appears that, with the increase in popularity of machine learning, the percentage of people who properly source and sanitize their training data has steeply decreased.

      As you stated, an MLAI can only be as good as the data it was trained on, and is usually far worse. The popularity and application of MLAIs built with questionable practices scare me; then again, at least their fuckups will keep me employed, and likely busier than ever.

      • Paragone@lemmy.world · 4 months ago

        LLMs are not “machine learning”; they are neural networks.

        Different category.

        ML is small potatoes, to the best of my knowledge.

        Decision-tree stuff.
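        The “decision-tree stuff” the comment waves at can be sketched at its smallest: a one-split decision stump fit by exhaustive threshold search. Everything here (function names, data) is illustrative, not anything from the thread.

```python
# A minimal sketch of classic decision-tree-style ML: a single
# decision stump (a one-split tree) fit by exhaustive threshold
# search over a 1-D feature. Data and names are made up.

def fit_stump(xs, ys):
    """Find the threshold minimizing misclassifications for the
    rule: predict 1 when x >= threshold, else 0."""
    best = None  # (threshold, error_count)
    for t in sorted(set(xs)):
        errors = sum((x >= t) != bool(y) for x, y in zip(xs, ys))
        if best is None or errors < best[1]:
            best = (t, errors)
    return best[0]

xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = [0, 0, 0, 1, 1, 1]
threshold = fit_stump(xs, ys)
print(threshold)  # → 10.0, a clean split between the two clusters
```

        A real decision tree just applies this split search recursively to each resulting partition.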

        Neural nets are black boxes, trained by back-propagation: layer by layer, training instance by training instance, the network is nudged closer to the intended result.
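        That back-propagation loop can be made concrete with a toy sketch: a tiny sigmoid network trained on XOR, one training instance at a time, with the output error pushed back to the hidden layer. The network size, task, and learning rate are all illustrative assumptions, not anything from the thread.

```python
# A minimal back-propagation sketch: a 2-3-1 sigmoid network trained
# on XOR by per-instance gradient descent. All sizes and
# hyperparameters here are illustrative.
import math
import random

random.seed(0)

N_IN, N_HID = 2, 3
w_h = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
b_h = [0.0] * N_HID
w_o = [random.uniform(-1, 1) for _ in range(N_HID)]
b_o = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Forward pass, layer by layer
    h = [sigmoid(sum(w_h[j][i] * x[i] for i in range(N_IN)) + b_h[j])
         for j in range(N_HID)]
    y = sigmoid(sum(w_o[j] * h[j] for j in range(N_HID)) + b_o)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
for epoch in range(5000):
    for x, t in data:  # one training instance at a time
        h, y = forward(x)
        # Output-layer error, then propagated back to the hidden layer
        d_y = (y - t) * y * (1 - y)
        d_h = [d_y * w_o[j] * h[j] * (1 - h[j]) for j in range(N_HID)]
        # Gradient-descent updates for this single instance
        for j in range(N_HID):
            w_o[j] -= lr * d_y * h[j]
            b_h[j] -= lr * d_h[j]
            for i in range(N_IN):
                w_h[j][i] -= lr * d_h[j] * x[i]
        b_o -= lr * d_y
loss_after = total_loss()
print(loss_before, loss_after)  # loss shrinks as training proceeds
```

        The “black box” point stands: after training, the learned weights solve XOR without any human-readable rule inside.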

        ML is what one does on one’s own machine with some Python libraries; ChatGPT (3, 3.5, or 4, I don’t know which) cost something like $100,000,000 just to rent the machines required to combine the training data and the model (I’m assuming about $20/hr per machine, so an OCEAN of machines).
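        Taking the comment’s own figures at face value, the arithmetic behind “an OCEAN of machines” works out as follows (the 90-day training window is an added assumption for illustration):

```python
# Back-of-the-envelope check of the commenter's numbers:
# $100M total rental cost at an assumed $20 per machine-hour.
total_cost = 100_000_000       # dollars (commenter's estimate)
hourly_rate = 20               # dollars per machine-hour (commenter's guess)

machine_hours = total_cost / hourly_rate
print(machine_hours)           # → 5,000,000 machine-hours

# Assumed, for illustration only: a 90-day round-the-clock training run
machines = machine_hours / (90 * 24)
print(round(machines))         # → 2315 machines running in parallel
```

        The point survives the arithmetic: even under generous assumptions, this is thousands of machines, far beyond “some Python libraries on one’s own machine”.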

        _ /\ _

    • thehatfox@lemmy.world · 4 months ago

      The development of LLMs is possibly becoming self-defeating, because the training data is being filled not just with human garbage but also with AI garbage from previous, cruder LLMs.

      We may well end up with a machine learning equivalent of Kessler syndrome, with our pool of available knowledge eventually becoming too full of junk to progress.

      • Paragone@lemmy.world · 4 months ago

        Damn.

        Thank you VERY much for that insight: AI’s version of Kessler syndrome.

        EXACTLY.

        Damn, damn, damn, that gets the truth right in its marrow.

        _ /\ _