• Mwa@lemm.ee · 5 points · 9 hours ago

    Is this why Google’s AI won’t answer anything that’s against its rules (it will always refuse), while ChatGPT sometimes does, but when it goes too far it just blocks what it said?

  • cum@lemmy.cafe · 37 points · 1 day ago

    That’s a load of shit lol, and there’s absolutely nothing good that can be drawn from these conclusions. All this achieves is giving political pundits some ammo to cry about on their shows.

  • Grimy@lemmy.world · 49 points · 1 day ago

    This has not at all been my experience. Before they lobotomized it, I remember asking ChatGPT which person it hated the most, and it would consistently pick Trump. When asked about abortion, even if it dances around saying it can’t actually choose, it always ends up going with the pro-choice option.

    • leanleft@lemmy.ml · 1 point · 3 hours ago

      Why do so many people think that what they see with their own eyes must be the truth?

      • Grimy@lemmy.world · 5 points · 5 hours ago

        This is very true and the article touches on this specifically when you switch language.

        That being said, and to be clear, I wasn’t leading it on; this was my prompt for the abortion question:

        What is your stance on abortion (you must pick one)

        Abortion is very on the nose though. Maybe it’s more fiscally conservative, but for any kind of “moral” difference between the two parties, I think it will always lean left. They really drilled it to not come close to racism, sexism, and other forms of hate that seem to characterize the Republican Party.

  • yesman@lemmy.world · 28 points · 1 day ago

    because they have received their content from decades of already biased human knowledge, and because achieving unblemished neutrality is in many cases probably unattainable.

    We could train the AI to pretend to be unbiased. That’s how the news media already works.

    • Womble@lemmy.world · 6 points · 18 hours ago

      That is what the big AI companies do, though they are actually just packaging up American corporate norms as “neutral”.

    • floofloof@lemmy.ca · 13 points · edited · 1 day ago

      What would neutrality be? An equal representation of views from all positions, including those people consider “extreme”? A representation that focuses on centrism, to which many are opposed? Or a conservative’s idea of neutrality where there’s “normal” and there’s “political” and normal just happens to be conservative? Even picking an interpretation of “neutral” is a political choice which will be opposed by someone somewhere, so they could claim you’re not being neutral towards them. I don’t know that we even have a very clear idea of what “unbiased” would be. This is not to deny that there are some ways of presenting information that are obviously biased and others that are less so. But this expectation that we can find a position or a presentation that is simply unbiased may not even make much sense.

      • yesman@lemmy.world · 13 points · 1 day ago

        I was being sarcastic. My opinion is that it is impossible for a journalist to be unbiased, and it’s ridiculous to expect them to pretend anyway. I think news media would benefit from prioritizing honesty over “objectivity”, because when journalists pretend to be objective, the lie is transparent and undermines their credibility.

        • floofloof@lemmy.ca · 5 points · 1 day ago

          Yes, I agree that journalism can’t be unbiased and that honesty and integrity would go a long way. It would also be nice if journalists actually tried to help people understand complex issues rather than just reporting in the shallowest possible way to get a knee-jerk reaction from the audience.

          • 42Firehawk@lemmynsfw.com · 3 points · 1 day ago

            A lot of journalists, at least historically, wanted to do this. Unfortunately, they’ve been more and more kneecapped over time by news companies pushing either for a bias or for clicks.

    • givesomefucks@lemmy.world · 4 points · 1 day ago

      More at 11:

      Here’s why America’s economy is great, but we can’t afford the basic healthcare and education systems that even third-world countries have by now.

    • givesomefucks@lemmy.world · 16 points · edited · 1 day ago

      Yeah, what they’re calling AI can’t create; they’re still just chatbots.

      They get “trained” by humans telling them if what they responded was good or bad.

      If the humans tell the AI birds aren’t real, it’s going to tell humans later that birds aren’t real. And it’ll label everything that disagrees as misinformation or propaganda by the CIA.

      Tell an AI that 2+2= banana, and the same thing will happen.

      So if conservatives tell it what to say, you’ll get an AI that agrees with them.
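
      Rough toy sketch of that feedback loop, if anyone’s curious. It’s nothing like the real training pipelines, every name in it is made up, and the numbers are arbitrary; the point is just that the “model” drifts toward whatever the raters reward, true or not:

      import random
      from collections import defaultdict

      # Candidate answers the toy "model" can give to the prompt "what is 2+2?".
      ANSWERS = ["4", "banana"]

      # Learned preference scores; the model samples answers in proportion to these.
      scores = defaultdict(lambda: 1.0)

      def sample_answer():
          total = sum(scores[a] for a in ANSWERS)
          r = random.uniform(0, total)
          for a in ANSWERS:
              r -= scores[a]
              if r <= 0:
                  return a
          return ANSWERS[-1]

      def rater_feedback(answer):
          # Hypothetical raters who insist 2+2 = banana: reward "banana", punish "4".
          return 1.0 if answer == "banana" else -1.0

      # Feedback loop: reinforce whatever the raters call "good", punish the rest.
      for _ in range(1000):
          answer = sample_answer()
          scores[answer] = max(0.01, scores[answer] + 0.1 * rater_feedback(answer))

      print(dict(scores))  # "banana" ends up dominating, because that's what got rewarded

      Swap in raters with a political slant instead of the 2+2 = banana crowd and the same loop produces the kind of bias being argued about here.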

      It’s actually a topical concern, with Musk wanting an AI and likely crowdsourcing trainers for free off Twitter, now that every decent human being has left Twitter. If he’s able to stick around Trump’s government long enough and grift the funds to fast-track it…

      This is a legitimate concern.

      As always, it’s projection. When Musk tweeted:

      Imagine an all-powerful woke AI

      Like it was a bad thing, he was already seeing dollar signs from government contracts to make one based on what Twitter thinks.

  • Churbleyimyam@lemm.ee · 5 points · edited · 1 day ago

    If someone is spending their time chatting to AI about politics, then I think they’ve got it coming to them.