- cross-posted to:
- news_summary
ChatGPT is seeing a rightward shift on the political spectrum in how it responds to user queries, a new study has found.
No shit. AI is just an attempt to inject computers with the essence of right-wing, post-truth, might-makes-right lazy mob violence and rule. Lol, did people NOT see this coming?
LLMs are increasingly being trained on other LLM output at this point, because the internet is flooded with it. So if people are running tons of bot farms pumping out right-wing bullshit, the models are going to get trained on more and more of that.
LLMs were always doomed to train themselves on the slop and that’s sort of why they’re unsalvageable.
Because conservatives came out of the woodwork after Trump was elected? Now their voices are becoming more mainstream, not so drowned out? I'm sure fascism would have taken a similar leap in LLM output after Hitler was elected.
the observed shifts were not directly linked to changes in training datasets
Can anyone break that down for me? The same sources are being used, but the data in those sources changed?
ChatGPT and any AI model will trend however its training data trends. If right-wing content trends upward online, it will influence model output.
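Purely as an illustration of that mechanism (nothing from the study itself): here's a minimal toy sketch where a trivial unigram "language model" is trained on a synthetic corpus, and its output probabilities drift as the share of one kind of content in the training mix grows. The phrase lists, marker words, and mix fractions are all invented for the example.

```python
# Toy sketch: a model's output tracks the mix of its training data.
# Everything below (phrases, splits) is made up purely for illustration.
import random
from collections import Counter

LEFT = ["tax the rich", "fund public transit", "expand healthcare"]
RIGHT = ["cut all taxes", "close the borders", "deregulate everything"]

def build_corpus(right_share, n_docs=10_000, seed=0):
    """Sample documents, with a given fraction drawn from the RIGHT pool."""
    rng = random.Random(seed)
    return [
        rng.choice(RIGHT) if rng.random() < right_share else rng.choice(LEFT)
        for _ in range(n_docs)
    ]

def train_unigram(corpus):
    """'Training' here is just counting words - the crudest possible model."""
    counts = Counter(word for doc in corpus for word in doc.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# As the right-leaning share of the corpus rises, so does the probability
# mass the model assigns to words from that pool.
for share in (0.3, 0.5, 0.7):
    model = train_unigram(build_corpus(right_share=share))
    print(f"right share {share:.0%}: "
          f"P('borders')={model.get('borders', 0):.4f}  "
          f"P('healthcare')={model.get('healthcare', 0):.4f}")
```

A real LLM is obviously not a word counter, but the direction of the effect is the same: shift the distribution of what goes in, and what comes out shifts with it.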
Does anyone know of any metrics for measuring a model's political bias that aren't just the political compass?
Oh and would you just look at what’s happening to its input data, on those corporate social media sites… Imagine that, huh.
(In reality, peer review timelines being what they are, this trend is probably a year or two old.)
I wonder if hate mills like the IRA have much to do with it, intentionally.
For my own edification, what is this IRA?
The Internet Research Agency, an active Russian troll farm that works on spreading hate and division on the internet.
Thank you
Come out, ye Black and Tans
Come out and fight me like a man
Well, this is lovely. As they say, you are what you eat. If you eat monsters (or the writing of monsters)…