Will Manidis is the CEO of AI-driven healthcare startup ScienceIO
Lemmy is not safe either.
There isn’t as much incentive. No advertising. Upvote counters behave weirdly in the fediverse (from what I can see).
I politely disagree.
There are no virtual points to earn on Lemmy. So hopefully it will resist the enshittification for a while.
Shiri’s Scissor was supposed to be a cautionary tale…
Most people who have worked in customer service would believe every word because they have seen the absurdity of real people.
In the age of A/B testing and automated engagement, I have to wonder who is really getting played: the people reading the synthetically generated bullshit, or the people who think they’re “getting engagement” on a website full of bots and other automated forms of engagement cultivation.
How much of the content creator experience is itself gamed by the website to trick creators into thinking they’re more talented, popular, and well-received than a human audience would allow and should therefore keep churning out new shit for consumption?
It’s ultimately about ad money. They haven’t cared whether it’s humans or bots either. They keep paying out either way. This goes back long before the LLM era. It’s bizarre.
It’s pretty much a case of POSIWID: the purpose of a system is what it does. The system is supposed to produce genuine human engagement. What the system does is artificial at every step. Turns out its purpose is to fabricate things for bots to engage with. And this is all propped up by people who for some reason pay to keep the system running.
(Already said this before, but let me reiterate:)
Typical AITA post:
Title: AITAH for calling out my [Friend/Husband/Wife/Mom/Dad/Son/Daughter/X-In-Law] after [He/She] did [Undeniably something outrageous that anyone with an IQ above 80 should know it’s unacceptable to do]?
Body of post:
[5-15 paragraphs of infodumping that no sane person would read]
I told my friend this and they said I’m an asshole. AITAH?
Comments:
Comment 1: NTA, you are absolutely right, you should [Divorce/Go No-Contact/Disown/Unfriend] the person IMMEDIATELY. Don’t walk away, RUNNN!!!
Comment 2: NTA, call the police! That’s totally unacceptable!
And sometimes you get someone calling out OP… Comment 3: Wait, didn’t OP also claim to be [Totally different age and gender and race] a few months ago? Here’s the post: [Link]
🙄 C’mon, who even thinks any of this is real…
Man, sometimes when I finish grabbing something I needed from Reddit, I hit the frontpage (always logged out) just out of morbid curiosity.
Every single time, that r/AmIOverreacting sub is there with the most obvious “no, you’re not” situation ever. I never once saw that sub show up before the exodus. AI or not, I refuse to believe any frontpage posts from that sub are anything other than made-up bullshit.
If it’s well-written enough to be entertaining, it doesn’t even matter whether it’s real or not. Something like it almost certainly happened to someone at some point.
Needs to feature both a wedding and a pregnancy and you’ve nailed it
insert plot from an episode of Friends
AITAH?
I feel like we’re collectively writing the custom instructions for this bot.
Way too many…
I was born before the Internet. The Internet is always lumped into the “entertainment” part of my brain. A lot of people that have grown up knowing only the Internet think the Internet is much more “real”. It’s a problem.
I’ve come up with a system to categorize reality in different ways:
Category 1: Thoughts inside my brain formed by logic
Category 2: Things I can directly observe via vision, hearing, or other direct sensory input
Category 3: Other people’s words, stories, and anecdotes in face-to-face conversations IRL
Category 4: Accredited News Media, Television, Newspapers, Radio (Including Amateur Radio Conversations), Telegrams, etc…
Category 5: The General Internet
The higher the category number, the more distant that information is, and the more suspicious of it I am.
I mean, if a user on Reddit (or any internet forum or social media for that matter) told me X is a valid treatment for Y disease without real evidence, I’m gonna laugh in their face (well, not their face, since it’s a forum, but you get the idea).
I would recommend switching categories one and two. Sometimes our thoughts are fucked.
Vision is processed in our brains
So here’s the thing:
I sometimes thought I saw a ghost moving in a dark corner of my eye.
I didn’t see a ghost.
But later I walked through the same place again and saw the same thing. Since I already held the belief that ghosts don’t exist, I investigated. It turned out to be a lamp (that was off) casting a shadow from another light source: when I happened to walk through the area, the shadow moved, and combined with my head turning, it made it appear like a ghost was there. It was just a difference in lighting, a shadow. Not a ghost. I bet a lot of “ghosts” are just lighting interpreted wrong, not actual ghosts.
Having your thoughts/logic prioritized is important for finding the truth, instead of just believing the first thing you interpret, like a vision of a “ghost”.
You know what, that’s entirely fair.
ESH
Look at that, the detection heuristics all laid out nice and neatly. The only issue is that Reddit doesn’t want to detect bots because they are likely using them. Reddit at one point was using a form of bot protection but it wasn’t for posts; instead, it was for ad fraud.
Oh boy, identity mechanics to stamp out the last vestiges of privacy.
let me scan your eyeballs. it’s the only way
I wonder where people in the future will get their information from. What trustworthy sources of information are there? If the internet is overrun with bots, then you can’t really trust anything you read there, as it could all be propaganda. What else to do, though, to get your news?
That’s the killer app right there: the complete inability for the common person to distinguish between true and false. That’s what they’re going for.
Also, it doesn’t fix the problem at all; I can still just use AI to post to my main account.
They’re pretty much declaring a war on VPNs also
Yep. More than half the time I can’t access Reddit through Proton VPN.
Yeah, a real fix would probably be to remove the incentive for someone to do this.
It would probably be far less likely for someone to do that on Lemmy, as there is no karma and you don’t get paid for upvotes or anything. (Still, there are incentives, like building credibility, celebrity accounts, maybe influencing public opinion, the self-pleasure of seeing upvotes on “your” posts/comments, etc., but they aren’t as potent as direct monetary incentives.)
you could try to cook up some kind of trust chain, without totally abandoning privacy.
Get government-certified agencies minting a master key tied to your ID. You only get one, with a trust rating tied to it.
With that master key you can generate an unlimited number of sub-IDs that don’t identify you but show your trust rating (fuzzed).
Have a cross-network reporting system that can lower that rating for abuses like botting.
idk, I’m just spitballing. A rough sketch of what I mean is below.
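Purely to illustrate the shape of the idea, here’s a minimal Python sketch. Everything in it is invented for the example (the HMAC-based sub-ID derivation, the fuzz range, the penalty value); it’s not any real scheme.

```python
# Hypothetical sketch of the trust-chain idea above. Nothing here is a real
# system; the derivation, fuzzing, and penalty numbers are all made up.
import hashlib
import hmac
import os
import random


class MasterIdentity:
    """One per person, minted by a certifying agency and tied to a real ID."""

    def __init__(self):
        self.secret = os.urandom(32)  # issued once, never shared with sites
        self.trust = 1.0              # underlying trust rating, 0.0-1.0

    def sub_id(self, site: str) -> str:
        """Per-site pseudonym: stable for that site, unlinkable across sites."""
        return hmac.new(self.secret, site.encode(), hashlib.sha256).hexdigest()[:16]

    def fuzzed_trust(self) -> float:
        """What a site sees: the rating plus noise, so the exact value
        can't be used to link your sub-IDs together."""
        noisy = self.trust + random.uniform(-0.05, 0.05)
        return round(min(1.0, max(0.0, noisy)), 2)

    def report_abuse(self, penalty: float = 0.2) -> None:
        """Cross-network abuse reports knock the underlying rating down."""
        self.trust = max(0.0, self.trust - penalty)


me = MasterIdentity()
print(me.sub_id("lemmy.world"), me.fuzzed_trust())     # pseudonym A
print(me.sub_id("example.social"), me.fuzzed_trust())  # unrelated pseudonym B
me.report_abuse()            # someone reports botting under pseudonym B
print(me.fuzzed_trust())     # visible trust drops for every pseudonym
```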
I dunno, part of me is ok with it. It’s clear to me how bad things are going to get. So having certain platforms or spaces with some level of public identity validation seems like it might be ok…
Well, it’s a great method to find people to target for political speech.
Especially when it’s about gathering real information. When everything you read is written by an anonymous author, you have no chance of knowing whether it’s true or false, unless it’s a paper on theoretical maths, of course.
It’s stupidly easy to make up stuff on AITA and get upvotes/comments. I made one up just for fun and was surprised at how popular it got. Well, I’m not so surprised now, but I was back when I did it.
If you know the audience and what gets them upset, you’ve got easy karma farming.
Two weeks ago, someone on one of those story subs, I think it was amioverreacting, was milking karma by posting updates. They made 5 posts about the whole thing and even started to sell merch to profit in real life, until they took the last post down.
Maybe they’re using the subreddit to try to train morality into the model?
Is Reddit still feeding Google’s LLM, or was it just a one-time thing? Meaning, will the newest LLM-generated posts feed LLMs to generate posts?
The truly valuable data is the stuff that was created prior to LLMs; anything after that is tainted by slop. Any verifiably human data would be worth more, which is why they are simultaneously trying to erode any and all privacy.
I’m not sure about that. It implies that only humans are able to produce high-quality output. But that seems wrong to me.
- First of all, not everything that humans produce is high quality; rather the opposite.
- Second, with the development of AI, I think it will be entirely possible for AI to generate good-quality output in the future.
Microsoft’s PHI-4 is primarily trained on synthetic data (generated by other AIs). It’s not a future thing; it’s been happening for years.
These days the LLMs feed the LLMs, so you’re training models on models unless you exclude all public data from the last decade. You have to assume all public user-generated data is tainted when used for training.
Why not? r/AmITheAsshole is about entertainment, not truth. It would be an indictment of AI if it couldn’t replicate a short, funny story.