I was just wondering about a general privacy goal: having an LLM bot flood the zone with random data to confound advertising models, simulating clicks and likes/engagement across the spectrum just to wreck any meaningful data correlations.
If you aimed this concept at two specific goals, i.e., costing the Trump campaign money and screwing with their data, things could get really interesting. Like an open-source bot that would coordinate bizarre trends across large cohorts of users to convince the data miners that, for example, a disproportionate number of voters in key regions are demographically or behaviorally skewed.
I like the idea, but I’d worry about getting sued for fraud. Though it’s not likely that would be a top issue what with his trying to stay out of prison.
I’m not a lawyer, but I’m not sure how liable you’d be. People run bots all the time. Plus, this is all about numbers: you can’t sue thousands of people like that.
The major networks can distinguish bots from people.