Hurr hurr, I’m gonna plot f(x,y) = x² + y³ where y = x as x → ∞. Checkmate science!
Edit: the graph isn’t actually linear, man, and here I just thought it’d be that easy. :(
“Knowledge is never useless”
Going on a tangent here: While I fully agree with the above, there is an amount of knowledge after which fact checking becomes bothersome, and some people just skip fact checking overall. One could argue that, while knowledge is never useless, unchecked knowledge might become bothersome or dangerous.
See flatearthers, scientology, etc. for extreme examples.
Huh, is Windows screwing over GRUB and Linux not a bi-monthly experience anymore? Sad that it happened, but glad that it’s become novelty enough to write about.
We get to choose the genes when genetically modifying, and it usually takes a few years (plus health metrics and research once complete).
Conversely, when selectively breeding we breed for traits we are not guaranteed to actually get, and it takes a few decades (plus health metrics and research once complete).
I’m with you here, Neptune’s definition seems to overspecify the extract from Oxford they presented.
If we boil stereotyping down to its core components, then it appears to simply be an instance of correlation using subjective and non-complete data: “This individual exerts traits a, b, and c, which means they are highly likely to also exert traits x, y, and z.”
Or: “This individual is operating a car (unique trait/type of person), therefore their visibility and attention capacity are likely reduced or under strain (an overgeneralization, as driving might come naturally to them, and a fixed one, as I might assume that no one is a natural).”
^This is, of course, an oversimplification, as I’m going purely by Neptune’s words and my own understanding, and have not looked up additional sources.
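The inference pattern described above can be sketched as a toy co-occurrence estimate. All the data and trait names here are made up purely for illustration:

```python
# Toy sketch of stereotype-as-correlation: estimate how often trait "x"
# co-occurs with traits "a", "b", "c" in a small, non-complete (and thereby
# biased) sample. Entirely fabricated data, for illustration only.

observations = [
    {"a", "b", "c", "x"},
    {"a", "b", "c", "x"},
    {"a", "b", "c"},
    {"a", "b"},  # not every observation even contains all of a, b, and c
]

evidence = {"a", "b", "c"}
matching = [obs for obs in observations if evidence <= obs]
p_x_given_abc = sum("x" in obs for obs in matching) / len(matching)
# With this tiny sample the estimate is 2/3 -- a confident-looking number
# from data far too thin to support it, which is the core problem.
```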
Enshittification-wise it is both, since the current main driver of AI enshittification is LLM enshittification bleeding over. But blaming AI as a whole would be a bit like “fuckcars” being called “fucktransportation”.
A new “fuckcars”-like community whose name doesn’t even target the source of their frustration? Neat.
Gamedevs, researchers, and factory engineers sitting in a corner mumbling something about “appropriation”.
I’ve said it before and I’ll say it again: Microsoft’s and OpenAI’s hijacking of the term “AI” to mean “LLM”, and those who just blindly follow along and thereby help alienate those who work with AI (not LLMs), are a sickness.
Yup, and as long as you don’t let it dry on for a few hours after eating, final cleanup should be done in a jiffy.
Makes sense, I thought it was about goons (crime) or gooning (evil goon voice “what up, boss?”, “Yes boss”).
And then links to a similar sounding but ultimately totally unrelated site.
Luckily that was only the abbreviation and not the actual word. I know that language changes all the time, constantly, but I still find it annoying when a properly established and (within reason) widely used term gets appropriated and hijacked.
I mean, I guess it happens all the time in fiction, and in the sciences you sometimes run into a situation where an old term just does not fit new observations, but please keep your slimy, grubby, way-too-adhesive klepto-grabbers away from my perfectly fine professional umbrella terms. :(
Please excuse my rant.
LLMs (or really ChatGPT and MS Copilot) having hijacked the term “AI” is really annoying.
In more than one questionnaire or discussion:
Q: “Do you use AI at work?”
A: “Yes, I build and train CNN models (to find and label items in images), etc.”
Q: “How has AI influenced your productivity at work?”
A: ???
Can’t mention AI or machine learning in public without people instantly thinking of LLMs.
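For context, here’s a minimal sketch of the core operation inside the kind of CNN models mentioned above: a 2D convolution followed by ReLU. Pure illustrative Python, not any real training pipeline; real models stack many such layers and learn the kernels from data:

```python
# One convolutional "layer" by hand: slide a small kernel over a grayscale
# image, then apply ReLU. This is what a CNN repeats and learns at scale.

def conv2d(image, kernel):
    """Valid (no padding) 2D convolution of a grayscale image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def relu(fmap):
    """Zero out negative responses, keeping only positive activations."""
    return [[max(0.0, v) for v in row] for row in fmap]

# A hand-made vertical-edge detector on a tiny image: left half dark (0),
# right half bright (1). The kernel responds where brightness jumps.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1],
          [-1, 1]]
feature_map = relu(conv2d(image, kernel))
# The feature map lights up (value 2.0) exactly at the dark-to-bright edge.
```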
Spring cleaned my car today, making sure to get all those hard-to-reach places where salt from winter might have accumulated.
I know you’re supposed to properly wash it once a month, but that’s not happening.
Also, I’m super hyped for Homeworld 3! \o/
Multi-target laser guided auto-lock eye upgrade for sports now available at your local augmentation center. Ads included!
So the way that promotion is worded, if I stream The Finals to a friend for 15 minutes, I’m guaranteed to receive that item, shipping paid and all, like in a quest? Or are they just baiting you and entering you into a lottery?
Seems like an easy way to farm real life merchandise.
Yup, most definitely!
I’d much rather have just one unhinged uncle at St. Martin’s Day than have everybody come off as the unhinged uncle because nobody supervises the LLMs talking in their place, making it seem like being unhinged is normal and thereby creating artificial peer pressure in a truly wicked exercise of laziness.
It appears that, with the increase in popularity of machine learning, the percentage of people who properly source and sanitize their training data has steeply decreased.
As you stated, an MLAI can only be as good as the data it was trained on, and is usually way worse. The popularity and application of MLAIs built with questionable practices scare me, though at least their fuckups will keep me employed and likely busier than ever.
Dunno about kids, but I’ve seen my fair share of grown men who appear to think so.
Ladders tend to be more stable if you lean them on the tree trunk, and not the branch you’re about to saw off.