• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: November 21st, 2023



  • “Knowledge is never useless”

    Going on a tangent here: While I fully agree with the above, there is an amount of knowledge beyond which fact-checking becomes bothersome, and some people just skip fact-checking altogether. One could argue that, while knowledge is never useless, unchecked knowledge might become bothersome or even dangerous.

    See flat-earthers, Scientology, etc. for extreme examples.



  • Ekky@sopuli.xyz to Science Memes@mander.xyz · Corn 🌽 · 3 months ago

    We get to choose the genes when genetically modifying, and it usually takes a few years (plus health metrics and research once complete).

    Conversely, when selectively breeding we can breed for traits which we are not guaranteed to actually get, and it takes a few decades (plus health metrics and research once complete).


  • I’m with you here; Neptune’s definition seems to overspecify the Oxford extract they presented.

    If we boil stereotyping down to its core components, then it appears to simply be an instance of correlation using subjective and incomplete data: “This individual exhibits traits a, b, and c, which means they are highly likely to also exhibit traits x, y, and z.”

    Or: “This individual is operating a car (a unique trait/type of person), therefore their visibility and attention capacity are likely reduced or under strain (an overgeneralization, as driving might come naturally to them, and a fixed view, as I might assume that no one is a natural).”

    ^This is, of course, an oversimplification, as I’m going purely by Neptune’s words and my own understanding, and have not looked up additional sources.



  • A new “fuckcars”-like community whose name doesn’t even target the source of their frustration? Neat.

    Gamedevs, researchers, and factory engineers sitting in a corner mumbling something about “appropriation”.

    I’ve said it before and I’ll say it again: Microsoft’s and OpenAI’s hijacking of the term “AI” to mean “LLM”, and those who just blindly follow along and thereby help alienate those who work with AI (not LLMs), are a sickness.





    Luckily that was only the abbreviation and not the actual word. I know that language changes constantly, but I still find it annoying when a properly established and widely (within reason) used term gets appropriated and hijacked.

    I mean, I guess it happens all the time with fiction, and in the sciences you sometimes run into a situation where an old term just does not fit new observations, but please keep your slimy, grubby, way-too-adhesive klepto-grappers away from my perfectly fine professional umbrella terms. :(

    Please excuse my rant.


  • LLMs (or really ChatGPT and MS Copilot) having hijacked the term “AI” is really annoying.

    In more than one questionnaire or discussion:

    Q: “Do you use AI at work?”

    A: “Yes, I make and train CNN models (to find and label items in images), etc.”

    Q: “How has AI influenced your productivity at work?”

    A: ???

    Can’t mention AI or machine learning in public without people instantly thinking of LLMs.
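
    For concreteness, here is a minimal sketch of the kind of non-LLM "AI at work" mentioned in the answer above: training a small CNN to label images. PyTorch is assumed purely for illustration, and TinyLabeler, the dummy tensors, and the class count are hypothetical stand-ins rather than a real labeling pipeline.

```python
# A hypothetical, minimal example of non-LLM "AI at work": a tiny CNN
# image classifier in PyTorch. All names here (TinyLabeler, the dummy
# tensors, 10 classes) are illustrative stand-ins, not a real pipeline.
import torch
import torch.nn as nn

class TinyLabeler(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two small conv blocks, then global pooling and a linear head.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyLabeler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on random stand-in data (8 RGB images, 64x64 pixels),
# where a real workflow would iterate over a labeled dataset.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```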