Almost a good take. Except that AI doesn’t exist on this planet, and you’re likely talking about LLMs.
In 2022 AI evolved into AGI and LLM into AI. Languages are not static, as Old English shows. Get with the times.
Changes to language made to sell products are not really the language adapting, but the language being influenced and distorted.
LLMs are one way of developing an AI. There are plenty of real conspiracy theories in this world; it’s better to focus on those rather than make stuff up.
There really is an amazing technological development going on, and you’re dismissing it over irrelevant semantics.
The acronym AI has been used in game dev for ages to describe things like pathing and simulation. These are almost invariably algorithms (such as A*, used by autonomous entities to find a path to a specific destination) or emergent behaviours - which are also algorithms, where simple rules are applied to individual entities (for example, each bird in a flock) to create a complex whole from many such simple agents. An example of this in gamedev would be Steering Behaviours; outside gaming, the Game Of Life.
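To make the "simple rules, complex whole" point concrete, here is a minimal sketch of the Game Of Life mentioned above (the set-of-live-cells representation is just one convenient way to implement it):

```python
from collections import Counter

def step(live):
    """Advance one generation. Each cell follows the same two local rules,
    yet globally you get gliders, oscillators, and other complex behaviour."""
    # Count live neighbours for every cell adjacent to at least one live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next tick if it has exactly 3 live neighbours,
    # or exactly 2 and was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker": three horizontal cells flip to three vertical cells and back.
blinker = {(0, 1), (1, 1), (2, 1)}
after = step(blinker)
# after == {(1, 0), (1, 1), (1, 2)} — the vertical phase
```

Nothing in `step` knows about blinkers or gliders; those patterns emerge from the two per-cell rules, which is exactly the sense in which gamedev has always called such systems "AI".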
They didn’t so much “evolve” as AI scared the shit out of us at such a deep level that we changed the definition of AI to remain in denial about the fact that it’s here.
Since time immemorial, passing a Turing test was the standard. As soon as machines started passing Turing tests, we decided Turing tests weren’t such a good measure of AI.
But I haven’t yet seen an alternative proposed. Instead of using criteria and tasks to define it, we’re just arbitrarily saying “It’s not AGI so it’s not real AI”.
In my opinion, it’s more about denial than it is about logic.
You’re using AI to mean AGI and LLMs to mean AI. That’s on you though, everyone else knows what we’re talking about.
Nobody has yet met this challenge:
Anyone who claims LLMs aren’t AGI should present a text processing task an AGI could accomplish that an LLM cannot.
Or if you disagree with my
Oops accidentally submitted. If someone disagrees with this as a fair challenge, let me know why.
I’ve been presenting this challenge repeatedly, and in my experience it leads very quickly to the fact that nobody — especially not the experts — has a precise definition of AGI.
https://arxiv.org/abs/2303.12712 has a good take on this question
Words have meanings. Marketing morons are not linguists.
As someone who still says a kilobyte is 1024 bytes, i agree with your sentiment.
Amen. Kibibytes my ass ;)
I’ve given up trying to enforce the traditional definitions of “moot”, “to beg the question”, “nonplussed”, and “literally”, and it’s helped my mental health. A little. I suggest you do the same; it’s a losing battle and the only person who gets hurt is you.
https://www.merriam-webster.com/dictionary/artificial%20intelligence
OP is an idiot though; hope we can agree on that one.
Telling everyone else how they should use language is just an ultimately moronic move. After all, we’re not French; we don’t have a central authority for how language works.
There’s a difference between objecting to misuse of language and “telling everyone how they should use language” - you may not have intended it, but you used a straw man argument there.
What we all should be acutely aware of (but unfortunately many are not) is how language is used to harm humans, animals or our planet.
Fascists use language to create “outgroups”, which they then proceed to dehumanize and eventually violate or murder.
Capitalists speak about investor risks to justify return on investment, and proceed to lobby for de-regulation of markets, which causes human and animal suffering through price gouging and factory farming of livestock.
Tech corporations speak about “Artificial Intelligence” and proceed to persuade regulators that - because these are “intelligent” systems - this software may be used in autonomous systems that go on to cause injury and death when they malfunction.
Yes, all such harm can be caused by individuals in daily life - individuals can be murderers, or extort people over something they really need, or a drunk driver can cause an accident that kills people. However, language that normalizes or facilitates such atrocities or dangers on a large scale is dangerous, and therefore I will continue calling out those who want to label the shitty penny-market LLMs and other deep learning systems as “AI”.
The term has been stolen and redefined. It’s pointless to be pedantic about it at this point.
AI traditionally meant now-mundane things like pathfinding algorithms. The only thing people seem to want Artificial Intelligence to mean is “something a computer can almost do but can’t yet”.
AI is, by definition these days, a future technology. We think of AI as science fiction, so when it becomes reality we just kick the can down the road on the definition.