“The real benchmark is: the world growing at 10 percent,” he added. “Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.”
Needless to say, we haven’t seen anything like that yet. OpenAI’s top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail’s pace and requires constant supervision.
I’ve been working on an internal project for my job: a quarterly report on the most bleeding-edge use cases of AI, and some of what’s been achieved is genuinely impressive.
So why is the AI at the top end amazing, yet everything we actually use is a literal piece of shit?
The answer is the chatbot. If you have the technical nous to program machine-learning tools directly, they can accomplish truly stunning work at speeds not seen before.
If you don’t know how to do, for example, a Fourier transform, you lack the skills to use the tools effectively. That’s no one’s fault; not everyone needs that knowledge, but it does explain the gap between promise and delivery. The tool can only help you do faster what you already know how to do.
Same for coding: if you understand what your code does, it’s a helpful tool for unsticking part of a problem, but it can’t write the whole thing from scratch.
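To make the Fourier-transform example concrete, here’s the kind of snippet these tools can spit out in seconds (a minimal sketch assuming Python with numpy; the signal and sample rate are invented for illustration):

```python
# An AI assistant can generate this FFT snippet in seconds, but reading
# the result still requires knowing what a Fourier transform does.
import numpy as np

fs = 1000                          # sample rate in Hz (invented)
t = np.arange(0, 1.0, 1.0 / fs)    # one second of samples
# A 50 Hz sine wave buried in noise.
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)

spectrum = np.fft.rfft(signal)             # real-input FFT
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)  # frequency of each bin in Hz

# To read off the dominant frequency you have to know that bins map to Hz
# via the sample rate, and that rfft only covers 0 to Nyquist (fs / 2).
peak = freqs[np.argmax(np.abs(spectrum))]
print(f"Dominant frequency: {peak:.1f} Hz")  # ~50.0 Hz
```

The code is trivial to generate; knowing why the peak sits at 50 Hz, or when you need windowing, is the part the tool can’t supply for you.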
The mere fact that you call an LLM “AI” shows how unqualified you are to comment on the “successes”.
What are you talking about? I read the papers published in mathematical and scientific journals and summarize the results in a newsletter. Anyone with undergraduate-level statistics, calculus, and algebra can read them; you don’t need a qualification, and you can just Google any term you’re unfamiliar with.
While I understand your objection to the nomenclature, in this particular context all major AI-producing houses, including those that only use LLMs as internal tools to achieve other outcomes (e.g. NVIDIA), count them as part of their AI offerings.
The mechanism of machine learning from training data, as used by LLMs, is at its core statistics without contextual understanding; the output is therefore only statistically predictable, not reliable. Labeling this “AI” is misleading at best, and in practice it directly undermines democracy and freedom, because the impressively intelligent-looking output leads naive people to believe the software knows what it is talking about.
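To make that concrete: at its core the model just samples the next token from a learned probability distribution. Here’s a toy sketch of the idea (Python with numpy; the candidate words and scores are entirely made up, not any real model’s internals):

```python
# Toy next-token sampling: the model assigns probabilities to candidates,
# then draws at random, so the same prompt can yield different outputs.
import numpy as np

rng = np.random.default_rng()

# Hypothetical scores a model might assign after the prompt
# "The capital of France is" (made-up numbers for illustration only).
tokens = ["Paris", "Lyon", "beautiful", "not"]
logits = np.array([4.0, 1.0, 0.5, 0.2])

def sample_next(logits, temperature=1.0):
    """Softmax over scaled logits, then a weighted random draw."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for stability
    probs /= probs.sum()
    return rng.choice(len(tokens), p=probs)

# Statistically predictable (usually "Paris") but never guaranteed:
for _ in range(5):
    print(tokens[sample_next(logits)])
```

That’s the whole trick: a weighted dice roll over tokens, with no concept of what a capital city is.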
People who condone the use of the term “AI” for this kind of statistical approach are naive at best, and snake-oil vendors or outright enemies of humanity at worst.
Can you name a company that has produced an LLM and doesn’t generally refer to it as part of “AI”?
Can you name a company that produces AI tools and doesn’t have an LLM as part of its “AI” suite?
How do those examples not fall into the category “snake-oil vendor”?
What would they have to produce to not be snake oil?
Wrong question. “What would they have to market it as?” → LLMs / machine learning / pattern recognition
Wouldn’t you just take issue with whatever the new name was instead? “Calling it pattern recognition is snake oil; it has no cognition,” etc.
Not this again… An LLM is a subset of ML, which is a subset of AI.
AI is a very, very broad field, and all of ML fits into it.
This is the issue with current public discourse, though. “AI” has become shorthand for the current GenAI hype cycle, meaning that for many people AI has effectively become a subset of ML.
For coding, it’s also useful for the menial grunt work that’s easy but just takes time.
You’re not going to replace a senior dev with it, of course, but it’s a great tool.
My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine-tuned properly, though.