I saw people complaining that companies have yet to find the next big thing with AI, but I am already seeing countless products offering good solutions for almost every field imaginable. What is this thing the tech industry is waiting for, and what are all these current products if not what they had in mind?
I am not great at understanding the business point of view of this situation, and I have been out of the loop on the news for a long time, so I would really appreciate it if someone could ELI5.
Here’s a secret. It’s not true AI. All the hype is marketing shit.
Large language models like GPT, LLaMA, and Gemini don’t create anything new. They just regurgitate existing data.
You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.
Until an LLM can understand why it is wrong, we won’t have true AI.
It’s just a stupid probability bucket. The term AI shits me.
I have different weights for my two dumbbells, and I asked ChatGPT 4.0 how to divide the weights evenly across all 4 sides of the 2 dumbbells. It kept telling me to use 4 half-pound weights instead of my 2-pound weights, and finally, after about 15 minutes, it admitted that, with my set of weights, it’s impossible to divide them evenly…
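The funny part is that this is trivially checkable by brute force. A minimal sketch in Python, using hypothetical plate sets since the poster’s exact weights aren’t given:

```python
from itertools import product

def can_split_evenly(plates, sides=4):
    """Brute-force check: can these plates be split into `sides`
    groups of equal total weight? Fine for small plate counts."""
    target = sum(plates) / sides
    # try assigning every plate to every possible side
    for assignment in product(range(sides), repeat=len(plates)):
        sums = [0.0] * sides
        for plate, side in zip(plates, assignment):
            sums[side] += plate
        if all(abs(s - target) < 1e-9 for s in sums):
            return True
    return False

# hypothetical plate sets -- the poster's exact weights aren't given
print(can_split_evenly([2, 2, 2, 2]))                # True: one plate per side
print(can_split_evenly([2, 2, 0.5, 0.5, 0.5, 0.5]))  # False: a 2 lb plate always overloads its side
```

A dozen lines of exhaustive search settles in milliseconds what the chatbot hedged on for 15 minutes.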
The most successful applications (e.g. translation, medical image processing) aren’t marketed as “AI”. That term seems to be mostly used for more controversial applications, when companies want to distance themselves from the potential output by pretending that their software tools have independent agency.
“recent AI developments”
so, you just want to talk about the current batch of narrow AI LLMs?
or are you open to all the graphics/video editing stuff? (Topaz’s quality is pretty amazing)
it’s a lot better than “is hotdog”.
it’s also slow.
remember, all these systems do is take a bunch of data in and guess until they get it right, then based on that, process more data and so on.
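that “guess until they get it right” loop is just iterative error correction. a toy sketch in Python (fitting a straight line, not a real LLM — the data and learning rate here are made up for illustration):

```python
# minimal "guess, check, adjust" loop: learn y = 2x with one weight
data = [(1, 2), (2, 4), (3, 6)]  # (input, target) pairs
w = 0.0  # initial guess

for step in range(200):
    # check: average gradient of squared error over the data
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # adjust: nudge the guess a little toward less error
    w -= 0.05 * grad

print(round(w, 3))  # converges to 2.0
```

real systems do the same thing with billions of weights instead of one, which is exactly why the quality of the data being guessed against matters so much.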
Have you ever read the story about the AI tank from the 90s?
short version of the story is: computer was fed a bunch of pictures. some with tanks, some without. after a while, it got great at identifying them.
when they tried it out with a tank, it kept shooting at trees.
turns out, all the pics with tanks were taken in the shade.
now, like I said: story.
but the point is, this is something that’s been worked on for decades. it’s as much a problem of what you teach as how you teach it.
so, to be clear: there are LOTS of “true uses”. the issue is “they aren’t ready yet”.
we’re just playing around with beta versions (effectively) while still being amazed at how far they’ve come.
Between OCR and an LLM, summarising scanned documents (something I do ~20% of the time) has roughly halved in mental effort and time. As I’m paid on billable hours, this is big for me. I have told nobody and have not increased my overall output commensurately. This is the only good kind of automation I’ve observed: bottom-up, no decrease in compensation, no negotiations.
I tried FreedomGPT for better personal ownership, but for now, the hardware isn’t up to snuff for my needs. With stronger processing and somewhat better open source models I’ll be sitting pretty.
Recently I saw AI transcribe a YT video. It was genuinely helpful.
Current gen AI is pretty mediocre. It’s not much more than the bastard child of a search engine and every voice assistant that has been around for the last ten years. It has the potential to be a stepping stone to fantastic future tech, but that’s been true of tons of different technologies for basically as long as we’ve been inventing things.
AI is not good enough to replace the majority of workers yet. It summarizes information pretty well and can be helpful with drafting any sort of document, but so was Clippy. When it doesn’t know something it can lie confidently. Lie isn’t really the right word but I’ll come back to that concept in a second. Incorrect information is frustrating in most cases but it can be deadly when presented by a source that is viewed as trustworthy, and what could be more trustworthy than an AI with access to the collective knowledge of mankind? Well, unfortunately for us AI as we know it isn’t really intelligent and the databases they’re trained on also contain the collective stupidity of mankind.
That brings us back to the concept of lying and what I view as the fundamental flaw of current AI; namely that any sort of data interpretation can only be as good as the data it describes. ChatGPT isn’t lying to you when it says you can put glue on your cheese pizza, it’s just pointing out that someone who said that got a lot of attention. Unfortunately it leaves out all the context which could have told you that pizza would not be fit to consume and presents the fact that it was a popular answer as if that is the only thing that defines the best answer. There’s so much more that needs to be taken into account, so much unconscious human experience being drawn from when an actual human looks at something and tries to categorize or describe it. All of that necessary context is really difficult to impart to a computer and right now we’re not very good at that essential piece of the puzzle.
If we could assume that all datasets analyzed by AI were free from human error, AI would be taking over the world right now. However, that’s not the world we live in. All data has errors. Some are easy to spot but many are not. AI firms are getting companies to salivate at the idea of easy manipulation of data in one form or another. They aren’t worried about the errors in the data because they view that as someone else’s problem and the companies all think their data is good enough that it won’t be an issue. Both are wrong. That’s exactly why you hear a lot of talk about AI right now and not all that much practical application beyond replacing customer service reps, especially in the business world. Companies are finding out that years of bad practices have left them with a dataset full of errors. Can they find a way to get AI to correct those errors? In some cases yes, in others no. In either case the missing piece preventing a full scale AI takeover is all that human background context necessary for relevant data interpretation. If we find a way to teach that to an AI then the world is going to look vastly different than it does today, but we’re not there yet.
There is truth in statistics. The minor errors are irrelevant inside the actual LLM. Problems like Google’s bad Reddit quotes have nothing to do with the LLM itself; that is RAG (retrieval-augmented generation) plus plain old bad standard code. The model itself is learning statistical word associations across millions of instances of similar data, and at that scale the minor errors wash out.
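To make “statistical word associations” concrete, here is a toy bigram counter in Python (the corpus is an invented example, nothing like real training data) showing how one corrupted sentence barely dents the counts:

```python
from collections import Counter

# count which word follows "the" across a tiny corpus: 100 clean
# sentences plus one corrupted sentence ("hat" instead of "mat")
corpus = ("the cat sat on the mat . " * 100 + "the cat sat on the hat .").split()

following = Counter(b for a, b in zip(corpus, corpus[1:]) if a == "the")
print(following.most_common(3))  # [('cat', 101), ('mat', 100), ('hat', 1)]
```

The single bad example survives as a one-in-two-hundred count; it only becomes a problem when something downstream (like a retrieval step) surfaces that rare tail verbatim.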
Generative tools hosted online are trash in terms of their controls and especially the depth of their capabilities. If you play with an enthusiast-level consumer machine, with ComfyUI, the full nodes manager (not just the comfyanonymous repo), and the hundreds of nodes, things change. I’ve spent the last week reading white papers, following code examples, and trying new techniques. The possibilities are getting exponentially complex in a short period of time. I think most people working on generative AI in the public space are turning inward at the moment because it is hard to grasp all the possibilities, or maybe I’m just not following the right people.
We are in a data grab phase where it is feasible to collect more data as opposed to refining what exists. I think the techniques are growing too fast to say what will be the most efficient way of refining data. Eventually a refinement phase is likely.
Hallucinations are not actually a thing. The reasons they happen are just too complex to explain to the consumer public, or no one would use the tool. If you learn about alignment and really start reading the tokenizer code, you’ll see that it is just a complex system where most errors are due to safety alignment. The rest are generalizations made for an average use case. The underlying capability is far more complex and nuanced than any publicly hosted stalkerware data-mining operation makes it appear. These real capabilities of the LLM are the building blocks of change. There are many more systems involved than just the tensor tables and word-relationship statistics.
https://en.m.wikipedia.org/wiki/File:Gartner_Hype_Cycle.svg
It’s not as helpful as everybody thinks, and slowly people are realizing that.
I think most of the media coverage is hype. That doesn’t directly answer your question… But I take everything I read with a grain of salt.
Currently, for the tech industry, its main use is to generate hype and drive the speculation bubble. Whether it’s useful or not, slapping the word “AI” on things and offering AI services increases the value of your company. And I personally think that if they complain about this, it’s because they want the bubble even bigger, but they’ve already done the most obvious things. But that has nothing to do with “finding a use” in the traditional sense (for the thing itself).
Other inventions came with hype too. Like smartphones (the iPhone). Everyone wanted one. Lots of people wanted to make cash off that. But still, when something is super new, it’s not always obvious what tasks it excels at and what the main benefits are in the long term. At first everyone wants in just because it’s cool and everyone else has one. In the end it turned out not every product is better with an app (or Bluetooth). And neither a phone nor AI can (currently) do the laundry and the other chores. So there is a limit to the “use” anyway.
So I think the answer to your question of what they had in mind is: what else can we enhance with AI, or just slap the word on, to make people buy more? And to look cool in the eyes of our investors.
I think one of the next steps is the combination with robotics. That will make it quite a bit more useful: input from sensors, so AI can take part in the real world, not just the virtual one. But that’s going to take some time. We’ve already started, but it won’t happen overnight. And for the near future I think it’s going to be gradual. AI just needs to get more intelligent, make fewer errors, and be more affordable to run. That will gradually give me a better translation service on my phone, a smart home I can interact with better, an assistant that can clean up the mess of files on my computer, organize my picture folder… But the revolution already happened. I think it’s going to be constant but smaller steps of progress from now on.
A buddy told me he used AI to mostly author a PowerShell script for some automation task at his work the other day. It sounded reasonably complex, and all he had to do was sanity-check the code and touch it up to make sure it worked correctly. I’ve barely dabbled in that area, but I was reasonably impressed with the small tasks I threw at it.
Are you writing a term paper or business plan and looking for ideas? Lol cuz that’s how you get ideas. Data mining for business ideas – an AI function, actually…
AI is seeing a lot of uses. At my job we’re using it for Change Control and Requirements Analysis. Legal discovery was already a thing for LLMs, but this threatens to make contracts simpler to digest. My graduate advisor mentioned how she’s using it to grade papers.
These aren’t the same use cases as the average person, and may not be using the same products, platforms, or models.
AI seems to be for coders what the PC was for Designers.
We used to have a guy for type, a guy for colours, a copywriter, an art director, and a graphic designer. Now it’s all one guy who’s responsible for everything, start to finish.