The system must be self-aware
That’s a long, long way off. What you’re describing is usually called “Artificial General Intelligence” (AGI), or sometimes “Strong AI.” That’s not how most people currently define AI.
Right now (to the best of my knowledge) there are no actual AI systems.
And that’s part of the reason most people in the field distinguish AGI from AI.
Everything that is being called AI is just a complex program following EXACTLY what its programming has set in stone, with no deviations.
Hmmm, that’s problematic. All AI are programs, and all programs execute as they were written to do. For there to be flexibility, it has to be written into the code. You could argue that LLMs have a lot of flexibility written into them. If I ask one to tell me a story about an adolescent bee who is embarrassed because she has no stinger, it’s going to do that. It’s not like the LLM authors wrote a bee story into the code in case someone asked for one. But that also doesn’t make them self-aware. You might also ask yourself how we’ll know if one is self-aware.
LLMs are a rather fascinating subject, with lots of high-level stuff that can be very daunting to understand. My layman’s understanding is that generative systems such as ChatGPT are almost entirely probability based.
If I gave an LLM the following fill-in-the-blank prompt: “In 2023 the current president of the United States is _____”, it would compare the prompt against its database of text and return “Joe Biden” with 95% probability, “Donald Trump” with 4%, and some other random stuff for the remaining 1%. It doesn’t actually KNOW what the prompt is, and it doesn’t reason out the answer. It’s only comparing what’s in its database against what’s been given to it.
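To make that concrete, here’s a minimal toy sketch in Python of that “pick the likeliest continuation” step. The prompt, the candidate completions, and the probabilities are all made up for illustration; a real LLM computes its distribution from billions of learned parameters rather than a lookup table, but the final sampling step works roughly like this:

```python
import random

# Made-up next-token distribution for one hypothetical prompt.
# A real model computes these probabilities on the fly; this
# hard-coded table exists only to illustrate the sampling step.
NEXT_TOKEN_PROBS = {
    "In 2023 the current president of the United States is": {
        "Joe Biden": 0.95,
        "Donald Trump": 0.04,
        "<something else>": 0.01,
    },
}

def complete(prompt: str) -> str:
    """Pick a continuation by weighted random sampling."""
    distribution = NEXT_TOKEN_PROBS[prompt]
    tokens = list(distribution.keys())
    weights = list(distribution.values())
    # random.choices does weighted sampling: "Joe Biden" comes back
    # about 95 times out of 100, with no reasoning involved.
    return random.choices(tokens, weights=weights, k=1)[0]

print(complete("In 2023 the current president of the United States is"))
```

Run it a few times and you’ll occasionally get the 4% answer, which is part of why these systems can confidently produce wrong completions.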
This is why the larger its database, the better the answers it can return. True AI would be able to respond to anything having been fed only the contents of a dictionary. It could then reason out what each word means and creatively form words into legible responses.
If you are interested in a very cool semi-interactive explanation of how generative systems work, check out this website. I found it very fascinating.
https://ig.ft.com/generative-ai/