This is blatantly false. Classification tasks like this all produce a level of certainty for each possible category - it’s just up to the person writing the software to interpret those certainties in a way that’s useful to the user, whether that means saying “I don’t know” when the certainties are too spread out, or providing a list of options like other people in this thread have said their apps do. The problem is that “100% certainty” comes off well with the general public, so there’s a financial incentive to make the system seem more certain than it is by reporting only the top-scoring category (taking the argmax over the softmax probabilities) and hiding the rest.
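To illustrate the point: here’s a minimal sketch (Python/NumPy, with made-up labels and a made-up threshold - real apps would tune these) of turning raw model scores into probabilities and abstaining when no single category is confident enough:

```python
import numpy as np

def classify_with_abstain(logits, labels, threshold=0.6):
    """Return a label, or "I don't know" plus top suggestions if uncertain."""
    # Softmax: convert raw scores into probabilities that sum to 1.
    # Subtracting the max first is a standard numerical-stability trick.
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    probs = exp / exp.sum()

    # Rank categories from most to least certain.
    order = np.argsort(probs)[::-1]
    top = order[0]

    if probs[top] < threshold:
        # Certainty too spread out: abstain and offer a ranked list instead.
        suggestions = [(labels[i], float(probs[i])) for i in order[:3]]
        return "I don't know", suggestions

    return labels[top], [(labels[top], float(probs[top]))]

# A confident prediction vs. a spread-out one (hypothetical scores):
print(classify_with_abstain(np.array([5.0, 1.0, 0.5]),
                            ["oak", "maple", "birch"]))
print(classify_with_abstain(np.array([1.0, 0.9, 0.8]),
                            ["oak", "maple", "birch"]))
```

The threshold is exactly the “minimum limit” another commenter mentions below: the model’s output doesn’t change at all, only how the app chooses to present it.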
uhhh do you have any clue how it actually works? i mean maybe there’s some sort of visual AI tech that doesn’t let you make it say “idk fam” but the standard stuff just gives a point value to each result, and you could just… have a minimum limit…
and like i’m pretty certain the current chatbots available generally are capable of responding that they don’t know, they’re certainly capable of “recognizing” when it’s a topic they’re not allowed to talk about.
It’s a fundamental problem with the tech in general. It inherently has no concept of “I don’t know” and will just be confident, specific, and wrong.
That’s blatantly untrue. My plant ID app gives multiple suggestions with certainty percentages.
What’s your plant ID app?
inaturalist does this, and also lets other people suggest an ID so you can get a consensus.
PlantNet
My app does this too!
Feeling like half these commenters hating on this feature use one bad program and think the whole concept is bad.