• Ultraviolet@lemmy.world

    It’s a fundamental problem with the tech in general. It inherently has no concept of “I don’t know” and will just be confident, specific, and wrong.

    • Hagdos@lemmy.world

      That’s blatantly untrue. My plant ID app gives multiple suggestions with certainty percentages.

    • Darohan@lemmy.zip

      This is blatantly false. Classification tasks like this all produce a level of certainty for each possible category - it's just up to the person writing the software to interpret those certainties in a way that's useful to the user, whether that's saying "I don't know" when the certainties are too spread out, or providing a list of options like other people in this thread have said their apps do. The problem is that "100% certainty" comes off well with the general public, so there's a financial incentive to make the system seem more certain than it is by reporting only the category with the highest degree of certainty (an argmax over the softmax output) and hiding the rest of the distribution.
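
      A minimal sketch of that idea, assuming a model that outputs one raw score (logit) per category - the labels, the threshold value, and the function names here are made up for illustration:

      ```python
      import numpy as np

      def softmax(logits):
          # Shift by the max for numerical stability, then normalize.
          z = logits - np.max(logits)
          exp = np.exp(z)
          return exp / exp.sum()

      def classify(logits, labels, top_k=3, min_confidence=0.6):
          # Turn raw scores into a probability distribution over all labels.
          probs = softmax(np.asarray(logits, dtype=float))
          order = np.argsort(probs)[::-1][:top_k]
          candidates = [(labels[i], float(probs[i])) for i in order]
          # Abstain when even the best candidate is too uncertain.
          if candidates[0][1] < min_confidence:
              return "I don't know", candidates
          return candidates[0][0], candidates

      labels = ["oak", "maple", "birch"]
      print(classify([2.1, 1.9, 1.8], labels))  # spread out -> "I don't know"
      print(classify([6.0, 1.0, 0.5], labels))  # peaked -> "oak"
      ```

      The same probabilities can also back a plant-ID-style list of suggestions with percentages; only the final reporting step decides whether the user sees one "100%" answer or the honest spread.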

    • Swedneck@discuss.tchncs.de

      uhhh do you have any clue how it actually works? i mean maybe there's some sort of visual AI tech that doesn't let you make it say "idk fam", but the standard stuff just gives a confidence score to each result, and you could just… set a minimum threshold…

      and like i'm pretty certain the chatbots currently available are generally capable of responding that they don't know - they're certainly capable of "recognizing" when it's a topic they're not allowed to talk about.