• Technus@lemmy.zip
    2 months ago

    I have a notion (not quite a theory, maybe a hypothesis) that hallucination in artificial neural networks is not a failure mode unique to ANNs, but an inherent property of any neural network, artificial or biological.

    Essentially, I posit that a neural network by itself is incapable of maintaining coherence without a rigid external framework, such as consistent feedback during training for an ANN, or the laws of physics for a biological one.
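    That claim can be sketched with a toy recurrent network (a hypothetical illustration, not anything from the thread; the names `step`, `W`, and the scaling constants are all made up for the sketch): left to feed on its own output, the state collapses toward the same attractor no matter where it started, while a persistent external signal keeps the state pinned to something input-dependent.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 32

    # Random recurrent weights, rescaled so the update is a contraction
    # (spectral norm < 1): iterating it squeezes all states together.
    W = rng.standard_normal((n, n))
    W *= 0.9 / np.linalg.norm(W, 2)

    def step(state, external=None):
        """One recurrent update; `external` is an optional grounding signal."""
        drive = W @ state
        if external is not None:
            drive = drive + external
        return np.tanh(drive)

    # Ungrounded: two very different initial "percepts", each network
    # fed only by its own previous state.
    a = rng.standard_normal(n)
    b = -a
    for _ in range(200):
        a, b = step(a), step(b)
    # Both collapse onto the same attractor: the final state carries
    # essentially no information about where it started.
    print(np.linalg.norm(a - b))

    # Grounded: same dynamics, but each state is driven by a persistent
    # external signal every step.
    ext = rng.standard_normal(n)
    c, d = np.zeros(n), np.zeros(n)
    for _ in range(200):
        c, d = step(c, ext), step(d, -ext)
    # The grounded states stay distinct, pinned by their external inputs.
    print(np.linalg.norm(c - d))
    ```

    It's only an analogy for the comment's point, of course: "coherence" here is just the state remaining a function of its input rather than of the network's own dynamics.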

    This would explain why people start tripping balls in sensory deprivation chambers. And it provides a counterargument to any thought experiment or philosophy that involves a disembodied brain vividly hallucinating reality.

    • radix@lemm.ee
      2 months ago

      That’s really interesting! I guess I’ll incorporate this into my worldview now.

    • kautau@lemmy.world
      2 months ago

      Ironically, this isn’t too far from Aldous Huxley’s theory of human perception:

      https://en.wikipedia.org/wiki/Mind_at_Large

      Essentially, reality is, or contains, all the properties of hallucination, but our brain filters it, and psychedelic drugs in some way dilute or remove that filter.

      So the human brain by default filters out the “hallucination” mode of thought until we open that filter up, whereas ANNs start with it at baseline and then need rigor added to them to reduce the “hallucination.”

    • muntedcrocodile@lemm.ee
      2 months ago

      It’s either a counterargument or the best backup for the idea of a disembodied brain hallucinating everything.