

Isn’t it too easy for current chatbots/LLMs to lie about everything?
Train one on garbage, or train it the wrong way, and it will agree with anything you want it to.
I asked DeepSeek what to visit nearby and for some URLs, and it hallucinated both the URLs and the places. Guess it wasn’t trained on anything about my local area.
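A cheap sanity check for this kind of answer is just seeing whether the links resolve at all: a dead URL is a strong hallucination signal, though a live one doesn’t prove the page says what the bot claims. A minimal Python sketch using only the standard library (the example URLs are made up):

    import urllib.request
    import urllib.error

    def url_resolves(url: str, timeout: float = 5.0) -> bool:
        """Return True if the URL answers an HTTP HEAD request with a non-error status."""
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400
        except (urllib.error.URLError, ValueError):
            # DNS failure, connection refused, malformed URL, etc.
            return False

    # Hypothetical URLs pasted from a chatbot answer
    for url in ["https://example.com", "https://totally-made-up-place.example/visit"]:
        print(url, "->", "resolves" if url_resolves(url) else "dead or fabricated")

Some sites reject HEAD requests or block scripted clients, so a False here isn’t conclusive on its own, but it catches the fully invented domains fast.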