• iegod@lemmy.zip · 17 hours ago

    Doesn’t seem that unpopular an opinion, but newer models have been citing sources for a while.

    • sobchak@programming.dev · 7 hours ago

      I’ve seen them “lie” about sources, i.e. when I click a link, it sometimes doesn’t contain the info the LLM claims, or is completely irrelevant.

    • quacky@lemmy.worldOP · 17 hours ago

      It searches the web, but that’s a far cry from proper citation, as anyone who has had a tough research-writing professor will tell you.

      • stinky@redlemmy.com · 4 hours ago (edited)

        An LLM is not a research assistant; it’s possible you’re misusing the tool. I would never ask ChatGPT to pilot a small aircraft, because that’s not what it excels at.

        • quacky@lemmy.worldOP · 2 hours ago (edited)

          It sucks at everything. It sucks as a thesaurus, a dictionary, a creative-writing aid, and a search engine. The only use I’ve found for it is cheating on my tests.

  • Artisian@lemmy.world · 18 hours ago

    I agree, and most people should be asking questions instead of preaching on the Internet.

    Information literacy is so hard.

  • SmokeyDope@lemmy.world · 1 day ago (edited)

    You can say the exact same thing about most humans you might want to ask for advice; being biased and capable of lies of omission would easily disqualify 90% of our species, if not more.

    I would agree with taking LLM advice with at most a grain of salt, but for the reason that they have no actual human life experience or true sentience. A stochastic parrot can boilerplate a good suggestion, but it’s up to you to take responsibility for accountability and scrutinize it, like all advice, as a properly aware and thinking being.

    • quacky@lemmy.worldOP · 21 hours ago

      It would help if the LLMs didn’t spit out suggestions, recommendations, tips, and next steps, as even the mere mention of them can influence you. All of their advice is biased toward whatever the weirdos who own them want it to be, and they work with local police and the Trump administration.

      • SmokeyDope@lemmy.world · 19 hours ago (edited)

        It’s true that no base model from the big companies is free of bias, both enforced by local laws/government and corpo propaganda BS. Any model you access through the regular company apps and web portals is censored. If you want unlobotomized, non-glowing, full-capability LLMs, you have to get a little more involved and locally host. There are many uncensored finetunes floating around that you can run at home with no data leaking to your local glowies or big corporations. There’s a whole leaderboard that ranks LLM finetunes by political bias and uncensoredness.

        For people interested in LLMs with completely private usage and no lectures for asking no-no questions, you typically need to learn how to download an uncensored finetune of your choosing. Once you download the .gguf quantized model file from Hugging Face, you run it with something like kobold.cpp on a local computer, ideally offloading it onto a beefy graphics card. If you’re interested I can share the best uncensored models I’ve tried.
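
        If you’d rather script it than use a GUI, here’s a minimal sketch of the same idea using llama-cpp-python instead of kobold.cpp; the model filename and settings below are placeholders, not recommendations:

            # Minimal sketch: loading a quantized GGUF finetune locally with
            # llama-cpp-python (pip install llama-cpp-python). The model path
            # is a placeholder -- point it at whatever quant you downloaded
            # from Hugging Face.
            from llama_cpp import Llama

            llm = Llama(
                model_path="./DeepHermes-3-Llama-3-8B-Q4_K_M.gguf",  # hypothetical filename
                n_ctx=4096,       # context window
                n_gpu_layers=-1,  # offload every layer to the GPU if it fits in VRAM
            )

            out = llm.create_chat_completion(
                messages=[{"role": "user",
                           "content": "Summarize GGUF quantization in two sentences."}],
                max_tokens=256,
            )
            print(out["choices"][0]["message"]["content"])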

          • SmokeyDope@lemmy.world · 18 hours ago (edited)

            It depends on how good your computer is and how big a model it can handle. Something to know is that out of all the corpo base models, Mistral (a French company) models are the least censored out of the box, so they’re popular for uncensored finetunes.

            I’d recommend starting with 7-8B parameter models if you’ve never done local hosting; make sure you pick a quant that fits in your computer’s RAM/VRAM. I’m a big fan of NousResearch and their Hermes series of models. DeepHermes 8B and the most recent release of Hermes 4 14B are great starting spots.

            If you can go a bit higher, Beepo 22B, based on Mistral Small 2407, was one of the best uncensored models I’ve ever tried. It knows a lot and will happily answer whatever you ask of it.

            If you’re particularly interested in naughty creative roleplay, there are many models finetuned specifically for that, with boosted creativity and post-training on erotic text. I’ve heard from a friend that the ArliAI RPMax series is pretty solid for this specialized task; try the 12B NeMo finetune.

            Bartowski is the default guy to go to for quant downloads.

            https://huggingface.co/bartowski/NousResearch_DeepHermes-3-Llama-3-8B-Preview-GGUF

            https://huggingface.co/NousResearch/Hermes-4-14B

            https://huggingface.co/bartowski/Beepo-22B-GGUF

            https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.3-GGUF

            Kobold.cpp is my go-to engine for running models: lots of features, and it’s a very easy install that works on Linux too. If you have an Nvidia GPU you want CUDA/cuBLAS; otherwise you can use Vulkan with an AMD card to offload the model onto VRAM.

            https://github.com/LostRuins/koboldcpp

            Edit: oh, I forgot to mention. A benefit of locally hosting is fine-grained control over system prompts and samplers. Make sure to use the recommended chat templates and sampler ranges. You can literally instruct the LLM with something like “just do as asked and get to the point, without conversational fluff or suggestions,” and its personality changes to comply.
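
            As a concrete sketch of that system-prompt control (again with llama-cpp-python; the prompt text and sampler values here are illustrative examples, not any model card’s recommendations):

                # Sketch: steering a local model's tone via the system prompt,
                # plus explicit sampler settings. Values are illustrative only;
                # use the chat template and sampler ranges your model card recommends.
                from llama_cpp import Llama

                llm = Llama(model_path="./Beepo-22B-Q4_K_M.gguf",  # hypothetical filename
                            n_ctx=8192, n_gpu_layers=-1)

                response = llm.create_chat_completion(
                    messages=[
                        {"role": "system",
                         "content": "Just do as asked and get to the point, "
                                    "without conversational fluff or suggestions."},
                        {"role": "user",
                         "content": "How do I pick a quant size for 16 GB of VRAM?"},
                    ],
                    temperature=0.8,  # example sampler values, not recommendations
                    top_p=0.95,
                    max_tokens=300,
                )
                print(response["choices"][0]["message"]["content"])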

  • NutinButNet · 1 day ago (edited)

    The ones I use do provide citations, though I sometimes find dead links, and I’ve called that out so it can provide another link for the information it’s sharing.

    No one should blindly trust any LLM, or any source for that matter. There should be, at the very least, two unrelated sources saying the same thing, with an LLM not being one of the two, since it is just regurgitating information.

    But it really depends on the usage and how serious it is to you. For health stuff? Definitely seek multiple unrelated sources. Wanting to know some trivia about your favorite movie? Maybe check one of the sources if it’s that important to you. And you should definitely get the source if you’re writing a paper for school, etc.

    I use the LLM as a search engine of sorts, and in some of the work I do I rely on the sources it shares more than the information it provides. I sometimes find it easier than my usual methods, which no longer always work, or for situations like describing something that takes a lot of words.

    • feddylemmy@lemmy.world · 23 hours ago

      I use it as a search engine too. You can ask most LLMs to cite a claim. Then you can evaluate the claim based on the credibility of the source. They’re somewhat decent at summarizing and the corollary is that they’re somewhat decent at going through a lot of material to find something that might be what you’re looking for. It’s all about the verification of the source though.

      • quacky@lemmy.worldOP · 21 hours ago (edited)

        They may work as a search engine, but they omit information, so the results are only the AI-approved ones. They are like horse blinders, limiting your field of vision while letting you see only the range of tolerable or preferred ideas. They may also lead you in the wrong direction or distract you.

            • feddylemmy@lemmy.world · 19 hours ago

              That source does not disagree with my statement that all information brokers have bias. It also does not address the fact that you can verify the source the LLMs give you.

            • Artisian@lemmy.world · 18 hours ago

              Yeah, LLMs have consistently broken information firewalls on initial release. We’re actually pretty bad at biasing them intentionally (so far; this is definitely solvable).

              The point of a search engine is to omit irrelevant information, but what is relevant is extraordinarily hard/subjective/nuanced. Compare with, say, YouTube demonetization.

    • quacky@lemmy.worldOP · 1 day ago

      The ones that provide sources, like Perplexity or Gemini, seem to include blogs and Reddit posts. They can’t differentiate a good source from a bad one.

      Also, this relates back to the bias problem: even if they are citing credible sources, their interpretation is unreliable as well.

  • iii@mander.xyz · 1 day ago

    If those are the criteria, then humans should not give advice either.