• realharo@lemm.ee
    7 months ago

    This is the most obvious outcome ever. How could anyone not see this coming given the constant AI improvements?

    Though good prompts can still make a big difference for now.

  • WhatAmLemmy@lemmy.world
    7 months ago

    This is dumb. Literally nothing has changed. Anyone who knows anything about LLMs knows that they’ve struggled with math more than almost any other discipline. It sounds counterintuitive for a computer to be shit at math, but that’s because an LLM’s “intelligence” works through mimicry. They do not calculate math like a calculator. They generate every response from a probability distribution constructed from billions of human text inputs. They are as smart, and as fallible, as Wikipedia + Reddit + Twitter, etc. They are exactly as fallible as the dataset they were trained on.

    Think about how ice cream sales correlate with drownings. There is no direct causation, but that won’t stop an LLM from picking up the pattern or implying causality, because it has no real intelligence and doesn’t know any better.
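The classic confounder here is temperature: hot months drive both series up. A quick sketch with invented monthly numbers shows how strongly two causally unrelated series can correlate:

```python
# Made-up monthly data: temperature (the hidden confounder) drives
# both ice cream sales and drownings.
temps = [5, 8, 14, 19, 25, 30, 32, 31, 24, 17, 10, 6]
ice_cream_sales = [2 * t + 3 for t in temps]   # rises with heat
drownings = [t // 4 for t in temps]            # also rises with heat

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream_sales, drownings)
# r comes out close to 1.0: strong correlation, zero direct causation.
# A pattern-matcher sees this signal; it takes a causal model to reject it.
```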

    “Prompt engineering” is about understanding an LLM’s strengths and weaknesses, and learning how to work with them to build out a context and efficiently achieve an end result, whatever that desired result may be. It’s not dead, and it’s not going anywhere as long as LLMs exist.

  • NounsAndWords@lemmy.world
    7 months ago

    The hype around AI language models has companies scrambling to hire prompt engineers to improve their AI queries and create new products.

    Who is hiring all these prompt engineers? Who is ‘scrambling’ to find people for this? The jobs I do see have basically replaced “developer” with “prompt engineer” while keeping the same job requirements.