The study tracked around 800 developers, comparing their output with and without GitHub’s Copilot coding assistant over three-month periods. Surprisingly, when measuring key metrics like pull request cycle time and throughput, Uplevel found no meaningful improvements for those using Copilot.

  • AnarchoSnowPlow@midwest.social · 10 points · 2 months ago

    I’ve tried it even for some boilerplate code a few times. I’ve ended up rewriting it every time.

    It makes mistakes like Junior engineers, but it doesn’t make them in the same way that junior engineers do, which means that as a senior engineer it takes me significantly more effort to review. It also makes mistakes that humans don’t, which is even weirder to catch in review.

    • leisesprecher@feddit.org · 4 points · 2 months ago

      Also my experience. It sometimes tries to be smart and gets everything wrong.

      I think code shows clearly that LLMs don’t actually understand what’s written. Often enough you can see it trying to insert a common pattern even though that pattern doesn’t make sense in context.
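A hypothetical sketch of the kind of failure described above — this is not actual Copilot output, just an illustration of a familiar pattern being applied where it doesn’t quite fit. The "iterate by index over the whole list" idiom is extremely common in training data, but pasting it into a pairwise computation produces an off-by-one error at the boundary:

```python
def deltas_suggested(xs):
    # The common "iterate by index" pattern, applied blindly:
    # xs[i + 1] runs past the end of the list on the last
    # iteration, raising IndexError.
    return [xs[i + 1] - xs[i] for i in range(len(xs))]

def deltas_fixed(xs):
    # What was actually needed: pairwise differences, which
    # zip handles without any index arithmetic at all.
    return [b - a for a, b in zip(xs, xs[1:])]
```

The suggested version compiles, looks plausible in review, and fails only at runtime on the final element — which matches the complaint that these mistakes are harder to catch than the ones human juniors make.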

    • TrippaSnippa@lemm.ee · 2 points · 2 months ago

      As a junior-to-mid-level developer I find myself having to rewrite the boilerplate code Copilot comes up with as often as not, or it gets things slightly wrong that I then have to go back and fix. I’m starting to think that most of the time it would be just as quick to write it all myself.