• Almrond@lemmy.world · 2 months ago

    Hey, I have worked on this exact machine before, so it's neat to see they are finally decommissioning it. It would be a terrible purchase to actually use these days, though: for the cost of moving and deploying it, you could rock a few Hopper or Grace clusters that would outperform it for less than half of the operating overhead.

    I fully expect it to get parted out. The actual components would be far more useful on their own as cheap homelab systems, and would be a much better ROI than using it as-is. This thing is water-cooled; just the plumbing would be a nightmare to deal with if you aren't set up for it, and if you are, you would be better off going with a modern architecture anyway.

      • Almrond@lemmy.world · 2 months ago

        We were running meteorological models mostly, but I did have a colleague who was trying to use it to predict wildlife migratory patterns using topographical mapping. It was batched out on a few projects at any given time while I was there; it was essentially timeshared between a few different research departments.

  • mox@lemmy.sdf.org · 2 months ago

    Power consumption: 1.7 MW

    I hope it stays decommissioned. We’re burning up the planet too fast already, and old computers tend to be far less efficient than modern ones.

    • SeaJ@lemm.ee (OP) · 2 months ago

      Pop up a solar farm and you are good to go, baby!

      • Cort@lemmy.world · 2 months ago

        Yeah, you'd just need 10 MW+ of solar and like 40 MWh of batteries to power it 24/7.
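
        Back-of-envelope on that sizing (my own sketch; the peak-sun-hours figure is an assumption, not from the article):

        ```python
        # Rough solar/battery sizing for a 1.7 MW continuous load.
        # PEAK_SUN_HOURS is an assumed site average, not a measured value.
        LOAD_MW = 1.7
        PEAK_SUN_HOURS = 4.5

        daily_mwh = LOAD_MW * 24                       # ~40.8 MWh consumed per day
        solar_mw = daily_mwh / PEAK_SUN_HOURS          # ~9.1 MW of panels to break even
        battery_mwh = LOAD_MW * (24 - PEAK_SUN_HOURS)  # ~33 MWh to ride out the night

        print(f"daily energy: {daily_mwh:.1f} MWh")
        print(f"solar needed: {solar_mw:.1f} MW (before losses)")
        print(f"battery:      {battery_mwh:.1f} MWh (no margin)")
        ```

        With conversion losses and weather margin, that lands right around the 10 MW / 40 MWh ballpark above.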

  • Maxnmy's@lemmy.world · 2 months ago

    The specs seem to be just enough to run a Minecraft server that doesn’t freeze when one player explores new chunks.

  • BilboBargains@lemmy.world · 2 months ago

    It’s kind of lame that they need to junk the entire apparatus after only a decade. I get that processor technology moves on apace, but we already know it does that, so why doesn’t a universal architecture exist where nodes can be added at will?

    • Almrond@lemmy.world · 2 months ago

      It’s more of an operating-cost issue. It’s almost decade-old hardware: it was efficient in its day, but compared to new hardware it costs so much to run that you would be better served investing in something with modern efficiency. It won’t be junked, it will be parted out. If you are someone who wants a cheap homelab with InfiniBand and shitloads of memory, you could pick up a blade for a fraction of what it would otherwise cost. I fully expect it to turn into thousands of reasonably powerful servers for the prosumer and nerd markets instead of running as a monolithic cluster.

    • trolololol@lemmy.world · 2 months ago

      If you have too many “slow” nodes in a supercomputer, you hit a performance ceiling where everything is bottlenecked by the speed of things that are not the CPU: memory, disk for swap, and the network that carries partial results between nodes for further computation.

      Source: I’ve hung around too many people doing PhD theses on exactly these kinds of problems.
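
      To make that ceiling concrete, here's a toy strong-scaling model of my own (the constants are made up for illustration): per-node compute shrinks as you add nodes, but communication grows, so speedup peaks and then falls.

      ```python
      # Toy strong-scaling model: a fixed problem split across n nodes.
      # Constants are illustrative, not measurements from any real cluster.
      def runtime(n, compute=100.0, comm_per_node=0.5):
          # Compute time divides across nodes; communication cost grows with them.
          return compute / n + comm_per_node * n

      for n in (1, 8, 16, 64, 256, 1024):
          t = runtime(n)
          print(f"{n:5d} nodes: time {t:8.2f}, speedup {runtime(1) / t:5.1f}x")

      # Speedup peaks near n = sqrt(compute / comm_per_node), about 14 nodes here,
      # then degrades: past that point the network, not the CPU, sets the pace.
      ```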

      • BilboBargains@lemmy.world · 2 months ago

        I would imagine it’s very difficult to make a universal architecture, but if I have learnt anything about computers it’s that the manufacturers of software and hardware deliberately create opaque and monolithic systems, e.g. phones. They cynically insert barriers to reuse and redeployment. There’s no profit motive for corporations to make infinitely scalable computers. Short-sighted greed is a much more plausible explanation.

        • trolololol@lemmy.world · 2 months ago

          When you get to write and benchmark your own code, you’ll see that technology has limits and how they impact you.

          You can have as many Raspberry Pis as you want, but you’ll get faster computation by spending the same budget on Xeons with dozens of MB of cache, hundreds of GB of RAM, and gigabit network cards.

          Ten years from now those Xeons will look like Raspberry Pis compared to the best your money can buy.

          All of those things have to fit in a building, not on a desk. The best supercomputers look like Google’s data centers, but their specific needs dictate several tweaks done by very smart people. A supercomputer is supposed to solve one problem with one set of data at a time, not 100 problems with 1,000,000 data sets or user profiles at a time, which are much easier to partition and assign to just a thousandth of your data center each.
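
          The difference in miniature (my own illustrative sketch, not the commenter's): in a tightly coupled job every step needs neighbors' fresh values, forcing node-to-node synchronization each iteration, while independent requests can be sharded anywhere in any order.

          ```python
          # Illustrative contrast: tightly coupled vs. embarrassingly parallel work.

          def coupled_step(grid):
              # One stencil sweep: each cell averages with its neighbors, so on a
              # real cluster nodes must exchange boundary values every iteration.
              n = len(grid)
              return [(grid[max(i - 1, 0)] + grid[i] + grid[min(i + 1, n - 1)]) / 3
                      for i in range(n)]

          def handle_requests(requests):
              # Data-center style: each item is self-contained; any machine can
              # take any subset of the work, in any order, with no cross-talk.
              return {r: len(r) for r in requests}

          grid = [0.0] * 7 + [100.0] + [0.0] * 7   # a heat spike diffusing outward
          for _ in range(3):                        # order matters; sync every step
              grid = coupled_step(grid)
          print([round(x, 1) for x in grid])

          print(handle_requests(["alice", "bob", "carol"]))  # trivially shardable
          ```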