For years now, I haven't bought or assembled a new computer, because I am totally overwhelmed by the options available to me.

If we agree that ‘The Paradox of Choice’ is real, it seems to make sense, from a consumer's point of view, to have a much more limited choice between CPU models. For example, offer an entry, a business, and a pro model each year, add an extreme model for gamers, and give each of these a variant with a beefy integrated GPU.

But it seems like a good idea for the manufacturers too: they have to design, test, and build each of their models, create advertising, and so on; configuring their assembly lines alone costs money. Furthermore, compilers have to generate code for a specific architecture, which means that all the software I didn't compile myself ends up targeting the instruction set of the lowest common CPU, never fully utilizing whatever I bought.

Apple (not a fan ;-)) shows, IMHO, how it is done with Apple Silicon: even I understand which CPU would be the right choice for me. The Steam Deck is another success story, IMHO: as reference hardware, I can easily tell whether I can play a game, and it is easy to know whether my hardware is faster than a Steam Deck. Compare that to games with hardware requirements like ‘AMD TI 5800 8GB RAM’ (made-up model), which make my life miserable.

What I am looking for is fact-based knowledge:

  • Why does it make (commercial) sense for AMD/Intel to create so many models?
  • What are their incentives?
  • What would happen if they reduced the number of different CPUs they offer? (Is there historical precedent?)
  • hperrin@lemmy.world · 53 points · 2 months ago

    To a certain extent, processors that only partially work can be sold as a different model. For example, if AMD makes a bunch of 8 core processors, they might be sold as a number of different models based on what, if anything, is wrong with them. The best ones where everything works will be sold as the highest model. Then the ones that can’t achieve the highest clocks, but all the cores and the iGPU works will be sold as the next highest model. Then the ones where a couple cores are bad will be sold as 6 core models, and so on.

    They’re made with, essentially, toggle jumpers that can be “cut” to disable different parts of the CPU (I think this is in the chip’s internal firmware and not an actual hardware cut). So two CPUs with the same SKU might have different internal cores that are enabled/disabled. The SKU is basically just a guarantee of how many cores it has, what frequency they will run at, same for iGPU cores, IO capability, and cache.

    Obviously, this only applies to CPUs with compatible die sizes. That’s determined by the package it’s ultimately going to be mounted on.

    It’s called binning, and it helps achieve higher yields by allowing you to sell more of the “defective” units. The more variations they have, the more they can get for each chip on average. Like if they have a SKU with 8 cores and 32 PCIe lanes, and a SKU with 6 cores and 24 PCIe lanes, then a chip where all 8 cores work, but only 30 of the PCIe lanes work, would have to be sold as the cheaper 6-core SKU. Adding another SKU with 8 cores and only 24 PCIe lanes would let them price that same chip higher.

    https://www.techspot.com/article/2039-chip-binning/

    One of the coolest applications of this I’ve seen is the CPU AMD released recently which was literally a PS5 CPU with the graphics disabled (because it didn’t work, or didn’t work well enough to put in a PS5).

  • SMillerNL@lemmy.world · 21 points · 2 months ago

    Why does it make (commercial) sense for AMD/Intel to create so many models?

    Because there is demand for various types of systems. And on top of that, if you make a chip with 8 cores and two are defective… just sell a 6 core chip instead of throwing it away.

    What are their incentives?

    Money

    What would happen if they reduced the number of different CPUs they offer? (Is there historical precedent?)

    They would lose customers to competitors in that space. When AMD didn’t make EPYC chips, all servers were Intel Xeon.

  • weeeeum@lemmy.world · 7 points · 2 months ago

    Some of it comes from “binning” of chips. Even with modern technology, processor manufacturing is something of a gamble: how many of a chip’s transistors come out working correctly is somewhat random, so performance varies from die to die. Manufacturers then sort and separate these processors by speed, which is called “binning”.

    That’s why you see CPU models that are nearly identical to each other but vary slightly in speed.

    Plus, companies love money. They will make a product for every conceivable market. Say somebody doesn’t want to spend $200 but can afford something more than $150; there will be a CPU for that gap.

    Then different workloads require different types of CPUs. Single-core applications need high clock speeds, protein folding needs many slower cores, and servers need processors that prioritize stability.

  • Hucklebee@lemmy.world · 3 points · 2 months ago

    The biggest pet peeve I have with modern hardware naming is that in a laptop, the same model name performs like a lower tier of the desktop variant.

    Don’t call it a 3070 Ti if it performs like a 3060 Ti. Just call it a 3060 Ti from the start.