CUDA is a proprietary platform that (officially) only runs on Nvidia cards, so making projects that use CUDA run on non-Nvidia hardware is not trivial.
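To illustrate why, here's a minimal sketch of what a CUDA-only code path looks like (the kernel and buffer names are made up for the example); everything in it goes through Nvidia's toolchain and runtime, which is why "just port it" usually means rewriting against something else entirely:

```
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical kernel: scales a buffer in place. The __global__ qualifier
// and the <<<grid, block>>> launch syntax below are CUDA-specific.
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* d_data = nullptr;

    // The CUDA runtime API (cudaMalloc, cudaMemset, ...) officially targets
    // Nvidia GPUs only, so every one of these calls is a porting task.
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    printf("done\n");
    return 0;
}
```

Tools like AMD's HIP can mechanically translate a lot of this kind of code, but anything built on Nvidia-only libraries (cuDNN, TensorRT, etc.) still needs real porting and re-validation work.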
I don’t think the consumer-facing stuff can be called a monopoly per se, but Nvidia can easily force proprietary features onto the market (G-Sync before they adopted VESA Adaptive-Sync, DLSS, etc.) because they have such a large market share.
Assume a scenario where Nvidia has 90% market share and its cards still only support adaptive sync via the proprietary G-Sync solution. Display manufacturers will obviously want to cater to the market, so most displays will ship with G-Sync support instead of VESA Adaptive-Sync, and 9 out of 10 customers will likely buy a G-Sync display because they own Nvidia cards. Now everyone has a monitor that supports some form of adaptive sync. Then AMD and Nvidia release their next GPU generation and, viewed in isolation (in this hypothetical scenario), AMD’s cards are 10% cheaper for the same performance and efficiency as their Nvidia counterparts. The problem for AMD is that even though they have the better cards per dollar, 9 out of 10 people would need a new display to get adaptive sync working with an AMD card (their current display only supports the proprietary G-Sync), and AMD can’t realistically undercut Nvidia by so much that the price difference also covers a new display; saving 10% on, say, a $500 card is $50, nowhere near the cost of a replacement monitor. The result: 9 out of 10 customers go for Nvidia again.
To be fair to Nvidia, most of their proprietary features are at least somewhat innovative. When G-Sync first came out, VESA Adaptive-Sync wasn’t really a thing yet, and when DLSS was released it was way better than any other upscaler in existence and required hardware that only Nvidia had.
But with CUDA, it’s a big problem: entire software projects just won’t (officially) run on non-Nvidia hardware, so Nvidia is able to charge whatever they want (unless what they’re charging exceeds the cost of switching to competitor products and, importantly, porting over the affected software projects).