cross-posted from: https://lemmy.sdf.org/post/29335261
cross-posted from: https://lemmy.sdf.org/post/29335160
Here is the original report.
The research firm SemiAnalysis has conducted an extensive analysis of DeepSeek's actual training costs, refuting the narrative that R1 is so efficient that compute resources from NVIDIA and others are unnecessary. Before we dive into the actual hardware used by DeepSeek, let's look at what the industry initially perceived. It was claimed that DeepSeek spent only "$5 million" on its R1 model, which performs on par with OpenAI's o1, and this claim triggered a retail panic that was reflected in the US stock market. Now that the dust has settled, let's look at the actual figures.
…
You are right, it indeed doesn't qualify under the OSI definition. I wasn't aware they didn't share the code for training the model. My bad for assuming they did, based on the public GitHub repo.
Even so, it's still the most open commercial model out there, and it rivals anything US Big Tech has managed to come up with using their unlimited budgets. There is no denying that. The lack of training code only affects other companies with enough resources to make use of it. It's a huge win for consumers and a huge embarrassment for the US companies.
P.S. There is no such thing as "not fully open source". It either is or it isn't.