It could be argued that DeepSeek should not have these vulnerabilities, but let's not forget the world beta-tested GPT, and these jailbreaks are "well-known" because they worked on GPT as well.
Is it known whether GPT was actually hardened against jailbreaks, or did they merely blacklist certain paragraphs?
It's very hard to find genuine analysis of DeepSeek because, while we should meet all claims with scepticism, there is also a broad effort to discredit it for obvious reasons.
Isn’t it fun watching the world self-immolate, despite all the fucking warnings in every sci-fi written in history?
We are in the PKD timeline, not the Asimov timeline.
Not from this technology, regardless of the hype behind it. The only dangers this technology presents are excessive carbon emissions, and the risk that some idiot "true believer" implements this predictive text generator in some critical system where the algorithm can't perform.
Neat!