TL;DR; Our country is fucked. SCOTUS just made the unelected, unaccountable, oligarchy of judges the top law of the land, nationwide.
Companies don’t want us serfs to own anything.
“At least…”
I feel like the 15% number is very, very low.
Good call. The software has a looooooong way to go before it’s anywhere close to needing new hardware… which is also already as good as it gets for VR/XR/AR. Any gains would be tiny without the software to take advantage of it.
Aww. I guess that means no more Solium Infernum content/DLC. 😞
That game is fantastic. I’d happily buy anything they put out.
I’m in California and I’ve been NPA (No Party Affiliation) for almost my entire voting-age life. So NPA is an option… at least in some states.
It’s really good these days. Kagi is still the best, but if you want free, DDG is the way to go.
I don’t just mean DLSS or frame generation as it exists today… I mean completely re-interpreting what is rendered before it’s displayed with complete temporal and deterministic consistency. Given that we’ve seen some demos of the concept in action, and that was over a year ago, I really don’t think it’s far off, either.
Imagine booting up classic Monkey Island, and Nvidia’s AI reinterpretation makes it look like a high-end modern animated TV show. That’s the kind of thing I’m talking about.
It’s a matter of time before we can use real-time AI upscalers. Nvidia (et al) have been working on this quite a bit.
In the meantime, the two options you mentioned are it.
Oh no, how dare you point out Genocide Joe’s blood-drenched hands! Don’t you know that holding Biden accountable for his mistakes and malfeasances is literally the same as handing Orange Mussolini the presidency on a silver platter?
/s
Did that really need to take almost 2 decades to implement, Apple? 😬
I always narrow my eyes when I hear someone talk about “safety” in the context of AI, because they usually just mean that the AI doesn’t engage in enough moral grandstanding when you ask it sketchy or risqué questions. That’s the same level of pearl-clutching that Tipper Gore espoused over music in the 90s.
But there are legitimate concerns, like lying about real people and topics, reproducing training data (especially personal information) too closely with the right kind of prompting, etc. The problem is that I can’t tell which kind of person this is. Are they upset because the AI can recommend marijuana strains… or because it can do something like leak people’s personal information? The article (and people involved in these efforts) too often lump it all together. See, for example: Anthropic
Now, all of that said, OpenAI is suuuper creepy. The way they started as a non-profit and then somehow managed to add a for-profit component… that is not acceptable and it’s disgusting that it’s allowed. It makes everything they do suspect and I’m inclined to believe what this exiting researcher says.
And tomatoes. Tomatoes used to be amazing. Even the worst ones were amazing.
Now they just taste like “wet”. If you want a good tomato you have to track down lovingly and carefully bred heirloom plants and grow them yourself.
All those different kinds of banana. All we get is Cavendish, which is, like, the worst of all the amazing banana varieties.
They’re readily available in the LA area. You just need to visit an Asian specialty market.
I don’t consider myself Christian purely because of the word’s connotation, but I am absolutely on board with everything Yeshua taught… including full-blown pre-Marx communism.
Became? Always was… even at the end of WW2. Albert Einstein, who was Ashkenazi Jewish himself, even opposed it. Taking away a native populace’s land and giving it over to outsiders has always been, and always will be, controversial.
The problem is that they are murdering their kids. If they wanted to off themselves, that’s one thing, but inflicting it on helpless children?
Then every time their commercial AI outputs anything even remotely infringing, they should be on the hook for every. single. incident. by. every. user. every. time.