MTGZone.com for all things Magic the Gathering
!MTG@mtgzone.com is the main one but !Spoilers@mtgzone.com is hopping with Duskmourn reveals right now
Edit: also !art@mtgzone.com if you just want to see some pretty pictures
The art for this was always so sick, even if the game didn’t live up to expectations
Ahhh, a friend had the GameCube version when I was younger and I knew about Link but not the other two.
Spawn?
I can only assume that the issue is that they’re trying to reduce the number of calls to the original instance. If you’re just scrolling by, you only see the post that’s cached on your own server, and it doesn’t communicate with the original instance until you open the post. Making it so that every time someone scrolls by a post it contacts the original instance sounds like it would massively increase the amount of traffic to the original instance, which goes against the idea of software that supports smaller, self- or community-hosted servers.
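Roughly what I mean, as a minimal sketch (this isn’t Lemmy’s actual code, just an illustration of the cache-first idea with made-up names):

```python
# Illustrative sketch only -- not the real federation implementation.
# The point: scrolling the feed reads the local cache, and the origin
# instance is only contacted when a post is actually opened.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    origin_instance: str
    title: str
    comments_synced: bool = False  # have we pulled comments from the origin yet?


class LocalInstance:
    def __init__(self) -> None:
        self.cache: dict[str, Post] = {}
        self.origin_calls = 0  # how often we bothered the origin instance

    def receive_federated_post(self, post: Post) -> None:
        """Origin pushes the post once at publish time; we just cache it."""
        self.cache[post.post_id] = post

    def render_feed(self) -> list[str]:
        """Scrolling the feed only touches the local cache -- zero origin traffic."""
        return [p.title for p in self.cache.values()]

    def open_post(self, post_id: str) -> Post:
        """Opening a post is when we finally ask the origin for fresh content."""
        post = self.cache[post_id]
        if not post.comments_synced:
            self.origin_calls += 1  # stand-in for an HTTP request to post.origin_instance
            post.comments_synced = True
        return post


if __name__ == "__main__":
    local = LocalInstance()
    local.receive_federated_post(Post("1", "mtgzone.com", "Duskmourn spoiler"))
    local.receive_federated_post(Post("2", "mtgzone.com", "New art drop"))

    local.render_feed()        # browse past both posts, no origin traffic
    local.open_post("1")       # only now do we contact the origin
    print(local.origin_calls)  # -> 1, despite scrolling past two posts
```

If every scroll-by triggered the equivalent of `open_post`, the origin would get hammered proportionally to every reader on every federated server, which is exactly what the caching is there to avoid.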
But simply knowing the right words to say in response to a moral conundrum isn’t the same as having an innate understanding of what makes something moral. The researchers also reference a previous study showing that criminal psychopaths can distinguish between different types of social and moral transgressions, even as they don’t respect those differences in their lives. The researchers extend the psychopath analogy by noting that the AI was judged as more rational and intelligent than humans but not more emotional or compassionate.
This brings about worries that an AI might just be “convincingly bullshitting” about morality in the same way it can about many other topics without any signs of real understanding or moral judgment. That could lead to situations where humans trust an LLM’s moral evaluations even if and when that AI hallucinates “inaccurate or unhelpful moral explanations and advice.”
Despite the results, or maybe because of them, the researchers urge more study and caution in how LLMs might be used for judging moral situations. “If people regard these AIs as more virtuous and more trustworthy, as they did in our study, they might uncritically accept and act upon questionable advice,” they write.
Great, so the headline of the article directly feeds into the issue the scientists are warning about when it comes to public perception of AI morality
Gorgeous! One of those games I have in my library and need to play
It’s kill or be killed
Google has been killing those off for a while. Nowadays it’s hard to find anything that isn’t just a copy-pasted SEO-bait non-article covered in ads
Hell yeah dude
Wish this AI bubble would burst already
Posted in the fandom thread but MTGZone.com is around if you enjoy Magic the Gathering
!MTG@mtgzone.com - main community
!Spoilers@mtgzone.com - new card reveals
!art@mtgzone.com - MTG artwork
There are also format-specific communities for Standard, Modern, Pauper, Commander, etc., but they don’t see as much use