When:
- the expected return becomes negative, or
- the risk/return ratio moves away from the efficient frontier with no other motivating factor.
Other than the fediverse support, are there any other differences from last year?
The most successful applications (e.g. translation, medical image processing) aren’t marketed as “AI”. That term seems to be mostly used for more controversial applications, when companies want to distance themselves from the potential output by pretending that their software tools have independent agency.
Agreed—and to be clear, I’m not advocating for self-driving lanes. But I think one of the potential motivations for the creation of such lanes is that human drivers would feel more comfortable if they weren’t sharing lanes with self-driving cars, just like they feel more comfortable not sharing lanes with buses. And by the same token, bus drivers and self-driving cars aren’t going to want to share lanes with each other, so there would be pressure to have different lanes for each type of traffic.
The difference with buses is that they’re less safe (or at least less able to avoid collisions) at high speed than cars are. So the purpose of bus lanes isn’t to increase the maximum speed of buses, but to increase their minimum speed during congestion.
If self-driving cars got to the point where they were significantly safer than human drivers (a big if), I could see the creation of dedicated self-driving lanes with higher speed limits.
What if the judge loses the libel case? The defendant could then argue the “unfounded” libel charge was symptomatic of a preexisting bias.
I’m guessing that could give the defendant ground to appeal the original ruling because the judge was biased by the alleged libel.
In the long run, humans whose biologies most closely resemble rats will have better medical outcomes and therefore greater evolutionary fitness.
What about the usage demographics within each country?
In underdeveloped/exploited countries, internet usage is more likely to be concentrated among the economic elites who formerly benefited from colonialism—so if increasing adoption in those countries just follows the pattern of other internet use, it could have the opposite effect from the one intended.
Compression algorithms can reduce most written text to about 20–25% of its original size—implying that this is the amount of actual unique information it contains, while the rest is predictable filler.
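As a rough illustration: a general-purpose compressor like zlib finds the redundancy in ordinary prose, though the exact ratio depends on the compressor and on how much text it sees (the 20–25% figure applies to stronger compressors over large corpora, so a short snippet like this one won't compress as far):

```python
import zlib

# A paragraph of ordinary English prose (any longish natural text will do).
text = (
    "Compression algorithms work by finding statistical regularities in "
    "their input. Written language is full of such regularities: common "
    "words repeat, letters follow predictable patterns, and grammar "
    "constrains what can come next. The more predictable the text, the "
    "less information each character actually carries, and the smaller "
    "the compressed output becomes relative to the original."
).encode("utf-8")

compressed = zlib.compress(text, level=9)
ratio = len(compressed) / len(text)
print(f"{len(text)} bytes -> {len(compressed)} bytes (ratio {ratio:.2f})")
```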
Empirical studies have found that chimps and human infants, when looking at test patterns, will ignore patterns that are too predictable or too unpredictable—with the sweet spot for maximizing attention being patterns that are about 80% predictable.
AI researchers have found that generating new text by always predicting the most likely continuation of the given input results in text that sounds monotonous and obviously robotic. Through trial and error, they found that sampling from the candidate continuations within roughly an 80% cumulative-likelihood threshold, rather than always choosing the single most likely one, produces results judged most interesting and human-like.
The point being: AI has stumbled on a method of mimicking the presence of meaning by imitating the ratio of novelty to predictability that characterizes real human thought. But we know that the actual content of that novelty is randomly chosen, rather than being a deliberate message.
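The selection rule described above is essentially what's now called nucleus (top-p) sampling. A minimal sketch with a made-up toy distribution (the probabilities here are purely illustrative, not from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

def top_p_sample(probs, p=0.8):
    """Sample an index from the smallest set of top candidates whose
    cumulative probability reaches p, with probabilities renormalized."""
    order = np.argsort(probs)[::-1]              # most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # keep just enough to reach p
    kept = order[:cutoff]
    kept_probs = probs[kept] / probs[kept].sum()
    return rng.choice(kept, p=kept_probs)

# Toy next-token distribution: greedy decoding would always pick index 0.
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
samples = [top_p_sample(probs) for _ in range(200)]
print(sorted(set(samples)))  # only the high-probability candidates appear
```

With p=0.8 the long-tail candidates (indices 3 and 4 here) are never chosen, but the output still varies among the plausible ones—predictable in aggregate, novel in the particulars.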
So they talk about this as if it were a new innovation at the time—but could it be that this kind of woodworking was more widespread and this was just the only example to survive? Could it have been a standard part of the Acheulian toolkit?
Wiktionary runs on MediaWiki—does that have the sort of functionality you need?
Yeah—the harness they had Marques wear was probably in part to make sure he didn’t fall over and touch his clothes or hands to the disks.
That was just an example—it might also be a problem for shoes with heels or textured soles, or for people with feet too small to cover enough disks at once.
Seems like it would only work for objects with large, flat bottoms—if you tried to use it barefoot it would likely rip your toes off.
Whatever confusion the metaphor may have caused in the minds of the public, I don’t think the solution is to ask neuroscientists to deliberately misrepresent their research—or to impose on themselves metaphorical language aimed at influencing policy rather than aiding scientific understanding.
I can think of a few reasons a translator might choose to do that:
- The original author was using language that was old-fashioned in their own time (e.g., a medieval Latin writer imitating Cicero, or a Hellenistic Greek writer imitating Thucydides)
- The work in question had its greatest historical impact long after its original composition, so its language would have seemed archaic to the relevant readers (e.g., the Vedas, the Avesta, or the Analects)
- The translator is trying to maintain consistency with canonical translations of related works done long ago (e.g., translating early Christian writings in the King James style)
- The translator wants to create a general sense of cultural distance, if placing the culture of the original work in a modern context would be misleading
It sounds like a rehash of Plato’s Euthyphro dilemma… which Smith must surely have been familiar with, yet never references?