Cowards left out Navajo.
Syllables can vary in length. Japanese has very short syllables while English has rather long ones. Counting phonemes would make more sense.
I wonder how Thai manages to be the zipfile of languages.
It is multiplexed with five tones and a variety of different registers that signify relationship, status, and the interplay between speaker and listener depending on the situation.
- University Thai language learner, linguist, and professional Thai reader, writer, and speaker in Thailand for several years
My very casual understanding is that grammatical structure and gender aren’t really a thing, nor are articles, making it a very contextual, tonal language, so a zipfile isn’t even a bad metaphor.
However, in this case it seems like the human brain is the default Windows zip program.
I always thought that English was an efficient language.
Switch to Rust. I speak Rust btw.
On arch
Nah NixOS
I am pretty skeptical about these results in general. I would like to see the original research paper, but they usually
- write the texts to be read in English, then translate them into the target languages.
- recruit test participants from US/Western university campuses.
And then there’s the question of how do you measure the amount of information conveyed in natural languages using bits…
Yeah, the results are most likely very skewed.
So I did a quick pass through the paper, and I think it’s more or less bullshit. To clarify, I think the general conclusion (different languages have similar information densities) is probably fine. But the specific bits/s numbers for each language are pretty much garbage/meaningless.
First of all, speech rate is measured in the number of canonical syllables, which is a) unfair to non-syllabic languages (e.g. (arguably) Japanese), and b) favours (in terms of speech rate) languages that omit syllables a lot (like you won’t say “probably” in full, you would just say something like “prolly”, which still counts as 3 syllables according to this paper).
And the way they calculate bits of information is by counting syllable bigrams, which is just… dumb and ridiculous.
Alright, but dismissing the study as “pretty much bullshit” based on a quick read-through seems like a huge oversimplification. Using canonical syllables as a measure is actually a widely accepted linguistic standard, designed precisely to make fair comparisons across languages with different structures, including languages like Japanese. It’s not about unfairly favoring any language but creating a consistent baseline, especially when looking at large, cross-linguistic patterns.
And on the syllable omission point, like “probably” vs. “prolly”, I mean, sure, informal speech varies, but the study is looking at overall trends in speech rate and information density, not individual shortcuts in casual conversation. Those small variations certainly don’t turn the broader findings into bullshit.
As for the bigram approach, it’s a reasonable proxy to capture information density. They’re not trying to recreate every phonological or grammatical nuance; that would be way beyond the scope and would lose sight of the larger picture. Bigrams offer a practical, statistically valid method for comparing across languages without having to delve into the specifics of every syllable sequence in each language.
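To make the bigram idea concrete, here’s a toy sketch of how one might estimate an information rate from syllable bigrams: estimate bits per syllable as the conditional entropy of the next syllable given the previous one, then multiply by syllables per second. This is only an illustration of the general proxy being discussed, not the paper’s actual procedure; the function name, the sample syllables, and the 7.0 syllables/second figure are all made up.

```python
from collections import Counter
from math import log2

def bigram_info_rate(syllables, syll_per_sec):
    """Rough bits/s estimate from a syllable sequence.

    Bits per syllable is approximated by the conditional entropy
    H(next syllable | previous syllable), computed from bigram counts.
    Toy illustration of the bigram proxy, not the paper's exact method.
    """
    # Counts of each "previous" syllable (every position that has a successor).
    prev_counts = Counter(syllables[:-1])
    # Counts of each adjacent (previous, next) pair.
    bigrams = Counter(zip(syllables, syllables[1:]))
    total = sum(bigrams.values())

    h = 0.0  # conditional entropy, in bits per syllable
    for (a, b), c in bigrams.items():
        p_ab = c / total                 # joint probability of the pair
        p_b_given_a = c / prev_counts[a] # conditional probability of b after a
        h -= p_ab * log2(p_b_given_a)

    return h * syll_per_sec  # bits/syllable * syllables/second = bits/second

# Hypothetical example: a short, repetitive syllable stream.
sylls = ["ta", "ka", "ta", "ko", "ta", "ka", "ta", "ko", "ta", "ka"]
rate = bigram_info_rate(sylls, syll_per_sec=7.0)
```

Note how a perfectly predictable stream (e.g. strictly alternating syllables) yields zero bits regardless of how fast it is spoken, which is exactly the intuition behind normalizing speech rate by predictability.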
This isn’t about counting every syllable perfectly but showing that despite vast linguistic diversity, there’s an overarching efficiency in how languages encode information. The study reflects that and uses perfectly acceptable methods to do so.
Well I did clarify I agree that the overarching point of this paper is probably fine…
widely accepted linguistic standard
I am not a linguist, so apologies for my ignorance about how things are usually done. (Also, thanks for educating me.) But on the other hand, just because it is the accepted way doesn’t mean it is right in this case, especially when you consider that the information rate is also calculated from syllables.
syllable bigrams
Ultimately this just measures how quickly the speaker can produce different combinations of sounds, which is definitely not what most people would envision when they hear “information in language”. For linguists who are familiar with the methodology, this might be useful data. But the general public will just get the wrong idea and make baseless generalisations - as evidenced by comments under this post. All in all, this is bad science communication.
But the general public will just get the wrong idea and make baseless generalisations - as evidenced by comments under this post. All in all, this is bad science communication.
Perhaps, but to be clear, that’s on The Economist, not the researchers or scholarship. Your criticisms are valid to point out, but they aren’t likely to be significant enough to change anything meaningful in the final analysis. As far as the broad conclusions of the paper, I think the visualization works fine.
What you’re asking for in terms of methods that will capture some of the granularity you reference would need to be a separate study. And that study would probably not be a corrective to this paper. Rather, it would serve to “color between the lines” that this study establishes.
There’s always Google Scholar.
This was one of the weirdest things I had to learn when I was learning Spanish. The sounds come much faster, but the information density is similar. As a native English speaker, it felt like I was listening to a machine gun at first. Eventually I trained my ear, and now both languages sound the same speed.
This is also why, to me, rapidly spoken natural Spanish and Japanese sound oddly similar if I hear it out of “the corner” of my ear, so to speak.
Which is funny cause I kinda speak Spanish lol
Spanish and Japanese use the same sounds. For the most part, anyway; there are probably a few exceptions. This was unexpected and utterly blew my mind as a native Spanish speaker when I took Japanese lessons.
Take the longest, most complicated Japanese word. Write it out in romaji (Latin letters). And ask a native Spanish speaker to pronounce it. One who knows nothing of Japanese. They’ll pronounce it pretty much correctly. I was fascinated.
I recently had a conversation with a native Spanish speaker who lived in Japan and spoke Japanese fairly fluently. He said the exact same thing: it was surprising how similar they can be in this regard.
It’s long been suspected that Koreans are really fast at rhythm games and have high APM because their language gets to the point faster.
So if I’m reading this right, French (closely followed by English) tends to convey the most info per unit time?
Yes but they also utilize smell.
In Finnish, I can simply ask, “Juoksenneltaisiinko?” whereas in English, I have to say, “Should we run around aimlessly?”
Is that a common question to be asked in Finland?
Depends on how good the drugs you’ve got are.
What produces the stretched graphs like Italian and German? What do these humps mean?
Yeah but 30% of the information in French are the “uhhh’s” lmao
They solved that by not pronouncing half the language.
I am curious about Arabic. I feel like it should have the highest information rate.
The French/English/German curves are interesting, given the relationships between them.
I wonder if this implies English has more in common with French than German.
Or how the German and Italian curves are so similar, does that reflect a similarity in language or in how it’s used (cultural)?
Vocabulary-wise, English is a neo-Latin language like French. More than 50% of English words are of Latin origin, from Roman Latin to Anglo-Norman French to modern French. English has also lost almost all the noun declensions present in German and Old English, the exceptions being the genitive 's (like dog’s tail) and the plural, which takes an -s suffix (apple, apples), which makes it similar to French and the neo-Latin languages. So, there is something to it.
Huge amounts of English vocabulary came to us through French. English shares structure with Germanic languages, and retains some vocabulary, but a lot of what remains is considered the “vulgar” term for a thing, while the Romance-root word is the “proper” one. Largely thanks to the Norman conquest if I recall. French was the court language.
If I’m misremembering I’m sure someone will correct me. It’s been 20+ years since I took Latin 😂
I would imagine this is because there is a ‘comfortable’ rate of information exchange in human conversation, and so each given language will be spoken at a pace that achieves this comfortable rate.
So it’s not that the syllable rate coincidentally results in the same information rate, but the opposite - the syllable rate adjusts to match the desired information rate.
Interesting thought.
I’d add it’s probably also that 90%+ of conversation isn’t about “data transfer” in the technical sense, but relationship building. So information volume isn’t usually crucial.
Now let’s see this work done in technical fields, especially change management, maintenance, emergency services, etc, where time is crucial. Those environments tend to have very “coded” language, so we don’t have to say a paragraph whenever we call for a very specific function/tool/action.
I suspect the languages would still have similar curves, but the data rates would increase.
[German] Are you seriously claiming that the information-transmission speed of the German language is below average? What an outrage!