Consider the words on the screen. There are two sources of information.
- The words, how they’re arranged and such.
- The meaning that you assign to the words. Meaning drawn from a lifetime of memories.
99% of the information comes from the assigned meaning. So 99% of what’s going on here is you talking to yourself.
https://en.m.wikipedia.org/wiki/Private_language_argument (tangential, but still relevant)
Language arises out of social behavior, beginning with imitation and reinforcement in early childhood. Children who don’t learn language (by interacting with adults and older children) during the critical period of early childhood suffer serious developmental problems. So language is fundamentally anti-solipsistic, even anti-individualistic: you only acquire it by being brought into a community of language-users.
And written language begins as an encoding for spoken (or signed) language: everyone learns to speak (or sign) before they learn to read, and learning to read starts with learning associations between written structures and spoken ones. (For English-speakers, that means phonics: the relationship between letters or groups of letters, and sounds.)
Meaning isn’t “assigned” solipsistically; rather it’s “acquired” from a community of use. A single user can’t decide for themselves that “dog” means squirrel. I suspect that if you look at the word “dog” and try to convince yourself that it refers to a bushy-tailed tree-climbing nut-munching rodent, you will be aware that you are doing something very silly, something deliberately contrary to your knowledge.
In the past you received a symbol, and a meaning to go with that symbol.
In the present, you refer to that meaning when you see that symbol.
Yes, there was some kind of deeper communication in the past, when the dictionary was written (among other things, like the vastness of non-symbolic experience: sensations and such).
But in the present, in the act of “reading”, it’s just you, the dictionary (symbolic and non-symbolic), and the symbol stream. However you slice it, the dictionary is large and the symbol stream is small.
They aren’t different sources of info, but parts of the same process. And there are three of them:
- The utterance. Like you said, the words and how they’re arranged and such.
- Your internalised knowledge. It’s all that bundle of meanings that you associate with each word, plus your ability to parse how those words are arranged.
- The context. It’s what dictates how you’re going to use your internalised knowledge to interpret the utterance; for example, selecting one among many possible meanings (see the sketch below).
Without any of those three things, you get 0% of the info. They’re all essential.
So no, it is not solipsistic at all, since it depends on things outside your head (the utterance and the context), and those are shared by multiple individuals.
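To make that third part concrete, here’s a toy sketch in Python. The two-sense inventory for “bank”, the cue words, and the sense glosses are all invented for illustration (real disambiguation is far harder); the point is just that context selects among the meanings your internalised knowledge supplies for the same utterance:

```python
# Toy sketch: internalised knowledge as a sense inventory,
# context as what selects among the senses. All data here is invented.

SENSES = {
    "bank": {
        "finance": "institution that holds money",
        "river": "sloping ground beside a watercourse",
    },
}

# Hypothetical context cues associated with each sense.
CUES = {
    "finance": {"money", "loan", "account"},
    "river": {"water", "fish", "shore"},
}

def interpret(word, context_words):
    """Pick the sense whose cue words overlap the context most (crude heuristic)."""
    senses = SENSES[word]
    best = max(senses, key=lambda s: len(CUES[s] & set(context_words)))
    return senses[best]

# Same utterance token, different context, different meaning:
print(interpret("bank", ["loan", "money"]))  # institution that holds money
print(interpret("bank", ["fish", "water"]))  # sloping ground beside a watercourse
```

Drop any one of the three (the token “bank”, the sense inventory, or the context words) and the function can’t produce an interpretation at all, which is the “all essential” point above.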
Look at the semiotic theories stemming from Ferdinand de Saussure over a century ago: he would reverse the relative importance of your first and second sources, arguing that words derive most of their meaning from their arrangement and interrelationships, and that most of the meaning we see in the world flows from the relationship between signs/words into our perception of their referents.
That’s an interesting way of framing it. But I think it’s more correct to say that our shared understanding of language and definitions of words means we are mostly translating the writer’s thoughts and intentions as best we can from their mind into our minds.
It’s true that no two people will be 100% in sync with their understandings and internal definitions of things, but the overlap will be quite large. So hopefully, actual communication is possible from person A to person B.
However, it’s also possible for people to “talk past each other” as we often see in political discourse. So it’s highly imperfect. But I would not say it’s just a person talking to themselves when they read what another person has posted.
Your source number 2 involves a hard interdisciplinary research problem: “what is meaningful language use?” My grad school thesis was tangentially related to it, so I’m most familiar with it from the AI perspective. Early AI researchers quickly realized that you can’t just dump a dictionary into a computer and suddenly have it understand language. You can add high-level “scripts” to the computer (Schankian scripts), but then it will just be manipulating symbols (the Chinese room problem). You can tie the symbols to things in the world (symbol grounding) or to its own processing (embodied meaning), but how do you coordinate those symbols with other agents, be they people or machines?
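To make the “just manipulating symbols” point concrete, here’s a minimal hypothetical sketch of Schank-style script inference (the restaurant script and event tokens are invented for illustration; real Schankian scripts were much richer). The program “infers” unstated events purely by shuffling tokens, with nothing tying any symbol to the world:

```python
# Toy sketch of script-based "understanding" as pure symbol manipulation.
# The restaurant script is the classic example; this data is invented.

RESTAURANT_SCRIPT = ["enter", "order", "eat", "pay", "leave"]

def infer_unstated_events(mentioned_events):
    """Classic script 'inference': events in the script but not in the story.

    Given a story like 'John ordered a burger and left', the system
    'infers' that he also entered, ate, and paid, by nothing more than
    a set difference over tokens.
    """
    return [e for e in RESTAURANT_SCRIPT if e not in mentioned_events]

story = ["order", "leave"]  # tokens extracted from "John ordered and left"
print(infer_unstated_events(story))  # ['enter', 'eat', 'pay']
```

The program answers a question it doesn’t understand in any grounded sense: nothing connects “pay” to money or exchange, which is exactly the symbol grounding gap.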
Think about that last question for a moment. Do you have an answer? I don’t think anyone does yet, so whatever you’re thinking is probably a good start towards further reading. @fubo@lemmy.world’s reply points out some of the issues involved, and these issues suggest the problem’s interdisciplinary nature: psychology, sociology, corpus linguistics, philosophy (both analytic, e.g. Wittgenstein and Kripke, and continental, as suggested by @AbouBenAdhem@lemmy.world’s reply), cognitive science, neurolinguistics, etc. Literary theory is fun too; they say things like: an interpretation is situated, subjective, and performative. OK, sounds great, but how do you turn that into something that a computer does? It turns out that there are a lot of great ideas, but there’s still a lot of work to do to tie it all together. (And unfortunately way too many people think deep neural networks / LLMs can just solve it all by themselves, grumble grumble…)
Given the above, to answer your specific question: “Is [meaningful] language mostly solipsistic?” I think most people would say “probably not”, with the caveat that it depends on how you define your terms. Clearly there are very important processes that work only in your own cognitive system, but it seems likely that external factors also play a necessary role.