We like to think of humans as defined by being tool-users and language-users. But while we respect people who create new tools and languages, we don’t prioritize such work, nor have we developed fields that study how to design, measure, and improve tools and languages more efficiently — in theory or in implementation.
There is the idea, in each case, that undirected evolution over time will sort out the best new tools or words or languages, organically producing [successful, widespread] inventions and [popular, widespread] terms that address all significant opportunities for us to become more effective [as tool- or language-users].
I’m not sure where this idea comes from. Three people whose thinking I admire have independently offered a version of this idea as a rationalization for why the current level of interest in tool- and language-crafting is ‘optimal’ or ‘sensible’. I think there are quick ways to quantify the extent to which this is not the case.
(As an example of this idea of default optimality: my clever linguist friend last night explained that there is a popular assumption in linguistics that “all living languages are equally good at transmitting all kinds of ideas,” modulo new vocabulary.)
As an example of quantifying what is missing: mathematics & physics in the last century have very actively started creating new collections of axioms, and trying to use them as a language to define what is known about math & the world. If one frames this as language-formation, it was the conscious design of a better, more elegant, more expressive language — one capable of explaining in simple terms new complex things that we observe or have discovered.
Stephen Wolfram makes the case that there are an enumerable number of different systems of logic (on the order of 50,000), and that we chose one of these long ago which we’ve built up into modern mathematics and logic, and use to define which sorts of theorems or proofs seem ‘elegant’ and ‘simple’ and can be derived quickly from its axioms. He suggests that choosing other systems of logic (and repeating the process of building out an infrastructure of theorems and propositions) will provide fertile ground for further advances in understanding the universe. What I like most about this argument is the attempt to identify opportunities for understanding which we cannot yet approach conceptually, for lack of language to take us there.
One could do the same by moving backwards in the history of mathematics, trying to describe problems of broad modern interest without concepts and terms developed in the last 200 years. But in that case one could still imagine a single broad highway of ‘increasing sophistication’ along which we progress, adding more nuanced language as we go. In contrast, I feel that here, as in most walks of life, we made a choice at some point to limit the building blocks of subtle communication; we are filling in the space of ideas that follows naturally from those early assumptions, but are no longer able to see what other building blocks would make possible. In particular, we have no way of estimating the gaps in our understanding, or of how to reach them.
So the question is: how do we reframe our development of languages outside of math so that we can start improving them consciously, measuring their effectiveness and acknowledging successes that we have discovered in the past through random-walk exploration? How do we merge the valuable properties of different spoken languages; create new auditory or visual languages; develop better sublanguages for effective communication in negotiation, love, large-scale collaboration? How can we use modern tools (Wordnik, ngrams) to take control of the language-creation process, identifying trends and demands, and helping visualize new discoveries across all languages?
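As a toy illustration of the “identifying trends and demands” step, here is a minimal sketch of trend-spotting over word-frequency data, of the sort one could export from an n-gram viewer or Wordnik’s corpus. The per-year frequencies below are hypothetical, included only to make the example self-contained; the point is just that term adoption can be measured rather than left to intuition.

```python
def trend_slope(years, freqs):
    """Least-squares slope of relative frequency over time (per year)."""
    n = len(years)
    mean_y = sum(years) / n
    mean_f = sum(freqs) / n
    num = sum((y - mean_y) * (f - mean_f) for y, f in zip(years, freqs))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

# Hypothetical relative frequencies (arbitrary units) for two terms, 2000-2008.
years = list(range(2000, 2009))
data = {
    "blog":     [1, 3, 7, 12, 20, 28, 35, 40, 44],    # rising adoption
    "telegram": [30, 29, 28, 27, 27, 26, 25, 25, 24],  # slow decline
}

# Terms whose usage is growing — candidates for where demand outpaces vocabulary.
rising = {word for word, f in data.items() if trend_slope(years, f) > 0}
print(rising)  # {'blog'}
```

With real corpus exports in place of the toy data, the same slope test (or a more careful regression) would surface which coinages are spreading and which are stalling — one small, measurable corner of the language-creation process.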