Extracting this from an old, long @cjd thread on JavaScript, philosophy, mythology, fascism and other things:

> I suspect that in the future, software, written language and speech will meld into one

mastodon.social/@cjd/102668016…
Sounds preposterous on the surface: aren't they expressed in entirely different media and social situations?

Well, yes, kind of, but they already blend together today, if often imperfectly. I imagine they will blend more seamlessly as time moves on.

When are they "one"? I don't know.

How would that look? I'm not from the future, not even a futurist.
I just think today's supposedly clear delineations are already breaking down, and that the scale from highly contextual, highly subjective, fuzzy spoken language to very precise, explicitly contextualized, bit-encoded, machine-executable language will keep getting smoother until you can't clearly say where one form of expression ends and the next begins.
Consider these "transgressions", some real and some fictional, against the supposedly clear delineation between auditory language, symbolic human language and machine-executable language, in particular "programmed" machine language compiled from human-readable source:

- People actually saying "lol"
- IM written speech
- Sign language
- Literate programming
- Screen readers, voice input
- Natural language processing
- Chat bots
- Wolfram Alpha
- Math
- Lojban
- "programs must be written for people to read, and only incidentally for machines to execute"
- Machine learning
- Schlock Mercenary universe "growpramming"
- DSLs
- Virtual assistants (Alexa/Siri/Cortana/Google Now)

@clacke

Who considers there to be a "supposedly clear delineation between auditory language, symbolic human language and machine-executable language"?

@hhardy01 I think anyone who sees "software, written language and speech will meld into one" and has the immediate reaction "impossible" holds this as an assumption, whether they're aware of it or not.
@hhardy01 I think we'll always have typical examples of each category, but I think the differences have been becoming throughout history, and will continue to become, less qualitative and more quantitative.
@hhardy01 I think the littoral zones between these seemingly discrete forms of expression offer an opportunity for people to unify expression across the forms, that there are benefits to doing so, and that people will somehow do it.

The counterforce is that specialized forms enable efficient communication; there's a reason that e.g. math grew its own notation rather than being expressed in running sentences. But I think a lot of the differences between domain languages are accidental rather than essential. Nobody sat down and earnestly designed math notation with interoperability and reusability in mind.

It could also be that laypersons would be fine expressing a domain in "universal language spectrum" terms while domain experts use a language different in nature. The in-between today is expert domain language embedded in a host social language, like business jargon or medical terminology.

@clacke

Yes. The fact that natural languages tend to obey Zipf's Law suggests that they evolve through a struggle between the speaker's drive to speak efficiently and the listener's drive to hear and understand efficiently.
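
(As a rough illustration of what Zipf's Law predicts, here is a minimal sketch that counts word frequencies in a corpus and compares the observed counts with the Zipfian prediction f(rank) ≈ f(1)/rank; the corpus file name is hypothetical.)

```typescript
// Minimal Zipf's Law check: count word frequencies, rank them, and
// compare the observed counts with the Zipfian prediction f(1) / rank.
// "corpus.txt" is a hypothetical plain-text corpus file.
import { readFileSync } from "node:fs";

const text = readFileSync("corpus.txt", "utf8").toLowerCase();
const words = text.match(/[a-z']+/g) ?? [];

const counts = new Map<string, number>();
for (const w of words) counts.set(w, (counts.get(w) ?? 0) + 1);

// Sort by descending frequency; ranked[0] is the most common word.
const ranked = [...counts.entries()].sort((a, b) => b[1] - a[1]);
const top = ranked[0]?.[1] ?? 0;

for (const [i, [word, freq]] of ranked.slice(0, 10).entries()) {
  const rank = i + 1;
  console.log(`${rank}\t${word}\tobserved=${freq}\tzipf≈${Math.round(top / rank)}`);
}
```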

These forms are already united in that they are all comprehensible by humans.

But can human cognition ever be modeled and encompassed by any kind of universal Turing machine?

Penrose argues, "No," in The Emperor's New Mind and Shadows of the Mind.

@hhardy01 Thanks! I was hoping someone would have had formal thoughts on this decades ago. =)

I'm not talking about humans being mentats or machines being general AIs though; I just think there can be more unification and more of a sliding scale than what we have today.

If I were to vocalize to a machine today for it to interpret, I'd use a subset of human spoken language formatted for Turing-complete, or maybe even sub-Turing, logic to understand.
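
(To make that concrete, here's a toy sketch, not any real assistant's grammar: a single regular expression, i.e. strictly sub-Turing machinery, recognizes a small fixed subset of spoken English. The commands and transcripts are invented for illustration.)

```typescript
// Toy "sub-Turing" spoken-command grammar: one regular expression
// (a regular language, so strictly less than Turing-complete) accepts
// a tiny fixed subset of spoken English. Commands and transcripts are
// invented for illustration, not taken from any real assistant.
const command =
  /^(turn (on|off) the (kitchen|bedroom) light|set a timer for (\d+) (seconds|minutes))$/i;

function interpret(transcript: string): string {
  const m = transcript.trim().match(command);
  if (!m) return "not understood";
  if (m[1].toLowerCase().startsWith("turn")) {
    return `light:${m[3]}:${m[2].toLowerCase()}`; // e.g. light:kitchen:on
  }
  return `timer:${m[4]}:${m[5].toLowerCase()}`;   // e.g. timer:5:minutes
}

console.log(interpret("Turn on the kitchen light"));  // light:kitchen:on
console.log(interpret("set a timer for 5 minutes"));  // timer:5:minutes
console.log(interpret("ponder the nature of light")); // not understood
```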
@hhardy01 Of course @Wolf480pl in the original thread said it better and more concisely than I could here:

> Now, I can imagine a situation where the whole spectrum is being utilized, and you can pick any point on the spectrum independently on whether you're talking to a human or a machine.

Or, if we believe Penrose as you referenced, not quite any point of the spectrum, only most of the points. =)

This is not quite all of what I'm saying here though; it could be done with today's fragmented forms too.

Also, I had too much bubbling in my mind and had to spill it, and see, I received two interesting book references for it! =)