
Very little of what a programmer expresses in a program is ever, upon compilation or interpretation, performed by the computer.

Instead, the computer performs a series of operations conceptually unrelated to most everything the programmer has written, which happen—only through their formulaic arrangement by the compiler/interpreter—to kick off (via the mechanical contrivances of hardware) chains of side effects which the human programmer interprets as performance of the task they programmed.

@beadsland Seems like you could write a similar story about a lot of things? E.g., I don't communicate with other people; I just cause air (via the mechanical contrivance of my mouth) to conduct certain waveforms that are conceptually unrelated to the thing I'm trying to convey, and that - only through their formulaic arrangement by the speech centers of my brain - happen to kick off side effects in other people's brains, which leads them to take actions that I interpret as having understood me.

@beadsland Given that one can write such stories about so many things, it seems like this one's existence doesn't say much about programming. Rather, all the stories are pointing to something about the world at large. I agree that it's fun to think about how anything can be viewed on different levels of abstraction, and programming might even be a better than average example of having multiple clean layers to look at. If that was your original point, I'm sorry to have missed it.

@lukesci Casting this in terms of existence, per se, does introduce more nuance.

A solipsistic argument would have it that even if my audience appears to understand English, I can't claim that they have understanding of what I said anyway, because I can't know that they're anything more than automatons, no more capable of understanding my speech than a computer.

But here we're assuming, arguendo, that the computer understands something, just not necessarily what the programmer understands.

@beadsland In my view, the computer, by doing what the programmer expects consistently, is demonstrating that it understands the same thing the programmer does. Same applies to humans. I don't know what it'd mean to act like you understand something but not understand it.

This is sort of a "Newton's flaming laser sword"* perspective, so I know I might be ignoring some nuances.

* "That which cannot be decided by experiment and observation is not worth discussing." Much stronger than Occam's razor.

@lukesci This view is essentially theory of mind -- the counter to the solipsistic argument.

The thing is, theory of mind only applies when we cannot know the state of the "mind" of the other. We must instead posit that mind on the basis of the other's behavior.

But we can know the state of the machine, because a computer is, by definition, a state machine. Thus the debate between theory-of-mind and solipsism is rendered moot. We don't need to posit based on side effects. We can look at state.

@beadsland I'm not positing --- I'm defining. Understanding is as understanding does, regardless of whether the mechanism that achieves that looks to me and my intuition like "understanding" is happening.

@lukesci But our concept of understanding is a positing; it is based on theory of mind, which you just articulated again here.

We associate understanding with a being that behaves as if it understands—not because we have any way to know it understands—but because it is useful to make such an association in order to navigate the world. That is what it means to talk of a theory of mind.

Which gives us theory of mind theory—the theory of how the human animal exercises the positing of other minds.

@lukesci I've given a very explicit example of being perceived to act like you understand but not understanding. People leave when you yell gibberish, which you interpret wrongly to mean they understood you were warning them about danger.

Nonetheless what we can observe in other humans is paltry compared to what we can observe in a state machine. We can only experiment on side effects that we attribute to understanding in humans, thus the requirement for a theory of mind. Not so with computers.

@lukesci We can experiment on the state machine of a computer where we cannot experiment on the state of human minds. We can only experiment on the relationship between stimuli and response in the case of humans.

Being able to experiment on the state of a state machine, however, exposes the bare fact that the state machine does not have any means to represent most of the concepts of higher level programming languages, at least not the state machines currently in manufacture.

@beadsland Sure --- if understanding is a pattern of behavior, one event can look like part of that pattern when the rest of the pattern is not actually there. That we can't observe it with complete accuracy doesn't mean it's indistinguishable from noise.

...and besides, why *do* people find loud noises annoying? I wouldn't be surprised if part of the reason is they're a vague proxy for danger. Maybe now the system doing the understanding is bigger than just the individuals in front of us.

@lukesci My point is that we can observe the state machine of a computer with complete accuracy, and in doing so we can see that higher level languages are, to the computer's state machine, indistinguishable from noise, in a way that they are not for the human programmer.

Touché! Although I would still split hairs here. If I leave because I find you annoying for making loud noises, the danger I understand is you making loud noises, not whatever danger you sought to convey by making those noises.

@beadsland I guess I'll agree that computer hardware, ne'er written to, memory freshly zeroed, cannot understand e.g. Python. (Try it, it won't work!) But I think people, especially colloquially, normally want to use the word 'computer' to talk about a system of things, including both hardware and software, that does collectively understand Python. If you want to use the word computer not in this sense, please don't let me stop you.

@lukesci Ah, well then, now we're in the territory of Searle's Chinese room and critiques thereof.

The colloquial is a misapplication of a theory of mind. Theory of mind is only appropriate in reference to a black box that responds to stimuli (e.g., a human). It is not appropriate if we can experiment upon the component parts directly.

The colloquial might be sufficient for a naive computer user ill-prepared to perform such experiments, but not for a programmer who can probe the internals directly.

@lukesci What we have is a question of analytical frame.

An end-user of Siri or Alexa can speak colloquially of that system understanding—more or less—spoken language. Presented with a black box, we resort to theory-of-mind experimentation upon the stimulus-response cycle of the UX.

A programmer of Python, on the other hand, does not have the problem of a black box, and thus cannot speak in the colloquial about Python as they would about, say, MS Word. They can observe the internals.

@lukesci Ah, but we're not talking about the side effects by which language is conducted to the recipient. We're talking about whether the recipient receives, from those side effects, the intended communication.

If when I shout "Fire!" people get up and leave, I may think I've successfully communicated the idea of danger to my audience. But if they speak no English, and only happen to leave because they find me--the shouting foreigner--obnoxious enough to avoid, then I've communicated nothing.

@beadsland I don't know what the computer version of that would be, though; I feel like normally our abstractions are pretty faithful... maybe I think that a shell script that says "for file in $(ls)" iterates over files, because I don't know about its behavior on filenames with spaces?
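A minimal sketch of that pitfall, simulated here in Python with made-up filenames (the shell isn't actually invoked): the loop never iterates over files at all, only over whitespace-delimited text.

```python
# Rough simulation of why `for file in $(ls)` is not really "iterating over files":
# the shell word-splits the text that ls prints, so a filename containing a space
# arrives as two separate words. Filenames are invented for illustration.
filenames = ["notes.txt", "my report.txt"]   # what is actually on disk
ls_output = "\n".join(filenames)             # roughly the text `ls` prints
words = ls_output.split()                    # what the $(ls) expansion is split into
print(words)                                 # ['notes.txt', 'my', 'report.txt']
```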

@lukesci Good example. For starters, a computer has no concept of files, let alone filenames. Only byte addresses that may, as a side effect of doing other things, change their value to reflect a stream of data from a device.

But let's take the concept of "for..in". On an 8-bit machine, this has no direct translation. For example, it depends on the basic concept of a "block" of code--not a thing in machine code. Likewise, the closest thing the machine knows to iteration is "on condition branch n bytes".
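Even one rung down the ladder illustrates the point. This is CPython bytecode rather than machine code, so it still retains far more structure than an 8-bit instruction set, yet the "block" the programmer wrote already survives only as jump targets and offsets.

```python
# Disassemble a small for loop; exact opcodes vary by CPython version, but the
# loop body appears as a flat run of instructions ended by a backward jump
# (FOR_ITER plus a jump), not as any named "block" or "loop" construct.
import dis

def greet(names):
    for name in names:
        print(name)

dis.dis(greet)
```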

@lukesci So to achieve even a primitive while loop, one has to set up a series of operations that produce a condition the "on condition branch" operation can respond to, and do so in a way that the loop body doesn't span more bytes than the branch offset n can accommodate.

If you want that condition to depend on a complex data structure like a list of strings, you've got to do a lot more work, because the machine's language has absolutely no differentiation of data types, let alone data structures.
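A toy sketch of that gap, using an invented instruction set rather than any real 8-bit machine: the only control flow is "branch back n instructions on a condition," and nothing distinguishes a counter byte from a character or a pointer into a list of strings.

```python
# An invented toy machine: the only operations are "decrement a byte" and
# "branch back if that byte is not zero". No real ISA is being modeled.
memory = {0x00: 3}                 # one byte of "memory" holding a counter
program = [
    ("DEC", 0x00),                 # decrement the byte at address 0x00
    ("BNZ", 0x00, -1),             # if that byte is not zero, branch back one instruction
    ("HALT",),
]

pc = 0                             # program counter
while program[pc][0] != "HALT":
    op = program[pc]
    if op[0] == "DEC":
        memory[op[1]] -= 1
        pc += 1
    elif op[0] == "BNZ":
        pc += op[2] if memory[op[1]] != 0 else 1

print(memory[0x00])                # 0 -- the "loop" was never more than a branch offset
```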

@beadsland I definitely agree that the translation from what the programmer writes to what the machine executes is not a direct translation.

@lukesci It's not even close. This isn't a case of translating "soup" into another human language whose speakers can get only as far as "chicken bits in broth" because they have no word for soup.

This is a case of soup being rendered as nominal reference X, which the programmer may optionally associate with the glyphs "s" "o" "u" "p" on a display device at some future time, but for now is just nominal reference X, because the computer has no concept of objects that take up Cartesian space, let alone broth.
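A small Python illustration of the same point (the variable name and encoding are chosen for the example): the word exists for the programmer, while the runtime holds only an opaque reference and some byte values, with glyphs produced only if and when something renders them.

```python
# "soup" as the machine holds it: an address-like reference and raw byte values.
soup = "soup"                       # to the programmer: a word naming a concept
print(id(soup))                     # to the runtime: an integer identifying an object
print(soup.encode("ascii"))         # b'soup'
print(list(soup.encode("ascii")))   # [115, 111, 117, 112] -- numbers, nothing broth-like
```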
