
Alan Kay, 2004

queue.acm.org/detail.cfm?id=10

Our plan and our hope was that the next generation of kids would come along and do something better than Smalltalk around 1984 or so. We all thought that the next level of programming language would be much more strategic and even policy-oriented and would have much more knowledge about what it was trying to do. But a variety of different things conspired together, and that next generation actually didn’t show up.

One could actually argue—as I sometimes do—that the success of commercial personal computing and operating systems has actually led to a considerable retrogression in many, many respects.

You could think of it as putting a low-pass filter on some of the good ideas from the ’60s and ’70s, as computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.

So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

...
If you look at software today, through the lens of the history of engineering, it’s certainly engineering of a sort—but it’s the kind of engineering that people without the concept of the arch did. Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.

@natecull @mona, ever the hard-nosed physical scientist, has likened the current state of software engineering to the early years of organic chemistry, when there was no clear notion of how molecules were actually structured, just a mass of empirical information about reactions. (When she's _really_ frustrated she compares software engineering to mediaeval alchemy.) Mona's a bit harsh, perhaps. But it's amazing how little convergence there's been in the field on a unified set of building principles, as there is in mechanical engineering.

@kara_dreamer @mona That's basically what Alan Kay says

"We had two ideas, really. One of them we got from Lisp: late binding. The other one was the idea of objects. Those gave us something a little bit like the arch, so we were able to make complex, seemingly large structures out of very little material, but I wouldn’t put us much past the engineering of 1,000 years ago."

@natecull @kara_dreamer I've never quite trusted the "object" as a programming entity, myself, but then my idea of a good programming language is Fortran.

@mona @kara_dreamer For me it's Prolog, except a theoretical Prolog that doesn't have all the terrible 1970s misfeatures the actually-existing Prolog has.

But I think we need a lot more transparency and clarity in our programming languages.

Sealed mystery-meat libraries that do 'something' to data - or to the entire state of the Internet behind your back - something you don't know about and can't prove anything about - give me the willies.

'Encapsulation' has bad side effects.

@natecull @kara_dreamer Indeed. I am not the least bit convinced that the information-hiding that's often cited as a major feature of such languages as C++ and Java is really a feature worth having around.

My lingering mistrust of the object model is not terribly well-reasoned, perhaps. I somewhat dislike scattering "state" in little parcels around the code, rather than keeping it in one place.

@natecull @kara_dreamer @mona It seems to me the goal of "information hiding" is actually to prevent *mutation* of a value in another object. Reading it is fine.

In a language where everything is references, though, that means you have to block access entirely. And then you start hacking around this by implementing getters for everything that return immutable copies...

A million hacks to get around one bad design decision back at the beginning.
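
(A minimal sketch in Python of the hack being described - a getter that hands back a defensive copy, so callers can read the state but not mutate it. The Account class and its fields are invented for illustration.)

```python
from copy import deepcopy

class Account:
    """Holds a mutable transaction list that callers may read but not mutate."""

    def __init__(self):
        self._transactions = []          # the "hidden" mutable state

    def add(self, amount):
        self._transactions.append(amount)

    @property
    def transactions(self):
        # The getter returns a defensive copy: callers can inspect the history,
        # but appending to the returned list does not touch our internal state.
        return deepcopy(self._transactions)

acct = Account()
acct.add(100)
snapshot = acct.transactions
snapshot.append(-999)                    # mutates only the copy
assert acct._transactions == [100]       # internal state untouched
```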

@natecull @kara_dreamer @mona @tobascodagama it also (sometimes) allows you to evolve your underlying data model while maintaining the original API contract. Harder to do that with code that reads the underlying fields directly & doesn't expect them to change (you end up with denormalized models etc).
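
(A hedged Python illustration of that point - every name here is made up: as long as callers go through the accessor, the internal representation can be reorganised without breaking the original contract.)

```python
class UserV1:
    """Original model: stores the full name as a single string."""
    def __init__(self, full_name):
        self._full_name = full_name

    def full_name(self):
        return self._full_name


class UserV2:
    """Evolved model: first/last stored separately, but the same API contract."""
    def __init__(self, first, last):
        self._first = first
        self._last = last

    def full_name(self):
        # Same observable behaviour; callers never see the schema change.
        return f"{self._first} {self._last}"


# Code written against the original contract keeps working with both models:
for user in (UserV1("Ada Lovelace"), UserV2("Ada", "Lovelace")):
    assert user.full_name() == "Ada Lovelace"
```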

@tetron @kara_dreamer @mona @tobascodagama Yeah. There's also some very good arguments that we need some form of data hiding for security - crypto keys, etc.

To me, the important thing is that we have a way to definitely and formally state, in a machine-readable way, 'doing method X on object Y produces change Z or preserves property P' - which is more than type signatures can currently give us.
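
(A rough Python sketch of what such a machine-readable statement might look like as a checked contract - here only verified at runtime, where a real verifier would try to prove it. The `ensures` decorator and the Counter example are invented for illustration.)

```python
from functools import wraps

def ensures(postcondition):
    """Attach a machine-readable postcondition to a method.

    The predicate receives (state_before, object_after, result), so it can say
    'this method produces change Z' and/or 'preserves property P'.
    """
    def decorator(method):
        @wraps(method)
        def wrapper(self, *args, **kwargs):
            before = dict(vars(self))     # shallow snapshot of the prior state
            result = method(self, *args, **kwargs)
            assert postcondition(before, self, result), "postcondition violated"
            return result
        wrapper.postcondition = postcondition   # left inspectable for tools
        return wrapper
    return decorator

class Counter:
    def __init__(self):
        self.value = 0
        self.name = "counter"

    @ensures(lambda before, after, _: after.value == before["value"] + 1   # change Z
                                      and after.name == before["name"])    # preserves P
    def increment(self):
        self.value += 1

Counter().increment()   # passes; breaking the contract would raise AssertionError
```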

@kara_dreamer @mona @tobascodagama @natecull I believe there are formal verification systems that let you do that. The challenge, of course, is that like test cases, complex constraints become programs in their own right that can have mistakes.

@tetron @kara_dreamer @mona @tobascodagama I agree that constraints are programs in their own right.

I tend to think that constraints should be *all* of the program - which is basically the Logic Programming approach. At the very least, constraints should be the *same type* of program as the rest of the program.

Then it's just a matter of 'one part of my program is correct, now I just need to make the rest line up with it'
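
(A toy Python illustration of "the constraints are the program": instead of writing the steps that compute an answer, state what must hold and let a solver - here a naive brute-force one - find values that satisfy it. A real logic language such as Prolog, or a proper constraint solver, does this far less crudely.)

```python
from itertools import product

def solve(constraint, domain, arity):
    """Return every tuple of values from the domain that satisfies the constraint."""
    return [vals for vals in product(domain, repeat=arity) if constraint(*vals)]

# "Find two digits whose sum is 10 and whose product is 21" - stated purely
# as a constraint, with no procedure for computing the answer.
answers = solve(lambda x, y: x + y == 10 and x * y == 21, range(10), 2)
print(answers)   # [(3, 7), (7, 3)]
```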

@tobascodagama @mona @kara_dreamer @tetron

Ie:

There should, at all times, be *at least one piece* of our program that we know to be correct, or given, or required.

These parts that we know (can prove) to be correct should include:

* The correctness of the underlying compiler - I mean if nobody can guarantee that, we shouldn't be running it and it shouldn't have been shipped.

* The interface with the OS

* Our domain knowledge

@tetron @kara_dreamer @mona @tobascodagama

Then, given those three, we *ought* to be able to prove whether our implementation of everything else matches what we know about our domain and what the OS requires.

If we can't prove that our solution is correct, we need to keep iterating it until we can.

I mean, it'll either produce a correct calculation or we can't prove that it can, right? And if we can't prove... iterate.

@tobascodagama @mona @kara_dreamer @tetron This sort of approach is why I think our programming languages need to be flexible enough to specify domain knowledge - and knowledge about interfaces and constraints - directly (in the form of, eg, relations, or predicates).

We should be able to separate the parts of programming which are 'art' from the parts which are 'maths', and let the machines handle the maths.
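
(A small Python sketch of stating domain knowledge as a predicate in the language itself, so the "maths" part can be checked mechanically against the hand-written "art" part. The shipping-fee domain and every name here are made up.)

```python
# Domain knowledge, stated as data plus a predicate rather than as a procedure:
FREE_SHIPPING_THRESHOLD = 50.00

def domain_rule(total, fee):
    """Shipping is free exactly when the order total reaches the threshold."""
    return (fee == 0) == (total >= FREE_SHIPPING_THRESHOLD)

# The 'art' part: somebody's hand-written implementation.
def shipping_fee(total):
    return 0.0 if total >= 50 else 5.0

# The 'maths' part: the machine checks the implementation against the rule.
for total in (0, 49.99, 50, 120):
    assert domain_rule(total, shipping_fee(total)), f"rule violated at {total}"
```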

@kara_dreamer @mona @tobascodagama @natecull The halting problem is the upper bound here: the constraint language has to be strictly less powerful than the actual code it describes; the abstraction is what makes it tractable.

@tetron @kara_dreamer @mona @tobascodagama I don't think the halting problem actually is relevant. We give it far too much credit.

The halting problem can be defeated by something as simple as *FOR loops*. That's what they were invented to do.
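
(A small Python illustration of that point: restricting yourself to loops with a fixed, finite bound makes termination obvious for that fragment, whereas an unbounded search is exactly where the halting problem lives.)

```python
def sum_to(n):
    """Bounded FOR loop: at most n + 1 iterations, so it trivially terminates."""
    total = 0
    for i in range(n + 1):
        total += i
    return total

def collatz_steps(n):
    """Unbounded WHILE loop: termination for every n is an open conjecture."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

assert sum_to(100) == 5050
```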

@tobascodagama @mona @kara_dreamer @tetron Eg ---

If a computer can't prove that a given design is correct, *then a human can't either*.

If you, a human, can't demonstrate that your algorithm produces the correct answer, then *you shouldn't be using it*.

You wouldn't build a bridge whose load calculation crashed your computer before it finished, would you? So why deploy software on the same basis?

@tetron @kara_dreamer @mona @tobascodagama So the logical conclusion is that

* we CAN prove the correctness of at least a small, finite subset of programs *

eg '1 + 1 = 2' is provably true, given integer math axioms.

So start from there. Use that small finite subset to prove more.
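
(For what it's worth, the '1 + 1 = 2' example really is machine-checkable as stated; in Lean 4, assuming a standard setup, the kernel verifies it by computation on the natural-number axioms:)

```lean
example : 1 + 1 = 2 := rfl
```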

Somewhere, we've lost the way. We've taken 'but the halting problem!' to be a kind of Goedel sentence on the programmer's mind that stops all thought.