Alan Kay, 2004
http://queue.acm.org/detail.cfm?id=1039523
Our plan and our hope was that the next generation of kids would come along and do something better than Smalltalk around 1984 or so. We all thought that the next level of programming language would be much more strategic and even policy-oriented and would have much more knowledge about what it was trying to do. But a variety of different things conspired together, and that next generation actually didn’t show up.
One could actually argue—as I sometimes do—that the success of commercial personal computing and operating systems has actually led to a considerable retrogression in many, many respects.
You could think of it as putting a low-pass filter on some of the good ideas from the ’60s and ’70s, as computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture,
similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.
So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.
...
If you look at software today, through the lens of the history of engineering, it’s certainly engineering of a sort—but it’s the kind of engineering that people without the concept of the arch did. Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.
@kara_dreamer @mona That's basically what Alan Kay says
"We had two ideas, really. One of them we got from Lisp: late binding. The other one was the idea of objects. Those gave us something a little bit like the arch, so we were able to make complex, seemingly large structures out of very little material, but I wouldn’t put us much past the engineering of 1,000 years ago."
@mona @kara_dreamer For me it's Prolog, except a theoretical Prolog that doesn't have all the terrible 1970s misfeatures the actually-existing Prolog has.
But I think we need a lot more transparency and clarity in our programming languages.
Sealed mystery-meat libraries that do 'something' to data - or to the entire state of the Internet - behind your back, which you don't know about and can't prove, give me the willies.
'Encapsulation' has bad side effects.
@natecull @kara_dreamer Indeed. I am not the least bit convinced that the information-hiding that's often cited as a major feature of such languages as C++ and Java is really a feature worth having around.
My lingering mistrust of the object model is not terribly well-reasoned, perhaps. I somewhat dislike scattering "state" in little parcels around the code, rather than keeping it in one place.
@mona @kara_dreamer I agree. From a functional or declarative perspective, state *should* be centralised, and put into something like a transaction.
More and more I think of programming as 'capturing knowledge' more than describing processes... and forced encapsulation feels like a kind of deliberate ignorance.
I understand the intention behind "hiding the irrelevant details", but on a hostile net the details are no longer irrelevant.
@natecull @kara_dreamer Right! Completely agreed. Having every little entity carry around a little bit of state is a nightmare when one wishes to preserve _all_ state.
@natecull @kara_dreamer @mona It seems to me the goal of "information hiding" is actually to prevent *mutation* of a value in another object. Reading it is fine.
In a language where everything is references, though, that means you have to block access entirely. And then you start hacking around this by implementing getters for everything that return immutable copies...
A million hacks to get around one bad design decision back at the beginning.
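A toy sketch of the alternative (my names, nobody's API): in Rust the read/write split lives in the reference type, so you can hand out read access without either hiding the field or copying it defensively.

```rust
struct Account {
    balance: i64,
}

fn audit(acct: &Account) {
    // A shared `&` borrow can read freely...
    println!("balance = {}", acct.balance);
    // ...but any write through it is a compile error:
    // acct.balance += 1; // error[E0594]: cannot assign to `acct.balance`,
    //                    // which is behind a `&` reference
}

fn deposit(acct: &mut Account, amount: i64) {
    acct.balance += amount; // writes require an explicit `&mut` borrow
}

fn main() {
    let mut a = Account { balance: 100 };
    deposit(&mut a, 50);
    audit(&a);
}
```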
@tobascodagama @kara_dreamer @mona It does seem that immutability is one of those ideas that initially seems all high-falutin' and ivory-tower and useful only to mathematicians, but then turns out to make everything simpler and in fact is everywhere.
Eg corporate records like invoices are all immutable, for good reason.
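A sketch of that intuition as code (hypothetical types, mine):

```rust
#[derive(Debug)]
enum Entry {
    Invoice { id: u32, amount_cents: i64 },
    Reversal { reverses: u32 }, // a correction is a new record, not an edit
}

struct Ledger {
    entries: Vec<Entry>, // append-only: no update, no delete
}

impl Ledger {
    fn append(&mut self, e: Entry) {
        self.entries.push(e); // the only mutation the ledger admits
    }
}
```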
@natecull @tobascodagama @kara_dreamer @mona
I can go both ways on it, TBH.
While an immutable 'record' of something to go back and refer to is VITAL in certain contexts (particularly legally dangerous ones!), the motivation behind 'information hiding' in state material always seemed, to me, to be the assumption that a thousand monkeys would be thrown at every damned thing (in the form of untrained/untested programmers).
@mona @kara_dreamer @tobascodagama @natecull
Then it ends up being like the "safety" procedures that existed before actual human safety procedures existed -- it was all about protecting the 'product' and 'resources' (the already-written working code, or customer-facing code) from the uneducated fools you'd be hiring en masse.
A conveyor-belt, assembly-line dev process was assumed as the "next step", not 'more powerful languages'.
@natecull @tobascodagama @kara_dreamer @mona
(but just my thoughts -- my awareness of design and history on these elements is limited -- so I may just be farting around in a dark room and hoping it'll hit a light switch here)
@natecull @tobascodagama @kara_dreamer @mona @sydneyfalk choose #rust, and come to terms with `mut Arc<usize>` :p
@amphetamine @natecull @tobascodagama @kara_dreamer @mona
gotta say, I'm quite sold
thank you for the suggestion ^_^
@sydneyfalk @mona @kara_dreamer @tobascodagama Yeah, I'm open to object-like-things having private hidden state, *as long as* that state can be round-trip represented at any time by public state.
Eg you should always be able to run a constructor and get an object with known state, and also serialise that object out in a system-independent transportable form. So that 1) you can reason about the effect of methods, and 2) migrate objects between systems.
@tobascodagama @kara_dreamer @mona @sydneyfalk because if your language doesn't formally provide a way to move objects around and load/restore them, someone's just going to do it anyway by writing a buggy, incomplete interface class, or by saving and restoring the entire Sagan-blessed *virtual computer and OS*, to the tune of gigabytes of storage. Which, coincidentally, is exactly what we all do now.
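A minimal sketch of that round-trip rule in Rust (toy struct and field names are mine; assumes the serde and serde_json crates):

```rust
use serde::{Deserialize, Serialize};

// The fields are private to this module, but the derive guarantees the
// whole state has a system-independent serialised form...
#[derive(Serialize, Deserialize, Debug)]
struct Counter {
    count: u64,
    step: u64,
}

impl Counter {
    // ...and the constructor gives you an object with known state.
    fn new(step: u64) -> Self {
        Counter { count: 0, step }
    }

    fn tick(&mut self) {
        self.count += self.step;
    }
}

fn main() -> serde_json::Result<()> {
    let mut c = Counter::new(2);
    c.tick();
    let wire = serde_json::to_string(&c)?; // migrate it out...
    let back: Counter = serde_json::from_str(&wire)?; // ...and restore it
    println!("{wire} -> {back:?}");
    Ok(())
}
```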
@kara_dreamer @mona @natecull Mutability is for chumps. It seems like it makes things syntactically simpler, but the trade-off is that it becomes so much harder to know what's actually happening in a non-trivial system.
@kara_dreamer @mona @natecull immutability gets a bad rap because the most common languages are bad at it. Our Java house style defaults to making every variable final unless it absolutely cannot be final. But that means a ton of extra keystrokes, and you can't use final with fields, where it's *most* useful, unless the values you initialise with are known at instantiation. And for lots of types, collections most notably, "final" is not immutable anyway.
@tobascodagama @kara_dreamer @mona I would love to know just what would be the minimal amount of mutability a language/OS core could get away with.
The OS loader and garbage collector, I guess...
But if we then had, eg, FRP over immutable lazy lists or something, could we get a really tiny mutable kernel, and then a fully immutable pure-functional IO extensible language on top? Something that would even be safe to run code straight off the Net?
@tobascodagama @kara_dreamer @mona @natecull Something I've found weirdest when coming from ML/F# to Rust is that Rust values are immutable by default, but Rust is actually very mutation-y and it doesn't feel wrong.
Mutation isn't the problem, mutable aliasing is the problem. Rust strongly constrains aliasing behavior. Immutability solves the problem differently. Either way, one object can't sneakily change another under the hood.
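The aliasing rule in one toy example (mine, not from any real codebase):

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    let r1 = &v;
    let r2 = &v; // any number of shared, read-only borrows: fine
    println!("{} {}", r1.len(), r2.len());

    let w = &mut v; // one exclusive borrow: also fine...
    w.push(4);

    // ...but overlapping the two is what's outlawed:
    // let r = &v;  // error[E0502]: cannot borrow `v` as immutable
    // w.push(5);   //   because it is also borrowed as mutable
}
```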
@icefox @natecull @mona @kara_dreamer @tobascodagama EMBRACING mutation are you‽ They'll have you stripped of your golden λ and ejected from the Functional Fox Forum if they hear!
@Azure @tobascodagama @kara_dreamer @mona @natecull I know it's risky. But it's fine, really! I have it under control... it's mutable aliasing that's the problem, and if you control the aliasing you can mutate to your heart's content! Right?
...right?
@icefox @natecull @mona @kara_dreamer @tobascodagama Oh, that reminds me.
HERE! Have a Lisp that is NOT BASED ON THE LAMBDA CALCULUS: https://web.cs.wpi.edu/~jshutt/kernel.html
Staaare at it.
Feeeeel it.
Drink it in.
Let your mind consume it and vice versa.
@Azure @tobascodagama @kara_dreamer @mona @natecull Oh, now THAT'S interesting. If bewildering.
I feel like it's a shadow of lazy evaluation, or vice versa, but that's just a hunch.
@icefox @natecull @mona @kara_dreamer @tobascodagama it's macros, rehabilitated. There are arguments that a lazy language can do everything macros can do, which is kiiiinda' true, but not exactly.
@Azure @icefox @mona @kara_dreamer @tobascodagama Yeah, John Shutt is one of my heroes.
I don't understand *quite* how his Vau-Calculus proof works (I don't understand any of it at all) but I feel that rehabilitating FEXPRS must be trivially true.
Eg: it seems obvious to me that a macro *just is* a function over: 1, the dynamic environment, and 2, the program text it encloses. I mean, what else could it be?
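A toy sketch of that claim in Rust (all names mine, and nothing like Kernel's real semantics, just the shape of 'operative = ordinary function over env + unevaluated operands'):

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
enum Expr {
    Num(i64),
    Sym(String),
    List(Vec<Expr>),
}

type Env = HashMap<String, Expr>;

// The whole claim in one type: an operative is an ordinary function over
// (1) the caller's dynamic environment and (2) its operands as raw,
// unevaluated program text.
type Operative = fn(&mut Env, &[Expr]) -> Expr;

fn lookup_operative(name: &str) -> Option<Operative> {
    match name {
        // `quote` ignores the env and hands back its operand unevaluated
        "quote" => Some(|_env, ops| ops[0].clone()),
        // `define` treats its first operand as a name, not a value
        "define" => Some(|env, ops| {
            if let Expr::Sym(name) = &ops[0] {
                let v = eval(env, &ops[1]);
                env.insert(name.clone(), v.clone());
                v
            } else {
                panic!("define expects a symbol")
            }
        }),
        _ => None,
    }
}

fn eval(env: &mut Env, e: &Expr) -> Expr {
    match e {
        Expr::Num(_) => e.clone(),
        Expr::Sym(s) => env.get(s).cloned().expect("unbound symbol"),
        Expr::List(items) => match &items[0] {
            // Operatives receive items[1..] raw; an applicative would
            // evaluate them first (elided here).
            Expr::Sym(s) if lookup_operative(s).is_some() => {
                lookup_operative(s).unwrap()(env, &items[1..])
            }
            _ => panic!("unknown form"),
        },
    }
}
```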
@tobascodagama @kara_dreamer @mona @icefox @Azure What's held everything up, I think, is that there was a silly and wrong 'no-go theorem' published circa 1980 which he's now, laboriously, deconstructed to prove its falsity.
I say 'silly' because again, how is it not *obvious* that a macro is a function from environment + text to another function?
So a paper saying 'FEXPRS have no equality theory' *must* have been false.
@natecull @tobascodagama @kara_dreamer @mona @icefox The original FEXPR paper wasn't /false/, it just applied only to actual, factual lambda calculus. Shutt deconstructed it by splitting the lambda and making a new formalism.
SPLITTING THE LAMBDA! λ ⚛ 💥
@icefox @mona @kara_dreamer @tobascodagama @natecull And is only the lambda calculus because Church's publisher couldn't do Underlining.
Really.
Imagine a world where it's just called the _calculus_.
@Azure @icefox @mona @kara_dreamer @tobascodagama Weirdest thing of all is that lambda calculus was invented to do logic, but *nobody uses it for that*. Mainly because it didn't solve the original problem it was invented for, I think. So it was just sitting in the junk drawer.
I think there was a long time between lambdas being borrowed for Lisp and the calculus actually being proven 'good enough' in some respect.
@natecull @icefox @mona @kara_dreamer @tobascodagama Well, Church, Lord of the Lambda, and Turing, Archmage of Automata, proved it 'Good Enough' before anyone had a computer to compute with.
@Azure @icefox @mona @kara_dreamer @tobascodagama
Then of course there's Combinatory Logic and SKI Calculus, which is equivalent in power to Lambda Calculus but doesn't use variables. The functional stack languages Joy and Factor are based on it (eg, Lisp crossed with Forth).
CL was also invented to do logic, and not used for that, which is kinda annoying.
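(For reference, the whole of SKI is three rewrite rules:

    K x y   → x
    S x y z → (x z) (y z)
    I x     → x

and I isn't even primitive, since S K K behaves exactly like I.)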
@tobascodagama @kara_dreamer @mona @icefox @Azure Thing about Lisp is that it's actually not purely based on Lambda Calculus. It's Pair-based Lists plus Symbols plus Lambdas. All three of which have subtly different semantics.
For example, you might naively think that a Symbol is a function of no arguments that returns a value, and you'd be right. But Lisp doesn't treat a Symbol like that.
@Azure @icefox @mona @kara_dreamer @tobascodagama And part of the reason why lambda calculus doesn't really grok 'environments', and why Shutt had to invent his own calculus that does, is because of that Lambda vs Symbol divergence. Environments belong to the world of Symbols (though they're also part of the definition of Lambdas), but they're 'things that exist in the mind of the mathematician' rather than objects in the calculus.
@tobascodagama @kara_dreamer @mona @icefox @Azure And it's a bit the same with Type Theory, which I guess comes similarly from the mind of Church. Since he and others were working in the days before actual computers, when 'computing' meant a person with a pencil and paper, a 'type' was again a not-very-clearly-defined notion: an abstract thing in the mind of the mathematician, not an actual object on the page.
@natecull @tobascodagama @kara_dreamer @mona @icefox Pair-based lists, at least, are a canonical thing in the Church encoding.
Symbols are a bit of a hack, yes.
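(For reference, the standard Church encoding of a cons cell, which is all a pair needs to be:

    pair = λx. λy. λf. f x y
    car  = λp. p (λx. λy. x)
    cdr  = λp. p (λx. λy. y)

so car (pair a b) reduces to a, and cdr (pair a b) to b.)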
@Azure @tobascodagama @kara_dreamer @mona @icefox 'Splitting the lambda' yes.
Prolog accomplished something similar in 1972 - if you stare long enough at how Prolog variable binding works, not only will you go blind, but you come to realise that lambda calculus sort of munges several steps together (declaring a variable, and binding it to an argument) that don't necessarily have to go together.
@Azure @icefox @mona @kara_dreamer @tobascodagama (It probably wasn't obvious, I guess, because *environments aren't first-class objects in lambda calculus*, so 'a function that takes an environment as a parameter' is difficult to express in a formalism which doesn't have an expression for that.
We kinda have a bit of a cargo-cult approach to lambda calculus, I think. It was really just a hack that was whipped up quickly.)
@natecull @kara_dreamer @mona @tobascodagama it also (sometimes) allows you to evolve your underlying data model while maintaining the original API contract. That's harder to do with code that reads the underlying fields directly and doesn't expect them to change (you end up with denormalized models, etc).
@tetron @kara_dreamer @mona @tobascodagama Yeah. There's also some very good arguments that we need some form of data hiding for security - crypto keys, etc.
To me an important thing is that we have a way of being able to definitely and formally state in a machine-readable way 'doing method X on object Y produces change Z or preserves property P', which is more than what type signatures can currently give us.
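A toy sketch of what 'method X on object Y preserves property P' could look like as an executable statement, in Rust (the type and the invariant are mine; a serious system would check this statically rather than assert it at runtime):

```rust
// Property P: the vector stays sorted across every method.
struct SortedVec(Vec<i32>);

impl SortedVec {
    fn insert(&mut self, x: i32) {
        let pos = self.0.binary_search(&x).unwrap_or_else(|p| p);
        self.0.insert(pos, x);
        // The machine-readable claim: insert() preserves sortedness.
        debug_assert!(self.0.windows(2).all(|w| w[0] <= w[1]));
    }
}
```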
@kara_dreamer @mona @tobascodagama @natecull I believe there are formal verification systems that let you do that. The challenge, of course, is that like test cases, complex constraints become programs in their own right that can have mistakes.
@tetron @kara_dreamer @mona @tobascodagama I agree that constraints are programs in their own right.
I tend to think that constraints should be *all* of the program - which is basically the Logic Programming approach. At the very least, constraints should be the *same type* of program as the rest of the program.
Then it's just a matter of 'one part of my program is correct, now I just need to make the rest line up with it'
@tobascodagama @mona @kara_dreamer @tetron
Ie:
There should, at all times, be *at least one piece* of our program that we know to be correct, or given, or required.
These parts that we know (can prove) to be correct should include:
* The correctness of the underlying compiler - I mean if nobody can guarantee that, we shouldn't be running it and it shouldn't have been shipped.
* The interface with the OS
* Our domain knowledge
@tetron @kara_dreamer @mona @tobascodagama
Then, given those three, we *ought* to be able to prove whether our implementation of the rest of the stuff matches what we know about our domain, and is required by the OS.
If we can't prove that our solution is correct, we need to keep iterating it until we can.
I mean, it'll either produce a correct calculation or we can't prove that it can, right? And if we can't prove... iterate.
@tobascodagama @mona @kara_dreamer @tetron This sort of approach is why I think our programming languages need to be flexible enough to specify domain knowledge - and knowledge about interfaces and constraints - directly (in the form of, eg, relations, or predicates).
We should be able to separate the parts of programming which are 'art' from the parts which are 'maths', and let the machines handle the maths.
@kara_dreamer @mona @tobascodagama @natecull The halting problem is the upper bound here: the constraint language has to be strictly less powerful than the actual code it describes. The abstraction is what makes it tractable.
@tetron @kara_dreamer @mona @tobascodagama I don't think the halting problem actually is relevant. We give it far too much credit.
The halting problem can be defeated by something as simple as *FOR loops*. That's what they were invented to do.
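Ie, a bounded loop terminates by construction, so no halting oracle is needed. A trivial sketch:

```rust
// The trip count is fixed before entry, so termination is a syntactic
// fact, not something you need to solve the halting problem to establish.
fn sum_to(n: u64) -> u64 {
    let mut total = 0;
    for i in 1..=n {
        total += i;
    }
    total
}
```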
@tobascodagama @mona @kara_dreamer @tetron Eg ---
If a computer can't prove that a given design is correct, *then a human can't either*.
If you, a human, can't demonstrate that your algorithm produces the correct answer, then *you shouldn't be using it*.
You wouldn't build a bridge if the load calculation crashed your computer partway through, would you? So why deploy software that does the same?
@tetron @kara_dreamer @mona @tobascodagama So the logical conclusion is that
* we CAN prove the correctness of at least a small, finite subset of programs *
eg '1 + 1 = 2' is provably true, given integer math axioms.
So start from there. Use that small finite subset to prove more.
Somewhere, we've lost the way. We've taken 'but the halting problem!' to be a kind of Gödel sentence on the programmer's mind that stops all thought.
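(Fwiw, that '1 + 1 = 2' really is machine-checkable; this is a complete proof in Lean:

```lean
-- Both sides reduce to the same numeral, so reflexivity closes the goal.
example : 1 + 1 = 2 := rfl
```
)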
@natecull @kara_dreamer @mona I have some clue about the general idea of Prolog, but no real familiarity with it. What are some of the misfeatures it has?
@duck57 @mona @kara_dreamer First, if you want to play with SWI Prolog (kinda the standard open-source implementation) there's a really neat webby version here: https://swish.swi-prolog.org/
I love that Prolog has a built-in database, so it's less of a 'language' and more of a 'place to put data and then rules about how to generate more data'.
but the 1970s-isms are a bit like FORTRAN:
* Terrible string handling
* Everything's one big top-level database
@kara_dreamer @mona @duck57 also
* It's supposedly declarative/functional but it's actually imperative. You can change the database at runtime (have to, to do serious work) but you can't, easily, represent the database as data itself. You sorta can, but you have to hack it up yourself
* It got higher-order functions kinda late in the day, and weirdly. It's not lexically scoped but dynamically scoped, and HOFs pass the *name* of the function, not the function itself.
@duck57 @mona @kara_dreamer Prolog's spiritual successor is probably Kanren (http://minikanren.org/), but that's less of a language and more a library, kinda heavily dependent on whatever language it's implemented in.
@natecull @kara_dreamer I've never quite trusted the "object" as a programming entity, myself, but then my idea of a good programming language is Fortran.