David Clark's MAX project sounds very similar to the vague idea that's been in my head for ten years.
If you want to understand some of my odd 'language requirements' in the sketches I've been posting here (eg: a system that can represent itself, full runtime reflection, no 'compilation' phase), his design document seems to be coming from a similar place.
I'm more bullish on S-expressions and cons cells as a universal data structure than he is (following Picolisp) but his top-level design priorities are the same as mine:
A system that combines the functions of an RDBMS (storing arbitrary data) and OOP (breaking data into 'spatially separate' content containers, or at least that's how I see it), while allowing any part of itself to be changed at runtime (so interpreted, and not statically typed)
These three principles seem fairly fundamental, to me.
A lot of the principles in MAX seem to be basically those of Smalltalk! It's kind of sad that each generation has to keep reinventing what OOP actually was, before C++.
Having said that, I've never used a Smalltalk system, and the last time I looked at Squeak I ran away screaming.
But it's the thought that counts, innit?
To me the important insights of Smalltalk are:
* everything is an object implemented in the same language, right down to the OS kernel
* messages can be caught and redirected
* every object can and should be changeable at runtime
What we got instead with C++ was 'objects exist, but only during one process's runtime; they are defined in an offline process called 'compilation' which is not controlled by objects, and they evaporate after the process finishes'.
We then invented COM/CORBA 'components' to provide half of the persistence that in-runtime-memory-only objects lacked - ie, the concept of a 'system registry' and 'classes and interfaces' that can be invoked from any C++-like language - but components don't do *data* persistence.
Surprisingly enough, it turns out storing data is something users like to do! But they can't use objects for it. So we give them filesystems or RDBMSes or email or HTTP calls, all with different syntax and semantics.
The insanity of our current dev model is mind-melting.
Like, in Android, today, you write your application in Java. Objects all the way down. Your entire window state and program state is objects.
Aaaaand then every time the user rotates the screen, your application literally restarts, so you have to manually save and restore your *objects* from a few tiny integers and strings in a registry-like datastore, because hacking up your own ad-hoc object deserialisation protocol is 'safe' and 'fun'.
Oh and did I say 'all of your program state is objects'? Well, yes, but not your program *configuration*, that's a bunch of properties in XML files, because .. .. ..
No of course we couldn't use objects, which are literally *sets of key/value pairs*, to persist and load configurations, which are *sets of key/value pairs*, why would you think we could do something so sensible?
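Just to make the point concrete: here's a minimal sketch (all names invented, nothing Android-specific) of how trivial config persistence becomes if you take 'an object is a set of key/value pairs' at face value.

```python
# Sketch: if an object is essentially a set of key/value pairs,
# persisting its configuration is just a dictionary round-trip.
# 'Config', 'save', and 'load' are hypothetical names, not any real API.

class Config:
    def __init__(self, theme="dark", font_size=12):
        self.theme = theme
        self.font_size = font_size

def save(obj):
    # An object's attribute dict *is* a key/value store.
    return dict(vars(obj))

def load(cls, pairs):
    # Rebuild the object straight from the pairs, no ad-hoc protocol.
    obj = cls.__new__(cls)
    vars(obj).update(pairs)
    return obj

stored = save(Config(theme="light"))
restored = load(Config, stored)
```

No XML, no hand-rolled deserialisation dance; the object and the datastore speak the same shape.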
This is not the objects Alan Kay had in mind.
And why do we hamstring objects like this? So we can't save and load them and use them as datastores?
Because they're Java objects, which means compiled objects, which means source code files and strict typing and...
Compilation. Just say no. Not even once.
The bigger problem, though, with the sensible-sounding proposition 'Everything in the system must be an object, no exceptions'...
... is that after nearly 40 years of object oriented programming *we still don't have a consistent definition of what an 'object' is*.
Is it encapsulation? Classes? Late binding? Messages? Inheritance? No matter what definition of 'object' you choose, there will be at least one *major* OOP language in widespread use today that completely violates that definition.
Eg: Even Java and JavaScript are basically Arnold Schwarzenegger and Danny DeVito from 'Twins'. Every design choice one language made, the other did the opposite.
Then there are Perl's objects, which need to be 'blessed' before you can use them safely, which tells you all you need to know to stay well away from Perl.
So I end up thinking, 'well, objects aren't canonically defined, but at least we have other known and agreed structural building blocks':
* Fixed-arity Codd relations! Uh, nope, they don't cope with unstructured data, that's why we use JSON and XML
* JSON! Nope, your keys have to be strings so it can't even be a proper dictionary
* XML! Only you like an insanely baroque data model, so nope
* Functions! good luck representing literal data, and they don't serialise
* Cons cells! They have terrible cache locality
.. But out of all these (and many more) cons cells at least are very well known, they provide a lot of arbitrary nested structure for a tiny and very regular model, they let you make functional edits, and maybe the cache locality thing is overblown?
And if we do that one dumb trick of letting one special symbol mark a 'term', we get something like typing in there too.
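Here's roughly what I mean, as a sketch (all names are mine, not from any particular Lisp): plain cons cells for structure, plus one reserved symbol that marks a 'term', which buys you something type-like for free.

```python
# Cons cells as a universal structure: two-slot pairs, nothing else.
def cons(a, d): return (a, d)
def car(p): return p[0]
def cdr(p): return p[1]
NIL = None

TERM = "#term"  # the one reserved 'this is a typed term' marker

def lst(*items):
    # Build a proper list out of cons cells.
    out = NIL
    for x in reversed(items):
        out = cons(x, out)
    return out

# A plain list: (1 2 3)
nums = lst(1, 2, 3)

# A 'typed' value: (#term point 3 4) -- the tag gives us something
# like typing without adding any new machinery to the cell model.
point = lst(TERM, "point", 3, 4)

def term_type(x):
    # The type of a tagged term is just the symbol after the marker.
    return car(cdr(x)) if isinstance(x, tuple) and car(x) == TERM else None
```

Tiny, regular, arbitrarily nestable, and edits are functional because cells are immutable pairs.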
So.. What if we had an entire OS that just was cons cell structure, down to the network and disk? Ie a modern Lisp Machine?
An immediate difficulty that comes to mind is that - because of that spaghetti cons structure - Lisp Machines tend to have to dump their entire memory space as a monolithic image, instead of breaking it down recursively into files, processes, etc. This doesn't seem like it would scale well.
Can we, then, get to something like an S-exp-y semantics for C-like structs? Maybe even something as dumb as: S-exps, but no dotted pairs, all cells are allocated inline, parens are tag bits?
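A crude sketch of that 'dumb' encoding (everything here is invented for illustration): nested structure flattened into one inline array of cells, with open/close tag values standing in for the parens.

```python
# 'S-exps but flat': no dotted pairs, all cells allocated inline in
# one array, with sentinel tags playing the role of parens.
OPEN, CLOSE = object(), object()

def encode(sexp, out=None):
    # Flatten a nested list into a linear run of cells.
    out = [] if out is None else out
    if isinstance(sexp, list):
        out.append(OPEN)
        for x in sexp:
            encode(x, out)
        out.append(CLOSE)
    else:
        out.append(sexp)
    return out

def decode(cells):
    # Rebuild the nesting by matching the tag cells.
    stack = [[]]
    for c in cells:
        if c is OPEN:
            stack.append([])
        elif c is CLOSE:
            done = stack.pop()
            stack[-1].append(done)
        else:
            stack[-1].append(c)
    return stack[0][0]

tree = ["point", 3, ["nested", 4]]
flat = encode(tree)
```

You lose pointer-sharing, but you gain contiguous allocation, so the cache-locality objection to cons structure mostly evaporates.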
The idea of recursive namespaces (or pointer spaces) seems very important for scaling the radical flexibility of Lisplike pointer structures even to the modern desktop, let alone the Web. If that's the only idea a future unified computing system took from OOP, it might be enough.
I think Ted Nelson struggled mightily with this very point in his ideas for Xanadu: How to allocate nestable memory addresses for an infinite shared memory space.
He patented his maths, so it didn't help anyone.
RAM alone, for the concept of 'personal computer' in my lifetime, ranges from 64KB (Commodore 64, big for its era) to 64GB (a crunchy high-end workstation today). Removable hard drives are in the terabytes; corporate stores are now in the petabytes to exabytes.
Can any one concept scale to address such data sizes? Only recursive namespaces, I think. In practice I guess we use tiers of incompatible cell-addressed storage technologies with different chunk and pointer sizes at each tier.
Another problem with objects is that, possibly by historical accident, they're very much imperative rather than functional or declarative. What this means is that although we can spin up VM/DNS/RDBMS clusters within minutes that would have made the 1960s NASA and NSA weep - we can't easily make and check *statements about*, or properties of, such systems. At least not in the same languages that we use to command them.
OOP is like a language with nouns and verbs but no prepositions.
And of course we *do* make statements and do logical operations on vast datasets!
But we don't tend to use OOP languages to *make these statements*. We use various 'data languages' like JSON or XML or RDF.
JSON in particular is odd because it's an object oriented format, just with the 'programming' part (function calls) removed. Wouldn't it maybe be useful if we could have objects that could embed 'references to' (as opposed to 'immediately doing something destructive to') other objects?
So this is the level of 'programming language' that I'm interested in at the moment: language in the sense of 'standard convention for describing and making arguments about, in a regular form, almost anything that exists' rather than 'machine into which I put very specific business data and it whirs and spins and out comes other immediately useful data'.
There are plenty of programming languages in the second sense. They are, most of them, not very good at the first sense; very few even try.
@natecull are you familiar with Prolog?
@natecull I want to read this thread, it looks interesting, but its vastness is defeating me. Will try again tomorrow.
By this I mean, in the OOP paradigm we can say:
X is a Y
X, do Y, and call the result Z
but we can't say
X stands in Y relation to Z
except by building a complicated *machine* called X that, when given a Y and a Z, spins its gears and gives us some kind of answer that may or may not be correct (but will probably be the right *kind* of thing we requested).
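A sketch of the contrast (all names illustrative): the imperative version is a machine you have to run to learn anything; the declarative version is the same fact stated plainly as data you can query from any direction.

```python
# Imperative: X is a machine whose gears you must spin for an answer.
class Person:
    def __init__(self, parent=None):
        self.parent = parent
    def is_child_of(self, other):
        return self.parent is other

# Declarative: 'X stands in Y relation to Z' as an inspectable
# statement. The relation is just data -- we can query it forwards,
# backwards, or count it, without building any machine at all.
parent_of = {("alice", "bob"), ("bob", "carol")}

children_of_alice = {c for p, c in parent_of if p == "alice"}
parents_of_carol = {p for p, c in parent_of if c == "carol"}
```

The object can only answer the one question its method anticipated; the relation answers any question you can phrase over its tuples.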
Maybe the very ideas of 'statements' and 'logic' don't work at scale? Yet they seem to work ok for science and literature.