The Object-Oriented Programming that can be implemented is not the *true* OOP.

True UML has never been tried,

Rational Rose can never fail, it can only *be* failed,

The Gang of Four were alone responsible for the downfall of the Patterns Movement,

etc

@natecull Well, in the other thread where I mentioned "true" OOP, I was mostly referring to this train of thought: wiki.c2.com/?AlanKaysDefinitio

Almost all popular OOP languages today fail on even the first item: everything is an object. Ruby is a notable exception (and Smalltalk, though it's not particularly popular).

@xnx38h @natecull In Python 2, the print statement wasn't an object. I don't have a go-to example in Python 3 (maybe they have eliminated non-objects entirely, though I doubt it).

Python is a fundamentally conservative and restrictive language. Much like Java, you can look at Python code and know right away that it's Python code. This isn't the case with languages like Ruby.

@urusan @natecull python 3 is kind of a fundamentally different language than python 2. python 2 had two types of classes, etc

python 3 has syntax elements that aren't objects (which is also true of smalltalk) but other than that, there isn't really anything that isn't an object

@xnx38h @natecull Hmm, investigating it further, it looks like even Ruby has a (very small) stable of non-objects, such as the "if" control statement.

@urusan @natecull yeah well there's gotta be *some* syntax, smalltalk syntax is just extremely minimal. but you can't call the ^ (return) in smalltalk an object either

@xnx38h @natecull While technically true, it can be taken so much further. Look at Lisp for instance.

@urusan @xnx38h

A control construct like 'if' could definitely be an object, as long as you had a syntax/semantics for passing in unevaluated (or partially evaluated) code blocks and environments.
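
Something like this sketch, in Kernel-ish notation (untested; assuming Shutt's $vau, which binds the unevaluated operands plus the caller's environment, and bottoming out on a primitive $if):

($define! $my-if
  ($vau (test then else) env
    ; only the test is evaluated eagerly, and only in the caller's environment;
    ; exactly one branch is then evaluated, the other stays inert text
    ($if (eval test env)
         (eval then env)
         (eval else env))))

Here $my-if is an ordinary first-class value: you can pass it around, store it in a data structure, and apply it later.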

This is why I'm interested in John Shutt's 'vau calculus', which is basically Scheme reinvented with 'macro' as the basis rather than 'function' (not actually macros, more just 'a function which receives its caller's environment and a list of unevaluated arguments').

I reckon OOP could do the same.

@natecull @urusan @xnx38h Fexprs were around in Lisp long before macros were. The problem is balancing expressive power with understandability (for humans and for programs, i.e. compilers). In the end, unfolding a macro definition at compile time into a pre-established set of special forms is a lot easier than predicting what a fexpr will do at runtime.

(You can build fexprs out of macros relatively easily if you really need to, something like (macro fexpr-apply (f &rest args) `(,f ',args)).)
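
To see why the prediction is hard: nothing stops a fexpr from deciding at runtime whether its operand is code or data. A sketch (untested; `runtime-flag' is a made-up predicate):

($define! $maybe-eval
  ($vau (x) env
    ; whether x gets evaluated at all depends on a value that only
    ; exists at runtime, so there is no compile-time unfolding of this
    ; into a fixed arrangement of special forms
    ($if (runtime-flag) (eval x env) x)))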

@Vierkantor @urusan @xnx38h

"In the end, unfolding a macro definition at compile time into a pre-established set of special forms is a lot easier than predicting what a fexpr will do at runtime."

I believe constructing an argument against this premise (that fexprs are hard to predict) is what John Shutt's PhD was all about.

Admittedly I don't have the ability to parse exactly what vau-calculus proves (though it's basically lambda calculus with a model of 'program text', I think).

@Vierkantor @urusan @xnx38h

This is his PhD here:

web.wpi.edu/Pubs/ETD/Available

I wish I could understand what it lets us prove about fexprs, because I think fexprs (with suitable pre-evaluation of them) are a much more tractable basis for macros than anything else.

They have to be *lexically scoped*, of course, which Lisp's original fexprs weren't.

@natecull @urusan @xnx38h I understand from scrolling through that thesis before that enforcing lexical scoping is one ingredient, but other things also need to be forbidden (or strongly discouraged), such as updating bindings outside the current scope, procedural macros (although: just use a fexpr), and quasiquotation.

In my own hobby Lisp implementation, the reason to avoid fexpr-like constructs is that they require a lot more analysis in the compiler. Not impossible, just more work.

@natecull @urusan @xnx38h Also, being able to let-bind (explicitly) dynamic variables is really useful (just ask Haskell people about the Reader monad). Combine it with some form of continuations and you get algebraic effects. Combine it with `$vau' and your ability to reason about the code gets wrecked.

@Vierkantor @urusan @xnx38h

I am afraid to ask Haskell people about Monads because I feel that they will ask me to attend a service

@natecull @urusan @xnx38h Right. I'm one of those people for whom "a monad is just a monoid in the category of endofunctors, what's the problem?" is the actual way I think about monads.

@Vierkantor @urusan @xnx38h

I had a monad in my endofunctor once, it was very painful

@Vierkantor @urusan @xnx38h

One thing that I don't understand about Shutt's Kernel language is why he doesn't use a syntactic mechanism for fexpr evaluation. It seems to me that that would solve some problems.

E.g., since he loves '$' as a pseudo-syntax to mark fexprs: why not make $ actual special syntax, e.g.:

(foo bar) applies function foo but

($ foo bar) applies fexpr foo (silently inserting the caller's environment as the first parameter)
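
The evaluator dispatch I'm imagining, in Scheme-ish pseudocode (untested; 'lookup' is a made-up environment accessor):

(define (my-eval expr env)
  (cond ((symbol? expr) (lookup expr env))          ; variable reference
        ((not (pair? expr)) expr)                   ; self-evaluating datum
        ((eq? (car expr) '$)                        ; ($ f a b): fexpr call --
         (apply (my-eval (cadr expr) env)           ; evaluate only the head,
                (cons env (cddr expr))))            ; pass caller env + raw operands
        (else                                       ; (f a b): function call --
         (apply (my-eval (car expr) env)
                (map (lambda (x) (my-eval x env))   ; evaluate every argument
                     (cdr expr))))))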

But my brain is not big enough to argue why.

@natecull @urusan @xnx38h I think your `$' operator is called `unwrap' in Kernel. Basically you can pretend that each applicative (read: function) is actually of the form `(wrap $f)', where `wrap' is a primitive applicative, which has the effect of evaluating the arguments before passing them to `$f'. And `(unwrap (wrap $f))' is equivalent to just `$f'. Presumably in the actual implementation, for some definitions the `wrap'ped version is primitive and for some the `unwrap'ped version is.
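
Concretely (Kernel, untested; `$first'/`first' are names I just made up):

($define! $first ($vau (x . #ignore) #ignore x)) ; operative: returns its first operand, unevaluated
($define! first (wrap $first))                   ; applicative: operands get evaluated first
($first (+ 1 2) (+ 3 4))                         ; => (+ 1 2), the raw operand
(first (+ 1 2) (+ 3 4))                          ; => 3, the evaluated argument
((unwrap first) (+ 1 2))                         ; => (+ 1 2) again: unwrap recovers $first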

@Vierkantor @urusan @xnx38h

Yeah, I don't get why he does that wrap/unwrap business at all. Makes no sense to me. Doesn't seem the simplest or most useful route.

Using $ as syntax to explicitly mark the evaluation of a fexpr would, I think, give both the compiler AND the human programmer a much more targeted warning that Here Be Dragons.

@Vierkantor @urusan @xnx38h

Like, if you see $? Then you know to be careful: everything following is HIGHLY dependent on exactly what operative follows. This is a macro. Treat nothing as if it's evaluated, or, if evaluated, as if it's evaluated in the current scope. Also, be aware that you're giving that operative full read access to your current environment. Security risk.

But sometimes you really, really need expressions which are quoted, or which aren't evaluated in the current scope.

@natecull @urusan @xnx38h Right, and you can implement the `$' as a macro (assuming `quote' exists), rendering the whole exercise pointless. Presumably that is not the definitive reason why it's not an operator, but it sure would motivate me if it were my thesis :-)
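
Something like this (untested; assuming a `current-environment' form and the same `macro' facility as before):

(macro $ (f &rest args)
  ; ($ f a b) expands to (f (current-environment) 'a 'b):
  ; an ordinary call passing the caller's environment plus quoted operands
  `(,f (current-environment)
       ,@(map (lambda (a) (list 'quote a)) args)))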

@Vierkantor @urusan @xnx38h

I mean, the point of the exercise is that it's a language built from the ground up; it is not built over an existing macro facility. It assumes that there is no built-in macro facility. To me that's a plus, because the semantics of all existing macro facilities are weird and horrid, and so I don't want them in my base language.

@Vierkantor @urusan @xnx38h

I.e., in a Scheme-like language with my $ operator as syntax, there would be no other way of defining macros. If you see an expression not starting with $, you *know* that it is a function application.

This would not be true of all existing Schemes, unless one forcibly removed/disabled the built-in macro facilities.

@natecull @urusan @xnx38h One claim that I don't have time to go into tonight: macros are fine, it's quote/unquote that needs to be fixed. (I believe Racket understands this.)

More specifically, 'a should not be a pure computation returning a symbol with name "a", but an effectful computation that creates a new symbol object also carrying the current binding of "a".

@natecull @urusan @xnx38h For example:

(define x 1) (macro foo (y) `(+ x ,y)) (let (x 2) (foo x))

When expanding (foo x), we evaluate `(+ x ,y). Because we are in the scope of `foo', we remember that the template's `x' refers to the `x' of (define x 1), not of (let (x 2)). And the opposite for the `x' we substitute in for `,y'.
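
So, with an explicit renaming I'm inventing purely for illustration, the intended reading is:

(define x1 1) (let (x2 2) (+ x1 x2)) ; => 3: x1 is the define's x, x2 is the let's x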

@Vierkantor @urusan @xnx38h

Hmm. Leaning hard into the Lisp idea of 'symbol', and letting symbols be fully unique objects, linked to their environment, which can't be recreated at all from their printed string name, is interesting, but it goes against my preference for symbols to be just plain text atoms, so that what you print is exactly what you get.

It's a philosophical choice, but I prefer the Prolog 'atom' to the Lisp 'symbol'. It's also why I don't like Racket's 'syntax' objects.

@Vierkantor @urusan @xnx38h

Basically I'm coming from the angle that if you don't have a clearly defined 100% correct fully-roundtrippable serialization for every in-RAM object, then you don't have a language, or at least not one that's useful for describing all of your computational objects.

At some point you're gonna have to export your objects out of RAM to another system, and when you do, you're going to want a serialization syntax. It might as well be the same language that created them.

@Vierkantor @urusan @xnx38h

But most programming languages today are very happy to be one-way languages, which only feed data into RAM, which then becomes a black box out of which nothing (function- or type-shaped) ever emerges.

Which is fine. Until it's not fine.

@Vierkantor @urusan @xnx38h

But symbols (or any other arbitrary object, including, say, integers) being marked up with extra 'invisible' data provided by their lexical scope or computational history is still a valid idea. It really needs a GUI to explore well, though, and that's what gives me pause. There would need to be some clearly standardised way of exporting that invisible data in some visible form, imo. Otherwise it's a big data-loss black hole.

@natecull @urusan @xnx38h More seriously, it seems that the main motivation is to make metaprogramming first-class and orthogonal. For first-classness, the thesis compares with macros, which must appear by name at compile time, while you can decide at run time which fexpr to apply.

For orthogonality, they compare "the head gets evaluated, then applied to the tail" with "if the head is not one of these special forms, each element of the list is evaluated, then the head is applied to the tail".
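
For the first-class part, something like this is legal (Kernel-ish, untested; `runtime-test' and the slow-check thunks are made up):

($define! $pick ($if (runtime-test) $and? $or?)) ; choose an operative at runtime
($pick (slow-check-1) (slow-check-2))            ; which operands even get evaluated
                                                 ; depends on the choice made above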

@Vierkantor @urusan @xnx38h

Right. Removing the idea of 'compile time' as a special privileged space is important to me, because on the Internet, there is no such thing as a generalised 'compile time'. Everything happens at runtime.

So we need a useful way of thinking about 'compilation-like' processes as things we can do at runtime, over runtime-varying data.

@Vierkantor @urusan @xnx38h

I have absolutely no idea what Shutt was thinking of by allowing this. It doesn't make any sense to me, given how I think of fexprs. Why should applicatives (functions) have *any* access to the dynamic environment????? Seriously, why???

@Vierkantor @urusan @xnx38h

Again, this is why I'd really really separate out $ as a top-level syntactic thing.

But, I don't have the big brain calculus tools for explaining why this would help. It just seems the obvious right thing to do, to me. Maybe it's obvious yet wrong.

@natecull @urusan @xnx38h Doesn't this have to do with how variables get shared between calls? Like in the $define! count example on page 94, if $let has its typical definition in terms of $lambda:

($let ((self (get-current-environment)))
  ($lambda () ($set! self counter (+ counter 1)) counter))
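
My reading (untested): with `counter' defined in the surrounding environment, each call mutates it through `self':

($define! counter 0)
($define! count
  ($let ((self (get-current-environment)))
    ($lambda () ($set! self counter (+ counter 1)) counter)))
(count) ; => 1
(count) ; => 2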

(I somehow have a strong urge to label this code block as "Statements dreamed up by the utterly Deranged" on one of those "Stop doing PL Theory" memes, as in mastodon.vierkantor.com/@Vierk)
