This is a subtoot to nobody in particular other than the imaginary Gang of Four fan in my head who - like this web commenter from four years ago - keeps saying "but patterns are fuzzy human things, there's no way you could construct a language to specify them..."
Nope! Because we had exactly the same discussion with the idea of "algorithm", except we had that discussion in 1958.
It's just that OOP languages in 2020 - like pre-ALGOL machine languages - are really bad at specifying patterns.
The upside to this idea is: what if we invented a way (like ALGOL, or like Lisp) of specifying (and computing) 'patterns'?
I think a "computable pattern" could be something like:
* a relationship between a whole bunch of objects (maybe like a Prolog predicate that you could at least test to say 'does this relation hold')
* a macro-like template that could construct a whole bunch of objects
* maybe a sort of meta-object that can contain other objects, but without a hard boundary?
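A minimal sketch of the first two bullets, in Python with invented names: a relation you can actually *test* ('does this pattern hold between these objects?') plus a template that *constructs* objects satisfying it, using the Observer pattern as the example.

```python
# Hypothetical sketch of a "computable pattern": a testable relation
# plus a template that constructs objects satisfying it.
# All names here are invented for illustration.

def observes(subject, observer):
    """Relation test: does the Observer pattern hold between these two?"""
    return observer in getattr(subject, "observers", [])

def make_observed(subject_cls):
    """Template: wrap a class so instances are wired for Observer."""
    class Observed(subject_cls):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.observers = []
        def notify(self, event):
            for obs in self.observers:
                obs(event)
    return Observed

class Counter:
    def __init__(self):
        self.value = 0

ObservedCounter = make_observed(Counter)
c = ObservedCounter()
seen = []
c.observers.append(seen.append)
c.notify("tick")

# The pattern is now a checkable relation, not just prose:
assert observes(c, seen.append)
assert seen == ["tick"]
```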
Give it up for Nate, keepin' it real!
I shouldn't have wandered into Twenty Sided Tales because now I have to read it all, again
But actually the idea that "patterns are to OOP as algorithms are to functions" seems like a pretty good and useful idea. Cos what we usually call "functions" in programming are really "algorithms for computing the value of a function at one point"
(I remember reading about, um, I forget the name but it was a 1950s-era interactive graphing calculator app and for it, a function was literally just "a set of input and output numbers". Just a very very different approach to today's.)
@natecull @thegibson The design patterns idea comes from real life architecture, specifically the work of Christopher Alexander, and while a powerful concept I am not sure if design pattern-oriented programming would actually work out.
After all, in physical architecture the basic building blocks are still the physical walls, doors, windows, etc. How would you express architectural blueprints in terms of architectural design patterns?
<< How would you express architectural blueprints in terms of architectural design patterns? >>
That's a very good question, since Alexander's work is rather more... mystical than mathematical, and his pattern "languages" aren't formal languages but rather loose rules of thumb.
But, I would start by expressing blueprints as collections of objects (as they are already in CAD systems) and then mathematical relations between those objects.
And I'd look for some kind of recursive / iterative algorithm for taking a pattern - expressed as a template or macro of some kind - and modifying it with more detail.
I don't see why this would in principle be impossible. It might produce buildings that are nasty in an undefined, gut-feeling-is-wrong kind of way. Or it might be extremely unimaginative and only be able to iterate on well-defined patterns.
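A toy sketch of that recursive refinement idea: a blueprint 'pattern' is either a primitive element or a template that expands into finer patterns, and repeated expansion yields the concrete elements. The pattern names and templates are invented for illustration.

```python
# Toy sketch of recursive pattern refinement: expand a pattern
# template until only primitive elements (walls, doors, ...) remain.
# All pattern names here are invented.

TEMPLATES = {
    "house":    ["entrance", "room", "room"],
    "entrance": ["door", "window"],
    "room":     ["wall", "wall", "window"],
}

def refine(pattern):
    if pattern not in TEMPLATES:      # primitive element: stop here
        return [pattern]
    out = []
    for sub in TEMPLATES[pattern]:    # template: expand recursively
        out.extend(refine(sub))
    return out

assert refine("house") == ["door", "window",
                           "wall", "wall", "window",
                           "wall", "wall", "window"]
```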
Like, naively, one might think that a 'pattern' is roughly like a 'frame statement' in AI. Which is sort of like a kind of proto-OOP.
A knowledge-representation system like Cyc might be able to express some of the relations involved? We've lost the trail of some of that old-school 1970s expert-system and rule-based AI, under the barrage of neural network stuff, is kinda my point. There might be some good stuff there yet to mine.
OOP has objects and templates already. Modern rules engines exist.
I feel like a lot of what you are talking about has been progressing, it's just all boring enterprise development technology that nobody pays much attention to unless they have to for their job.
These ideas *may* have been progressing in various separated experimental systems, but they haven't been successfully integrated back into widely successful languages yet. There are important parts missing. More than important parts, there are important *concepts* required to wire parts together, and discuss that wiring-together, that are missing.
OOP has objects and actions those objects do. It doesn't provide many concepts for describing *relations between* objects.
Ie, if you have a suitably simple and powerful concept of 'relation', at a stroke you don't need custom mechanisms and syntaxes for a whole host of language features, since all of these would be instances of 'a relation exists between an object and another object'.
Then, if we had something like 'macros for relations' we could say 'relationX(object,object,object)'
and that would imply whole pages of boilerplate OOP notation.
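A rough Python sketch of what a 'macro for relations' might look like, with invented names: one declaration produces an asserter that does the two-sided wiring (forward link plus back-link) that OOP code normally spells out by hand.

```python
# Hypothetical 'relation macro': declare_relation(name) returns an
# asserter that wires a forward link (a.<name>) and a back-link
# (b.<name>_of) in one call. Names are invented for illustration.

class Thing:
    def __init__(self, label):
        self.label = label

def declare_relation(name):
    def relate(a, b):
        if not hasattr(a, name):
            setattr(a, name, set())
        getattr(a, name).add(b)          # forward link
        inverse = name + "_of"
        if not hasattr(b, inverse):
            setattr(b, inverse, set())
        getattr(b, inverse).add(a)       # back-link
    return relate

contains = declare_relation("contains")
box, coin = Thing("box"), Thing("coin")
contains(box, coin)   # one call replaces the usual two-sided boilerplate
assert coin in box.contains
assert box in coin.contains_of
```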
I think I'm basically arguing for "Prolog, but, with a more uniform syntax including the ability for entire chunks of rule-base to be nested inside single relations, so that it can pre-process stuff and act like a compiler while it runs"
This is probably an extremely embarrassingly trivial problem once someone figures out how to do it (or even that they should), but there's enough weird curly bits in Prolog that this hasn't yet happened. Not even in Kanren, I think.
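A toy sketch of the uniform-syntax idea: Prolog-style terms as plain nested tuples, with variables and a tiny unifier. Because rules are themselves just terms, a program can pre-process or rewrite its own rule-base, which is the 'acts like a compiler while it runs' part. Everything here is invented for illustration.

```python
# Toy sketch: Prolog-ish terms as plain nested tuples. Variables are
# strings starting with '?'. Rules are ordinary data, so a rule-base
# can be nested inside a term and manipulated by the program itself.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# A rule, including its body, nested inside a single term:
rule = ("rule", ("ancestor", "?x", "?y"),
                (("parent", "?x", "?y"),))
query = ("ancestor", "alice", "?who")
s = unify(rule[1], query, {})
assert walk("?x", s) == "alice"
assert walk("?y", s) == "?who"
```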
You might also be excited by Apache Jena, which is part of the Semantic Web stack: https://jena.apache.org
You might also be very interested in clara-rules, which is a rule engine built in Clojure (a dialect of Lisp) http://www.clara-rules.org
Almost all modern rules engine implementations are on the JVM, because that's where the audience for these systems is.
@natecull @thegibson Erlang and Elixir are not JVM languages, and I have been quite impressed with Erlang in the past. I didn't realize until just now that Elixir was a modern Erlang, built atop Erlang similar to how the newer JVM languages (Clojure, Kotlin, Scala, Groovy, etc.) were built atop Java.
Wikipedia lists Erlang as being influenced by Prolog.
Erlang's main claim to fame is that it's designed to never go down, and it's highly concurrent, which is useful.
@natecull @thegibson Apache Jena is fascinating, but like everything else Semantic Web, I'm not sure just how practical it is. The whole Semantic Web is crazy-abstract and kind of a hot mess. The Semantic Web also uses XML as its base format, so *everything* is in XML, which is unpleasant to work with
On the other hand, I think it may be the closest thing to what you're thinking of. Everything is just RDF relationships. Plus, you can plug into the whole Semantic Web
@natecull @thegibson Even if you are anti-JVM, you should look at clara-rules (and Clojure) for inspiration. If you're okay with the JVM, you might find it immediately useful. It's also the easiest system of the bunch I mentioned to learn, especially if you know a Lisp dialect already.
It's a Lisp rules engine, and it's designed for programmers instead of domain experts, unlike many other rules engines. It's also a very practical system, with real world successes.
Basically what I'm looking for is a small and coherent programming *paradigm*, surfaced as a language, rather than an application.
Yes, saying "Semantic Web" is getting warm. But what I've been looking for, around 15 years now, is: what language lets me write a (small, desktop-sized) program, and its data, *as* a semantic web?
My touchstone is the language Inform 7 (and now Dialog). So a Prolog, but a Prolog for the 21st century.
Inform 7 is why I started feeling that something is badly missing from our idea of "programming language". Because I7 is so baroque and weird and does so many things in its syntax that are hard to even *describe* in mainstream languages.
I'm not saying that I7 is a *good* language. I think Dialog is a much cleaner rewrite of what I7 was trying to be. But it's a very exciting language in just how hard it commits to "simulation as literate programming in English".
Enterprise rule engines are... ennh. Really not what I'm interested in, *because they aren't small, coherent paradigms*. They're a bunch of wildly different paradigms all smashed together. That right there - "disjoint paradigm sprawl" - *is* the problem I'm interested in solving. Does that make sense? I want to find the smallest set of concepts that can implement data/configuration/rules/algorithms. "Semantic web" *might* be that, but not in its current form - too large.
Like I'll know how to recognise a sufficiently correct "rule engine" when it's done: when the engine, itself, can be implemented cleanly in itself. Lisp and Prolog can do that. If the engine can't be implemented in itself, then it's not yet correct, it hasn't found the most fundamental conceptual units of the knowledge modelling problem - there are obviously important kinds of logical relationships (the kind that are needed for building software) that it doesn't implement.
@natecull probably because of the fact that there are several different ways of approaching quote-unquote object orientation, and no solution would satisfy all of them :p
@natecull I don't know if I've mentioned my research to you before but it seems like you might find it interesting. It's just that in a way: structural semantic computing, a sort of hypergraph of objects that unifies functionality and abstraction into a sort of generalized engine of thought, i.e. externalizing ideas, their constituent relationships, and their interactions like existing tools (web, printing press) have done for linguistic memory
@natecull I've seen you talk a lot about the lessons we've failed to learn from the bright optimistic and less enterprise early days of OOP, and it seems like you're on the path toward a lot of the same conclusions I've reached. I've been unfortunately pretty unproductive of late but I have a complete runtime and all for the work I mentioned, I just haven't taken the step of applying it and experimenting with it due to some mental health issues. It's good to see someone thinking similar things
@natecull I'm desperately trying to get my life back together, not for my own sake so much as for my research, and having a space for it to grow as a conceptual space outside my own mind and self would be wonderful.
@zens @natecull maybe? I personally don't really map my ideas onto spatial or visual structures and feel that doing so would be reductive of *my* understanding, but other people process information differently so it might well be suitable for them. The main problem is probably that whoever that is would have to
1. care about my work
2. be able to get a cogent explanation out of me in a way that is transferably intelligible
explaining technical or even structural parts isn't so hard
@zens @natecull the problem is that none of that would help explain anything but the application to existing paradigms of software use and human-computer interaction, which I find ultimately uninteresting, and there is nothing worse than constraining creativity/possibility by misleading people through reductive examples. I have no idea how to consistently and transferably explain to people why the *problem space* is so much bigger than they think it is, I don't really care about the existing one
@syntacticsugarglider suppose you got a working demo done tomorrow. who would be the target audience for this? developers? power users? my nana?
That sounds really interesting!
Yes, "semantic computing" seems to describe what I'm thinking about. Basically just trying to capture knowledge as connections between things.
(but if we can do it in a way that's a bit less long-winded than, say, RDF or JSON-LD, and also chunk all that big messy soup of links into boxes of some kind, that would be nice)
"A collection of one-way links between things" is very close to what an object is, or at least a JSON-ish key-value hashmap.
I think links do need to be one-way because often we only control one end of any conversation. Someone makes an assertion that "X Z-links to Y" (or x.z.y=true in object notation), and that assertion may be wrong but it's still a true fact that they made that assertion. All of science and literature is links like this.
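A small Python sketch of that idea, with invented names: every link records who asserted it, so a claim can be wrong while the fact that it was made stays true.

```python
# Sketch: one-way links where every link records its source.
# "X Z-links to Y" may be false, but "source S asserted it" is still
# a fact. The store is just a list of 4-tuples; names are invented.

assertions = []

def assert_link(source, x, relation, y):
    assertions.append((source, x, relation, y))

def asserted_by(source):
    return [(x, r, y) for (s, x, r, y) in assertions if s == source]

def links(x, relation):
    # every claimed target, regardless of whether the claim is correct
    return [y for (s, xx, r, y) in assertions
            if xx == x and r == relation]

assert_link("paper-A", "smoking", "causes", "cancer")
assert_link("crank-B", "vaccines", "causes", "autism")

# The second claim is false, but the fact that crank-B made it is not:
assert asserted_by("crank-B") == [("vaccines", "causes", "autism")]
assert links("smoking", "causes") == ["cancer"]
```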
We're so close! Yet so far.
@natecull you don't need to have one opinion on how to embed any of this, the structure of the graph can be implicit in a bunch of disparate query systems that just satisfy some much more minimal abstraction. the biggest possible mistake is trying to be right for all time. plus, the satisfiability of a relation needs to be a programmatic thing for any of the more interesting general conclusions to emerge, which doesn't work with any static data format
@natecull noting that a source of truth and a trusted source of truth are not the same is important, it's good you've recognized that. however, that's not a special case. nothing should be fixed. all the meaningful conclusions I've reached have come from trying to create the maximally general system. I definitionally haven't achieved that, but I think I have some interesting contributions at least
@natecull but basically the way you get the best outcomes is by reducing your axioms as far as possible, basically to the minimal set of fixed touchpoints that make it possible to find a path to interoperation in a programmatic fashion
beyond that, things like network protocols, interface paradigms, data formats, etc. are decisions that should never be made final and should never be singular
an ecosystem should be diverse but intercompatible for the best outcomes
@natecull and that's all totally feasible if you take relations/data/functionality all just as lenses into an abstracted effective single computational space of interchangeable objects. you can have countless different approaches to the implementation details coexisting in one space, each exchangeable behind their referential transparency. This is all quite achievable, in fact: I've achieved it. https://github.com/noocene/protocol there's a sore lack of docs but here's a reference impl
@natecull this is just one technical piece of a larger project but it's an especially important abstraction, referential transparency over isolation boundaries **without** fixing data format, transport, the set of interoperable types, etc. is crucial
then you can trivially deal with the problem that actors have incomplete knowledge and different relations just like humans do
build narrow conceptual necks that are analogous to interpersonal communication and everything works together
Hmm. Can you summarise what your key concepts of Noocene's "conceptual neck" are?
Eg I started with the idea that "we all know that objects can model everything" then found that "object" was not a well-defined idea. Then I went to "well relational tables then? Nope" then "well surely RDF triples can model everything" and then came slowly to the conclusion that 'nope, there are non-triple logical relations too" and from there to sort of where I am now: "Prolog terms, basically, but not assuming all of Prolog".
@natecull right, so all of those things can fit into the model I'm working in, because my model is basically just the underlying model, whatever it is, that exists across like... predicate-based embedding of semantics a la logical languages, traditional relational graphs, object models, etc.
There's no data model regardless, it's an emergent system though I'm likely not adequately describing how. Any technical approach just needs consistency, then you get unity across implementations for free
@natecull I guess in practice most of my reference implementation looks like ocap where the capabilities can abstractly be remote services, interoperable untrusted microcontainers running locally or on managed infrastructure, something like a crdt, or any other source of truth/literally anything that satisfies a type
and you build structures organically out of that, mostly by writing typeclasses that convert some things to some other things, those things mainly being other typeclass objects
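A rough Python analogue of that 'anything that satisfies a type' idea (the thread's actual reference implementation is in Rust): a structural `Protocol` as the capability type, with two unrelated backends standing in interchangeably. All names here are invented.

```python
# Rough analogue of the 'typeclass object' idea: a capability is
# anything satisfying a structural type, whether backed by local
# state, a remote service, or anything else. Names are invented;
# the reference implementation discussed in the thread is in Rust.
from typing import Protocol, runtime_checkable

@runtime_checkable
class Source(Protocol):
    def fetch(self) -> str: ...

class LocalFile:
    def __init__(self, text):
        self.text = text
    def fetch(self) -> str:
        return self.text

class FakeRemote:
    def fetch(self) -> str:
        return "from the network"

def shout(src: Source) -> str:
    # A 'conversion': consumes any Source, produces something else.
    return src.fetch().upper()

assert isinstance(LocalFile("hi"), Source)
assert shout(LocalFile("hi")) == "HI"
assert shout(FakeRemote()) == "FROM THE NETWORK"
```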
@natecull this is just one implementation but it's already fully viable because it's not actually any technical decision that is responsible for constraining the scope of what sorts of structures can be embedded
That property is *absolutely crucial*
If anything were to be the technical ethos of this work it would be the statement that "one should not have to get things right the first time or the nth time". Preserving meaning in context of new things removes bounds on creativity
@natecull clarification: none of this is special cased. That's the point of that protocol crate. You can pass in a typeclass object from literally anywhere and it will be perfectly referentially transparent. There are no leaks, and yes it solves the error handling problem of fallibility across machines or other isolation boundaries in a type-safe way. This is a bit like something like Goblins but drastically less opinionated, things like promises (futures) are just another implementation (1/2)
@natecull and it doesn't even have a protocol defined at wire level, you can plug in any existing serialisation format or even define something that works in some sort of weird world where serialisation is no longer a concept. what does that mean? I don't know. But I shouldn't have to. The only thing that's standardized is the sort of vague design patterns of how to break things up into ordered messages in an event channel structure that's pretty universally applicable to interactive comms
@natecull this is the Right Assumption as far as I can tell, because isolation boundaries to the extent I'm interested in erasing them are interactive, so the tool fits the use case to the maximal degree. Anything that isn't isolated or interactive is special and will have a special local implementation, though that implementation might (and likely would) be running in a local micro container and inter operated out of the container via protocol
@natecull at this point I've mostly described what my approach *isn't* and not what it *is*, and that's actually really important. The hard part about this isn't finding some perfect simple model that encapsulates all of human thought, it's recognizing the minimal guiding principles of how ideas are structured and getting out of the way so they can emerge from your system. This isn't a problem you design your way out of, it's a problem you need to solve by reducing assumptions as far as possible
@natecull I don't talk about my work nearly as much as I should, mostly because it's so important to me as to be my principal and almost sole source of meaning in life, to the point where my recent lapse in focus to work on it often makes me wish I could cease to exist
But I've tried to answer some of your questions and give context I think might be relevant, I think getting over fear instilled by spaces that don't value possibilities outside our current hellworld is important
@natecull I'm going to head to sleep now because I probably can't productively say much more without knowing how you will parse and interpret the things I've said so far, so let me know what further questions you have given how little I've really concretely described or precisely outlined
and hopefully I can share something meaningful in a way that retains and conveys the uniquely limitless possibility of all this that makes it so fundamental to who I am
Thanks for writing this and definitely make sure you get some sleep!
I think I still have many questions about exactly what Noocene *is* rather than isn't. You mention "typeclasses" which I think of as a fairly not-well-defined concept, so I'd like to understand more about what your idea of that means. I don't understand what a "protocol crate" or "leaf" is in this model either.
I agree that the important thing in a data model is to be as minimal as possible.
@natecull when I was talking about the existing reference implementation specifically, I was referring to Rust's trait objects by "typeclass objects", where that is a vtable-dispatched opaque object that exposes only existential implementation of a trait, which is essentially a Haskell typeclass
the "protocol crate" is the Rust crate titled "protocol", the library that provides the interoperability and erasure I was referring to
and a "leaf" is an abstraction nobody has decided to decompose or reduce
@natecull in terms of your last point, there are two options when it comes to "minimalism" in designing general systems
1. constrain your system in a way that affects modes of expression and confines them to one narrow space, and allow people to adapt. this is very good for encouraging certain types of creativity but not for entire worlds.
2. make your system as small and simple as possible *to get out of the way of extending it*, which is more difficult to do right but leads to better outcomes
@natecull there is no data model here. for something like serialized data at rest, none of this has *any* opinion on how you do that. after all, it's already a well-solved problem that has a lot of case-by-case variance.
the only opinion `protocol` has is the vague sort of way you should structure the operations that are used to transfer something *interactively* over an isolation boundary, and that is still far more technical and granular than any data model you've referred to, different scope
@natecull there is no data model or data structure/format that directly embeds the graph structure of the conceptual space, that is instead emergent/implicit from the way these tools are employed in practice
this leads to the feasibility and, really, inevitability of... higher-order abstractions, where one is more concerned with a *path* through conceptual space than specific acquisition of things satisfying certain queries
and the pieces of the path can be implemented and acquired in diff ways
"there is no data model here."
Um, forgive me, but I don't think it's possible to have *no* data model.
You must have *some* kind of... conceptual model of what your system is trying to achieve? What are the smallest units it breaks a program, or data, into? I'm still struggling to understand even that.
"there is no data model or data structure/format that directly embeds the graph structure of the conceptual space,"
But it's some kind of graph structure, then?