I have this emerging model of an alternative social computing metaphor, or goal, in my head. It’s still very tentative and prototype-like, but I thought I’d get it out here while I’m still turning it over in my head.

It’s mostly formed from Seth Schoen and Cory @doctorow’s idea of adversarial interoperability, Joanna @rootkovska’s Qubes, and @mntmn’s Interim and Reform projects.

along with (in a way that I haven’t been able to entirely compose together) CHERI cl.cam.ac.uk/research/security, and @cwebber’s work on Mark-Miller-like capabilities.

So most of our radical ideas about empowerment through digital technology have, so far, benefited from the general uplift of the last few decades of Moore’s Law and the spread of near-ubiquitous general-purpose computing.

The energy of that propagation has ebbed in the last decade, and has actively begun to work against those ideals (I have only an intuitive idea of those ideals, so don’t press me on that — assume for now they’re your favourite and most inspiring parts of the digital environment).

We have giant abstraction stacks of mostly frozen complexity, with only a slowly moving froth of innovation on top. And it’s mostly proceeding in a direction of increasing complexity, at the behest of the groups whose interests are aligned with the conservatism of the underlying stack.

Sorry, that sounds conspiratorial, and I didn’t mean it to. What I mean is that a successful tool or service builds on the browser, or Android/POSIX, say, or inside Twitter’s or Facebook’s ecosystem. Those environments are rich and complex, and they provide coherent and comprehensible abstractions for doing almost everything that their creators and sponsors want to do (and would like you to do) in the world.

Any attempt to provide an alternative vision has always had to offer that alternative while not re-inventing too much of the existing stack. Windows pushed DOS out of the way, but only because they were both balanced on top of the IBM PC.

GNU/Linux could build on those same IBM PCs, but built its new abstraction stack on Unix; the web browser grew up inside the 90s desktop GUI; etc. But you can’t uproot too much of the stack! Some of it is permanently caked-in, too many floors down.

This is the pathos of communities like Lisp, or Plan 9, or Xanadu or Smalltalk: to even prove they could succeed, they would need to dig down too deep, and then recreate everything above them — it quickly becomes too much work on both the computing and the cultural sides.

In earlier years, we were lobbying and picking and working on the abstraction towers we hoped would lead to a better world, but now it feels like those directions have been buried by the buildings built above them. You can think of this as co-option, but another way to see it is as a narrowing of options after a period of abstractions that tended toward general innovation - a post-Cambrian winnowing? Sorry for all the metaphors; I’m still trying to name and frame this.

I should speed up; this is all stuff you know so far. So the choice you have, I guess, is to create an alternative top-of-the-stack, or to *emulate* bits of the stack below (there’s some thinking to be had here about the reasons why one might want to emulate, and its failure states, but moving on...)

I don't know the name for this act, but adversarial interop will do for now. You wire yourself up to the existing abstraction framework, and pull it in a new direction. But you only do that to the degree that the abstraction fails to stop you, and to the degree that you can comprehend what the abstraction presents.

And we're now at a point where the abstraction tower is entrenched in ways that are difficult to circumvent, and we can't dig back down to the strata where we had some other directions we could go. (The image in my head here is when your game of Tetris is going *very badly*)

What I propose (finally!) is a project dedicated to allowing us to dig deeper and uproot more, rather than building alternatives on top of the present stack, or trying to roll back in time to a more pliant, simpler one.

Virtual machines are a good present-day example of this approach: they show the possibilities, and the challenges. Essentially, VMs engulf almost the entire software stack of an operating system. Something like Qubes runs them within its (fairly thin) hypervisor world, where it can wrap them in its own model.

VMs have to do a little bit of adversarial interop -- they don't just wrap around the code, they push past its boundaries a little too. They dig a bit deeper into the old abstraction. They'll emulate the machine, but they'll also work out details about individual applications, and bring that to the surface.

We mostly avoid doing this because, frankly, it's /really difficult/ and /extremely fragile/. Abstractions have interfaces, and those interfaces are stable.

Everything else is what the abstraction is *meant to be hiding*.

But we are at a moment where we *have* to dig deeper -- and perhaps all this computational power can assist us.

We have an abstraction in which *what we want* -- data, patterns, power -- is buried deep, deep below us, and we are trying to build tottering, tall, highly experimental alternatives of our own.

(This, you might notice, is also the challenge of alternative political systems. Socialism is tempting -- but you have to build it on capitalism. Anarchism looks promising, but you have to erase millennia of power relationships.)

But! Perhaps we can now use the complexity-wrangling tools of computation to dig deeper into those abstraction stacks, and comprehend what we need?

To be specific and concrete for once in this thread: Facebook's abstraction interface is its webpages. Parsing those to extract the data and services it gives us, in a form we can use in our alternative software, is a painful, fragile act.

So what could we use to make it less fragile?
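
To make that concrete: a minimal sketch of the fragile status quo, assuming hypothetical selectors that merely stand in for the real markup -- which the site owner can, and does, change at will:

```python
# A sketch of the fragility problem: scraping a hypothetical timeline
# page with CSS selectors. Every selector here is an invented stand-in
# for markup that the site owner can change at any moment.
from bs4 import BeautifulSoup

# Fallback chains paper over markup churn, but only until all of them rot.
POST_SELECTORS = ["div.post-body", "div[data-testid='post_message']"]

def extract_posts(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    for selector in POST_SELECTORS:
        posts = soup.select(selector)
        if posts:
            return [p.get_text(strip=True) for p in posts]
    return []  # markup changed again; the abstraction won
```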

I can run an existing app on my new computing environment, because I can -- given our current powers -- emulate it precisely. But how can my new computing environment *understand* what the app is doing, saying, processing?

It needs to be able to dig deeper: rather than a bunch of syscalls, we need to be able to recognise certain computations -- to turn the emulator's understanding of file and network operations into an understanding of photograph collections, contact updating, calendar sync.
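
A hedged sketch of what "recognising certain computations" might mean mechanically: folding a raw trace of file and network operations into higher-level events. The trace format and the rules below are invented for illustration; real recognition would need far richer signals (timing, data flow, byte patterns), but the shape is the same:

```python
# Hypothetical sketch: lifting a raw emulator trace of (operation, target)
# pairs into semantic events. The rules are invented examples.
RULES = [
    ("open",    "/DCIM/",      "photo-library access"),
    ("open",    "contacts.db", "contact update"),
    ("connect", "calendar.",   "calendar sync"),
]

def lift(trace):
    """Map each low-level operation to the highest-level meaning we can find."""
    events = []
    for op, target in trace:
        for rule_op, pattern, meaning in RULES:
            if op == rule_op and pattern in target:
                events.append(meaning)
                break
        else:
            events.append(f"unrecognised {op}: {target}")  # still opaque to us
    return events

print(lift([("open", "/sdcard/DCIM/IMG_0001.jpg"),
            ("connect", "calendar.example.com:443")]))
# -> ['photo-library access', 'calendar sync']
```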

What if we saw our existing software not as the source of our limits, or as invincible black boxes, but as objects to be torn apart and interrogated -- as if we were archaeologists, or newly free people running through the presidential palace?

In these situations, we can actually build *our* abstraction tower from much lower down. I run an alternative user interface, an alternative file system, an alternative *social computing context*, and throw down the heavy, costly-to-construct forms from windows in towers high above it.

In many ways, this has always been the overambitious goal of any reverse-engineer, or of anyone re-writing an existing system.

But if we were to view this not as a doomed pursuit, but as a topic of active research and activism, using modern tools, how deep down could we dive? How large, complex, universal and interconnected are the things we could dredge from the current system?

I ask this because of its similarity to what modern applied commercial and academic computing is working on. They, too, are faced with understanding a large, ever-changing, overcomplex and non-compliant environment, and recognising patterns within it. And then turning those patterns into large-scale understanding.

Only in that case, they are applying it to better understand, and therefore control, us.

And in our case, we would be applying it to understand and control our own digital environment.

Is it an intractable problem? Well, it has been so far, which is why everyone gives up. But maybe it's tractable now, if we took that as the task ahead?

Unknown! But if we did that -- what would it let us do?

Well, imagine what's happening with RedoxOS, or even something as extreme as Interim or GNU Mes. If you start at the very base of the device abstraction *and* obtain complex objects at the very top of that stack, you might be able to construct a consistent, high-level OS that skips a lot of the construction work in-between.

@mala Some general thoughts:

- The notion that existing infotech stacks are excessively complicated isn't a hard sell. Don't linger on it long.

- Defining what specific _elements_ of that stack you're concerned with, somewhat more so. Are you tackling the whole shebang, or only parts?

- "Social computing" strikes me as a vague or poorly-known term, something of a shibboleth or blind reading, filled with meaning supplied by the reader, but not necessarily the same by all.

1/

@mala You might want to use a more concrete term, define what you mean, or link a standard definition.

- How your inspirations (Seth Schoen, Qubes, Interim/Reform) fit in here isn't clear. Spend more time on that (or find better examples?).

- Defining a goal, and sorting How to Get There From Here is a good practice. So I'm on board with that. Might specify the goal a bit more clearly.

Defining what you want to be able to do (or you want the computing environment to provide), for ...

2/

@dredmorbius Doc, thank you for your advice, but I think you're confused about what I'm doing here. I'm not explaining; I'm speaking my thoughts out loud to see if anyone is already on the same wavelength.

@mala And I'm telling you that yes, there are people on this wavelength (or at least similar ones, based on uncertainty of message).

The ideas are worth developing. Developing and expressing them more clearly would help.

The stack _is_ too complex and will likely collapse (I'm getting to that in my main thread).

But yes: good ideas, shared concerns, and probably useful.

@mala ... whom, and on what type(s) of platforms (inputs, UI/UX/performance specs, energy, bandwidth, latency, etc.)

Some years back I started thinking technology goes through a set of stages, from idea, to proof of concept, to early and mature development. At some point it hits a rococo phase -- highly embellished, highly complex.

Which creates a crisis.

There's usually either a total collapse or what I call a "recapitulation".

With the first, the tech just dies -- too complex ...

3/

@mala ... and often overtaken (in the Christensenian sense) by more capable and cheaper alternatives.

The recapitulation is in the spirit of Saint-Exupery: "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away."

We get systems which are overly complex, and they are replaced with a return to the simple life. MSIE scrapped for Chrome, proprietary Unix for Linux. Several rounds of this with Netscape & Mozilla's browsers.

Simplify.

4/

@mala A year or so back I traced the origins of the phrase "complexity is the enemy".

I'd previously determined it was actually "complexity is the enemy *of reliability*", from an Economist article in early 1958. It was commenting on a report on British industrialisation (and safety, as I recall), and its general findings. (I've got the study ... somewhere.)

But the notion and its seniority seem highly significant. Complexity is ultimately a trap.

Either we break out of it or it breaks us.

5/

@mala "turn the emulator's understanding of file and network operations into an understanding of photograph collections, contact updating, calendar sync"

This sounds an awful lot like document, relationship, task, and process awareness.

Which many have tackled.

(I'm... attempting a few of these myself.)

Do we really have a solid generic underlying model for these yet?

Also: proprietary software increasingly silos such data away from other apps (and the user) on systems, making interop hard.

@mala

if i'm getting this right,
sounds kinda like you're talking about taking the concept of an "API" for a program or library and applying it to all computing?

where one goal is to "syndicate" data inside old systems that won't cooperate, through various kinds of "bridge" tools that expose more free and flexible "APIs" to new systems
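
something like this shape, maybe (names entirely hypothetical, just to show the idea):

```python
# hypothetical "bridge" shape: new software talks to one stable, minimal
# interface; how each backend pries data out of an old system (official
# API, scraping, emulation) stays its own private problem.
from abc import ABC, abstractmethod

class PostSource(ABC):
    """the free, flexible 'API' that new systems get to see"""
    @abstractmethod
    def posts(self) -> list[dict]: ...

class ScrapingBridge(PostSource):
    """adapter over a silo that won't cooperate; swap in emulation or an
    official API later without touching anything downstream"""
    def __init__(self, fetch_html, parse):
        self.fetch_html, self.parse = fetch_html, parse

    def posts(self) -> list[dict]:
        return self.parse(self.fetch_html())

def syndicate(source: PostSource):
    # downstream code never knows (or cares) which bridge it's talking to
    for post in source.posts():
        print(post["author"], "::", post["text"])
```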

@mala

while reading this one of my main thoughts is,
"wow, sounds a bit like a very abstracted complaint that proprietary social media sites won't use a standard protocol like activitypub and it's hard to bridge them to it because the owners want control to be the only ones scraping the data"

@mala

but my second main thought is about the software i use

1. no more wordpress
2. sandstorm.io

@mala

i used to use wordpress but after a while i got So tired of mysql and its unmaintainability that i was like "i'm going to make a new post-authoring interface that puts a minimal amount of complexity between the text i write & it reaching the web"

so using tiddlywiki i "refactored" my wordpress blog into a simple static page generator making the same html, but now only "on" for just seconds to generate each page, unlike always-on php/mysql - & every letter of the html controlled /easily/
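
the skeleton of that "only on for seconds" idea looks something like this (not my actual tiddlywiki setup, just the general shape):

```python
# bare-bones static generation: run for a few seconds, turn text into
# html, exit. no database, no always-on server process -- and every
# byte of output comes from one template you control.
from html import escape
from pathlib import Path

TEMPLATE = "<!doctype html><title>{title}</title><article>{body}</article>"

def build(src_dir="posts", out_dir="site"):
    Path(out_dir).mkdir(exist_ok=True)
    for post in sorted(Path(src_dir).glob("*.txt")):
        page = TEMPLATE.format(title=escape(post.stem),
                               body=escape(post.read_text()))
        Path(out_dir, post.stem + ".html").write_text(page)

if __name__ == "__main__":
    build()
```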

@mala

sandstorm is basically a virtualisation program where web apps are packaged like "desktop applications" that can then "author" simple containers as "files" ("grains").

so - the container users use & can easily backup/restore only contains user data (easily openable as a zip), while application binaries/libraries live in an application package also installed on the same server by users (!). users can also obtain source for any app package and upload a modified version instead

@mala

i like sandstorm because of the similar "always off" design where web apps leap up from their package & serve to you Only when a grain opens

but what makes it relevant here is the feeling (if not quite fact) that you could take any app package, open it & mess with it all you like to do what you want it to do,
and then put the forked one back without any chance of messing up your data for the old version (or your OS).

something you could never do with e.g. mysql on a bare server.

@mala I'm not sure I fully grok the question but:

- If you're asking how to make scraping FB less fragile ...
- So long as FB control the presentation / format / protocol / API, you're hosed.
- Creating a set of external dependencies _larger than FB_ that imposes stability on it ... might help. Effectively, take its standardisation-defining power from it.

Alternatively: don't rely on FB as a datastream.

Or: Minimise the data exposure as much as possible, simplify all structures.

@dredmorbius @mala

I'd note that FB has to remain intelligible to its users.

And since we are more or less as smart as those users, FB can't really lose us without losing its users.

(They can still make it real hard to keep up, of course.)

@Jens @dredmorbius yes: another part of what is changing is the rate of progress in various sectors. How fundamental and how fast can Facebook change when it is past its period of rapid innovation?

@mala There's an analogue to Chesterton's Fence here.

Rather than wondering what the _reason_ was for the fence being built, for ... phenomena or behavioural patterns which seem to recur or persist across centuries and millennia, it's more Chesterton's Landscape: why *does* the ground become shaped this way?

Because fighting those forces will likely prove difficult. Though perhaps not impossible.

@mala Extending this:

If you _directly oppose_ the forces in question ... you'll almost certainly fail.

If you work _with_ them to try to achieve a preferred goal or state ... your odds improve markedly.

Much social engineering seems to fall into the first trap.

I think it's environmental engineering which largely employs the "work with natural forces to designated goals" approach:
en.wikipedia.org/wiki/Environm

@mala The dark side of VMs and containers is that they don't _reduce_ complexity but only _wrap it up in something_.

That complexity is still there, and you end up having to deal with it. Maybe somewhat more easily (it's all in one package), but ... say, having to update a whole mess of containers rather than OS packages.

You've also got system images and exploits of them to worry about.
