It’s mostly formed from Seth Schoen and Cory @doctorow ‘s idea of adversarial interoperability, Joanna @rootkovska’s Qubes, and @mntmn’s Interim and Reform projects
with also (in a way that I haven’t been able to entirely compose together) CHERI https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/ , and @cwebber ’s work on Mark-Miller-like capabilities.
Sorry, that sounds conspiratorial, and I didn’t mean it to. What I mean is that a successful tool or service builds on the browser, or Android/POSIX, say, or inside Twitter’s or Facebook’s ecosystem. Those environments are rich and complex, and provide coherent and comprehensible abstractions for doing almost everything that their creators and sponsors want to do (and would like you to do) in the world.
In earlier years, we were lobbying and picking and working on the abstraction towers we hoped would lead to a better world, but now it feels like those directions have been buried by the buildings built above them. You can think of this as co-option, but another way to see it is as a narrowing of options after a period of abstractions that tended toward general innovation: a post-Cambrian winnowing? Sorry for all the metaphors; I’m still trying to name and frame this.
I don't know the name for this act, but adversarial interop will do for now. You wire yourself up to the existing abstraction framework, and pull it in a new direction. But you can only do that to the degree that the abstraction fails to stop you, and to the degree that you can comprehend what the abstraction presents.
Virtual machines are a good present-day example of this approach: they show the possibilities, and the challenges. Essentially, a VM engulfs almost the entire software stack of an operating system. Something like Qubes runs them within its (fairly thin) hypervisory world, where it can wrap them in its own model.
Everything else is what the abstraction is *meant to be hiding*.
But we are at a moment where we *have* to dig deeper -- and perhaps all this computational power can assist us.
We have an abstraction in which *what we want* -- data, patterns, power -- is buried deep, deep below us -- and we are trying to build tottering, tall, highly experimental alternatives of our own.
I can run an existing app on my new computing environment, because I can -- given our current powers -- emulate it precisely. But how can my new computing environment *understand* what the app is doing, saying, processing?
It needs to be able to dig deeper: rather than seeing a bunch of syscalls, we need to be able to recognise certain computations -- to turn the emulator's understanding of file and network operations into an understanding of photograph collections, contact updating, calendar sync.
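To make that concrete, here's a toy sketch of what "recognising computations" might look like at its very crudest: classifying a trace of low-level file operations into a higher-level intent. Everything here -- the trace format, the event names, the heuristics -- is invented for illustration; no real emulator or tracer exposes exactly this.

```python
# Toy sketch: map a trace of (syscall, path) events, as a hypothetical
# tracer might emit them, onto a coarse high-level "intent".
# All names and heuristics here are invented for illustration.

def classify_trace(events):
    """events: list of (syscall, path) tuples from a hypothetical tracer."""
    paths = [path for _, path in events]
    if any(p.endswith((".jpg", ".png")) for p in paths):
        return "photo-collection update"
    if any(p.endswith(".vcf") for p in paths):
        return "contact sync"
    if any(p.endswith(".ics") for p in paths):
        return "calendar sync"
    return "unknown"

trace = [("open", "/sdcard/DCIM/IMG_0042.jpg"),
         ("write", "/data/app/thumbs/IMG_0042.png")]
print(classify_trace(trace))  # photo-collection update
```

A real system would obviously need to look at behaviour over time rather than filename suffixes, but the shape of the problem -- low-level events in, semantic events out -- is the same.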
In many ways, this has always been the overambitious goal of any reverse-engineer, or of anyone rewriting an existing system.
But, if we were to view this not as a doomed pursuit, but a topic of active research and activism, using modern tools, how deep down could we dive? How large, complex, universal and interconnected are the things we could dredge from the current system?
I ask this, because of its similarity to what modern applied commercial and academic computing is working on. They too are faced with understanding a large, ever-changing, overcomplex and non-compliant environment, and recognising patterns within it. And then turning those patterns into large-scale understanding.
Only in their case, they are applying it to better understand -- and therefore control -- us.
And in my case, we are applying it to understand and control our own digital environment.
Well, imagine what's happening with RedoxOS or even something as extreme as Interim or GNU Mes. If you start at the very base of the device abstraction *and* obtain complex objects at the very top of that stack, you might be able to construct a consistent, high-level OS that skips a lot of the construction work in between.
God, again I'm sorry, I should have written this as an essay. I hadn't realised how well-formed it was in my head, and how incoherent it must appear when I'm typing it as I go.
Also, disappointing if you thought I was offering an easier solution, rather than just more hard work!
Still, maybe it's provoked some wild thoughts in you too. Let me know! I'm here or danny@spesh.com.
Okay, the next thing I read after this stream-of-consciousness was this Hacker News thread: https://news.ycombinator.com/item?id=22149866
Which seems to be talking in the same rough ballpark.
@mala Dude, I'm happy for a modestly-coherent problem statement / goal specification these days.
I'm not sure if I'm maturing or my standards are slipping.
But again: interesting points, the parts I understood (I _think_ the majority) resonate strongly. Yes, writing as an essay would be useful, and my annoying suggestions were made with the thought that that essay might appear and shore up some weaknesses of the tootstream.
A use that tootstreams serve admirably, so no lost effort on your part.
@mala Specifically: tootstreams give you (as the author) insight, pretty much paragraph-by-paragraph, of what works, and what's problematic, within an idea.
A finished essay is a completed and coherent whole. But if you happen to go off the rails or lose the audience in paragraph 3, you never find out. Tootstorms let you gauge that, and are a useful drafting tool.
@mala i've dubbed your idea "danny's stack attack" and have some notes for a conversation but wanna let it percolate a bit first.
@crwbot me too! i couldn't sleep last night thinking about it, but I'm not yet sure it's possible or desirable.
@crwbot @mala I see some valuable ideas here and hope I can participate in the next phase of the discussion. In particular there's really some meat in the idea "observe the stack at multiple levels and pull out the connections between things happening at the machine code level and things happening at the higher level" whether that's in the social network running in the browser, or in the network stack, or whatever
@mala It's even better when I'm reading it all backwards!
@mala A *value* of containerisation / virtualisation approaches: by boxing up a thing, you can see more clearly what its inputs and outputs are, and from the rest of the system's perspective, *those are the parts that matter*.
Once you've achieved that modularisation, you can start performing surgery. I'd start by reducing inputs and outputs (interactions) to a minimum, then dive in and simplify the guts themselves. Though apply your favourite refactoring methodology as you will.
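The "box it up and watch the edges" step above can be sketched in a few lines: wrap a component and record every call, so the observed surface area becomes the contract you preserve while you rework the insides. The function being wrapped here is a made-up stand-in, not any real system.

```python
# Minimal sketch of "boxing up a thing to see its inputs and outputs":
# a decorator that logs every call to a component, so you learn its
# de-facto interface before you start surgery on the guts.

import functools

def observe(log):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            log.append((fn.__name__, args, kwargs, result))
            return result
        return wrapper
    return decorator

log = []

@observe(log)
def legacy_lookup(key):  # invented stand-in for some opaque legacy guts
    return {"a": 1, "b": 2}.get(key)

legacy_lookup("a")
legacy_lookup("z")
print(log)  # [('legacy_lookup', ('a',), {}, 1), ('legacy_lookup', ('z',), {}, None)]
```

From the rest of the system's perspective, that log *is* the component; anything not appearing in it is a candidate for simplification.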
@mala you might want to look at some of the current work in category theory and programming.
There's a course running at the moment (my links are at work), which after a first run with students will get written up further. But if the question is abstracting complex objects and relating them to other objects, apparently category theory is the answer.
@mala "turn the emulator's understanding of file and network operations into an understanding of photograph collections, contact updating, calendar sync"
This sounds an awful lot like document, relationship, task, and process awareness.
Which many have tackled.
(I'm... attempting a few of these myself.)
Do we really have a solid generic underlying model for these yet?
Also: proprietary software increasingly silos such data away from other apps (and the user) on systems, making interop hard.
if i'm getting this right,
sounds kinda like you're talking about taking the concept of an "API" for a program or library and applying it to all computing?
where one goal is to "syndicate" data inside old systems that won't cooperate through various kinds of "bridge" tools that expose more free and flexible "APIs" to new systems
while reading this one of my main thoughts is,
"wow, sounds a bit like a very abstracted complaint that proprietary social media sites won't use a standard protocol like activitypub and it's hard to bridge them to it because the owners want control to be the only ones scraping the data"
i used to use wordpress but after a while i got So tired of mysql and its unmaintainability that i was like "i'm going to make a new post-authoring interface that puts a minimal amount of complexity between the text i write & it reaching the web"
so using tiddlywiki i "refactored" my wordpress blog into a simple static page generator making the same html, but now only "on" for just seconds to generate each page, unlike always-on php/mysql - & every letter of the html controlled /easily/
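The "minimal complexity between the text and the web" idea has a very small shape. This isn't the actual TiddlyWiki setup described above, just a hypothetical few-line version of the same move: plain-text posts in, templated HTML out, nothing running in between.

```python
# The shape of a minimal static generator: read plain-text posts,
# wrap each in a template, write HTML files. Paths and template are
# invented; the real setup described above uses TiddlyWiki.

import pathlib

TEMPLATE = "<html><body><article>{body}</article></body></html>"

def build(src_dir, out_dir):
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    for post in pathlib.Path(src_dir).glob("*.txt"):
        html = TEMPLATE.format(body=post.read_text())
        (out / (post.stem + ".html")).write_text(html)
```

Every letter of the output is controlled by the template, and nothing is "on" except during the build.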
sandstorm is basically a virtualisation program where web apps are packaged like "desktop applications" that can then "author" simple containers as "files" ("grains").
so - the container users use & can easily backup/restore only contains user data (easily openable as a zip), while application binaries/libraries live in an application package also installed on the same server by users (!). users can also obtain source for any app package and upload a modified version instead
i like sandstorm because of the similar "always off" design where web apps leap up from their package & serve to you Only when a grain opens
but what makes it relevant here is the feeling (if not quite fact.) that you could take any app package, open it & mess with it all you like to do what you want it to do,
and then put the forked one back without any chance of messing up your data for the old version (or your OS.)
something you could never do with e.g. mysql on a bare server.
basically...... this thread has me imagining some kind of "better version of sandstorm" where the isolation of app data packages and individual user data bundles Perfectly serves the purpose of keeping apps simple and easy to modify + user data easy to read/backup
funnily enough my most used "sandstorm app" is just a containment box for tiddlywiki - a self-editing templating program that extends itself with any html data or code inside, & is effectively the most editable "sandstorm app"
@mala I'm not sure I fully grok the question but:
- If you're asking how to make scraping FB less fragile ...
- So long as FB control the presentation / format / protocol / API, you're hosed.
- Creating a set of external dependencies _larger than FB_ that imposes a stability on it ... might help. Effectively, take its standard-setting power from it.
Alternatively: don't rely on FB as a datastream.
Or: Minimise the data exposure as much as possible, simplify all structures.
@Jens @dredmorbius yes: another part of what is changing is the rate of progress in various sectors. How fundamental and how fast can Facebook change when it is past its period of rapid innovation?
@mala There's an analogue to Chesterton's Fence here.
Rather than wondering what the _reason_ for the fence being built was, for ... phenomena or behavioural patterns which seem to recur or persist across centuries and millennia, it's more Chesterton's Landscape: why *does* the ground come to be shaped this way?
Because fighting those forces will likely prove difficult. Though perhaps not impossible.
@mala Extending this:
If you _directly oppose_ the forces in question ... you'll almost certainly fail.
If you work _with_ them to try to achieve a preferred goal or state ... your odds improve markedly.
Much social engineering seems to fall into the first trap.
I think it's environmental engineering which largely employs the "work with natural forces to designated goals" approach:
https://en.wikipedia.org/wiki/Environmental_engineering
(This, you might notice, is also the challenge of alternative political systems. Socialism is tempting -- but you have to build it on capitalism. Anarchism looks promising, but you have to erase millennia of power relationships.)