Here's the ticket for more context
Currently I am doing this by keeping the relevant information in Indexed DB. This has drawbacks:
- data is the same for all browser windows, leading to potential confusion if there is more than one tab open using the ServiceWorker
- there are no events to hook to catch when Indexed DB data changes, so it's down to polling with setInterval(), which is fugly.
I *could* use Client API, specifically `postMessage` with `FetchEvent.clientId`, but clientId is not implemented on Safari (both Desktop and Mobile):
I *could* use the MessageChannel API, but it requires setting up a channel between the browser window and the SW, and there's no way to track which channel belongs to which browser window.
Plus, the SW is quickly reaped, its context destroyed, the channel killed. On a new fetch() the ServiceWorker restarts, but the channel no longer works, so a new channel would need to be set up.
But that can only happen from the browser window side, whereas only the ServiceWorker knows a fetch() has started.
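For reference, the clientId-based approach (were it universally supported) might look roughly like this. This is only a sketch: `makeStatusMessage()` and the message shape are my own invention, not Samizdat's actual protocol.

```javascript
// Sketch: a ServiceWorker notifying the exact window that triggered a fetch(),
// using FetchEvent.clientId -- the part Safari does not implement.
// makeStatusMessage() and the message shape are hypothetical.
function makeStatusMessage(url, source) {
  return { type: 'samizdat-status', url: url, source: source };
}

// Browser-only part; under Node this branch is simply skipped.
if (typeof self !== 'undefined' && 'clients' in self) {
  self.addEventListener('fetch', (event) => {
    event.respondWith(
      caches.match(event.request).then((cached) => {
        if (event.clientId) {
          // Resolve the clientId to the window that made the request.
          self.clients.get(event.clientId).then((client) => {
            if (client) {
              client.postMessage(
                makeStatusMessage(event.request.url, cached ? 'cache' : 'network'));
            }
          });
        }
        return cached || fetch(event.request);
      })
    );
  });
}
```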
I *still* could decide to use MessageChannel API, but would need to:
- keep track in the SW of which fetch is from which referrer (not even sure that's possible; probably available in Request.headers)
- keep track which channel is for which URL/referrer
- it would still get confusing if there are two tabs open with the same URL
- and I would still need to do polling in setInterval() on browser window side, kinda defeating the purpose of the channel.
So unless there is a way to hook an event in a browser window whenever a fetch() starts or when all fetch() events finish, the MessageChannel API doesn't seem to be any better than just using Indexed DB and polling it with setInterval() at regular intervals.
And so it doesn't seem it makes sense to use MessageChannel API at all, since either it's not effective, or clientId gets implemented in Safari soon and we should move to that.
But if I'm to re-implement SamizdatInfo on clientId now, I need a sane graceful degradation strategy for Safari.
But perhaps I am overthinking this? Perhaps the only event I need is onload. At that point I'll know already if the page is loaded from cache or not, and can display a relevant message to the user ("cache in use, try reloading"), perhaps after a sane timeout (letting the secondary fetch() in SW try to finish).
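On the window side, that onload idea might look something like this. The two-second timeout, the `#samizdat-info` element, and the transferSize heuristic are all placeholders of my own, not settled values:

```javascript
// Sketch of the onload-based fallback described above.
function cacheHint(servedFromCache) {
  return servedFromCache ? 'Cache in use, try reloading.' : '';
}

// Browser-only part; skipped under Node.
if (typeof window !== 'undefined' && typeof document !== 'undefined') {
  window.addEventListener('load', () => {
    setTimeout(() => {
      // Heuristic: a navigation with transferSize 0 was likely served locally.
      const servedFromCache = performance.getEntriesByType('navigation')
        .some((entry) => entry.transferSize === 0);
      const box = document.getElementById('samizdat-info');
      if (box) box.textContent = cacheHint(servedFromCache);
    }, 2000); // let the secondary fetch() in the SW try to finish first
  });
}
```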
Or perhaps MDN is wrong and #Safari supports Client API?
Proof-of-Concept of the new signalling system done without removing the old one.
Can anyone test on Safari please? Open a new tab, open the JS console, and navigate here:
Then, reload (so that the service worker kicks in); you should see "ServiceWorker: yes" in orange.
Make sure that you see this commit ID in the console and in both places at the page bottom: c223b08c
If all of this is true, check if in the console you have messages saying: "SamizdatInfo received!"
Done some serious work on #Samizdat. Fixed some bugs, almost finished implementing the new messaging system (based on client.postMessage() in the end), ripped the old Indexed DB-based system out completely. Introduced new bugs to fix next.
Merge request here:
Still work in progress though.
Merged! #Samizdat now uses message passing instead of Indexed DB for ServiceWorker to inform the window clients of things. I CAN HAZ nice things, liek:
- info that a resource was fetched from cache, but fetching it via Gun+IPFS is running in background;
- near-instant info on resources being fetched and status of that;
- info when all resources get initially fetched (in the future this is when "stuff fetched from cache, but newer versions available, reload please" message will be displayed).
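The window side of the new signalling could be sketched like this; the message type names below are illustrative guesses, not the actual protocol in the SW code:

```javascript
// Sketch: window-side handling of the kinds of messages listed above.
// Message type names are hypothetical.
function describeMessage(msg) {
  switch (msg && msg.type) {
    case 'cache-hit-refetching':
      return 'Served from cache; fetching via Gun+IPFS in the background.';
    case 'resource-status':
      return 'Fetching ' + msg.url + ': ' + msg.status;
    case 'all-fetched':
      return 'Stuff fetched from cache, but newer versions available, reload please.';
    default:
      return '';
  }
}

// Browser-only part; skipped under Node.
if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
  navigator.serviceWorker.addEventListener('message', (event) => {
    const text = describeMessage(event.data);
    if (text) console.log('SamizdatInfo:', text);
  });
}
```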
The Merge Request of Doom:
You might need to reload the service worker (refer to browser docs). Automagic reloading of the service worker code will come... one day, inshallah!
Also, probably doesn't work on Safari, because crapple refuses to implement things. Graceful degradation will come... one day, inshallah!
So I guess the roadmap to #Samizdat 1.0-beta would be something along the lines of:
- fix the issues (like caching plugin use is double-counted; when reloading soon after a load there is no indication how/where the resources were loaded from);
- implement the "stuff loaded from cache but newer content available, reload to see" message;
- cleanup the browser window / UI side of things so that it's easy to include on any site.
A *lot* of work, but hey, now at least we kinda have a roadmap!
Ok, back to playing with #Samizdat after some traveling.
- caching plugin not double-counted anymore;
- finally there is a proper project website at https://samizdat.is/
Need to fix Gun+IPFS for the new domain, today is a good time.
Main project home still https://git.occrp.org/libre/samizdat/ for the time being, but hoping to move it to a public GitLab instance soon.
That means that when you load the site in Firefox now, you should get the favicon. The favicon does not exist on the server, only in IPFS, precisely so we can test that everything works.
In Chrome/Chromium it should show up after a reload or two (take your time though, Chrome/Chromium caches things in weird ways).
Woo! That means our migration of Samizdat is complete. It's on its own domain, and on an open GitLab instance. 🎉 🎈
tl;dr: there needs to be a way to measure how many times Samizdat made it possible to circumvent censorship.
That's something that will have to run on reader's browser, and so there are serious privacy considerations.
But without being able to show it works, it will be hard to convince people (and site admins) it does.
In the meantime, working on cache invalidation for #Samizdat. One of the Two Hard Problems in IT (cache invalidation, naming things, and off-by-one errors)!
Anyway, trying to keep some context in cache using "x-samizdat-*" headers. But the Cache API doesn't seem to cache all headers, just some:
Of course, there is no mention of it anywhere in the docs (or I have not found it after hours of looking).
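One possible workaround (my own idea, not necessarily what Samizdat ends up doing): instead of hoping that "x-samizdat-*" headers survive cache.put(), stash the context as a JSON body under a synthetic sibling URL. The `samizdat-meta=1` query parameter is invented for this sketch:

```javascript
// Sketch: keeping per-resource context out of headers entirely.
function metaUrl(url) {
  return url + (url.includes('?') ? '&' : '?') + 'samizdat-meta=1';
}

// Browser-only part; skipped under Node.
if (typeof caches !== 'undefined') {
  const stashWithMeta = async (cacheName, request, response, meta) => {
    const cache = await caches.open(cacheName);
    await cache.put(request, response.clone());
    // The metadata entry is a regular cached Response,
    // so it survives intact -- headers or no headers.
    await cache.put(metaUrl(request.url),
      new Response(JSON.stringify(meta),
                   { headers: { 'content-type': 'application/json' } }));
  };
}
```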
I *think* I figured out how to do cache invalidation in #Samizdat in a more-or-less sane way, *assuming that* only a single live plugin is in use.
I might have an idea how to do it across plugins too.
Relevant branch here:
Boom! Cache (or, rather, locally stashed version) invalidation implemented in #samizdat https://0xacab.org/rysiek/samizdat/merge_requests/14
From now on, if you visit the site once and load the current ServiceWorker, stuff gets stashed; then when you happen to visit the site on a blocked connection, the Gun+IPFS version is *assumed* to be fresher.
If you visit again, and have the Gun+IPFS version stashed, IPFS addresses are compared to check freshness.
If a fresh version is available, a message is displayed to the reader.
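The freshness check described above boils down to comparing content addresses; since IPFS addresses are content-derived, equal addresses mean byte-identical content. A minimal sketch (names are illustrative, not Samizdat's actual API):

```javascript
// Sketch of the stash freshness check.
function isStashStale(stashedIpfsAddr, liveIpfsAddr) {
  // Nothing to compare against (e.g. blocked connection):
  // assume the stash is fine for now.
  if (!liveIpfsAddr) return false;
  // A different content address means the content itself changed.
  return stashedIpfsAddr !== liveIpfsAddr;
}
```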
What's the difference between a "cached" and "stashed" resource in #Samizdat, you ask? Excellent question!
There can be multiple Samizdat plugins that implement the basic idea of keeping a version of a resource locally. One plugin currently implementing this is called "cache" and uses the Cache API:
So, to avoid confusion, whenever I'm talking in general about keeping versions locally, I will call it "stashing".
This will be made clear here: https://0xacab.org/rysiek/samizdat/blob/master/ARCHITECTURE.md
Worked on the documentation for #Samizdat a bit. Also, started working on implementing the standalone interface. MR: https://0xacab.org/rysiek/samizdat/merge_requests/15
The idea is to have the basic interface defined in samizdat.js so that all an admin needs to do is include that file. Currently the interface is tightly tied to index.html.
And we now have a standalone user UI in #Samizdat:
Check it out here:
Or here, to see it on a page that does not use the regular Samizdat CSS:
The UI only shows up if there are resources that seem to be unavailable via HTTPS (on samizdat.is that's the case with the favicon).
The only thing that needs to be included by website admins is a single JS file (samizdat.js).
Next step: creating a standalone admin UI.
Like measuring usage:
It *seems* like it's complicated, until it becomes clear that 3rd party tracking is not going to be affected by most website blocking scenarios. So the only thing that needs to be handled is when a website is using log analytics or their own tracker.
And the relevant merge request:
Did some code cleanup, and the samizdat-cli now can get a user's pubkey (will be needed later), and *almost* register a new Gun user.
More fun soon!
Working on implementing some basic user management in #Samizdat's samizdat-cli, as a necessary foundation for more sane deployment procedure. Relevant ticket and merge request:
Almost works, but for *some* reason users created using it are unusable. Specifically, it seems impossible to auth() as them. Moar debugging tomorrow. *sigh*
I have no clue what's wrong with my #Samizdat CLI code. When I create a user using samizdat-cli, it's impossible to auth() as that user (neither using the CLI, nor in a browser window):
But if I create a user using the same functions in a browser window, all works fine. I can then auth() as that user both in the browser window *and* via the CLI.
Relevant (fugly!) code here:
I've reported one bug already:
More to come.
Oh, did I write a test harness just for that? Yes. Yes I did:
(GitHub because Gun is hosted there; personally I prefer unofficial GitLab instances, obviously)
I have a few things I can focus on in #Samizdat once I report all the NodeJS-related bugs (and before they get fixed).
I am very tempted to finally write the IPFS/IPNS plugin (completely side-stepping Gun), or a dat:// plugin. But perhaps I should do some boring stuff from the Beta milestone?
So, a poll! What should I focus on in Samizdat?
And so, the People have spoken. I'll bump implementing dat:// up on the ToDo list for #Samizdat. However, for Beta I really need to have documentation and Admin UI I guess. Eh.
Yesterday I noticed #Samizdat is not working. Spent most of the day debugging. Turns out four things happened at the same time:
- major code changes on my side
- some code changes on Gun side
- Samizdat stopped using the test Gun instance run by @OCCRP
- the public Gun peer started deleting stuff
Ooof! This was pretty damn annoying to deal with, but all is well again. As an added bonus:
- there is a Gun peer running at samizdat.is
- got an idea how to simplify deployment significantly.
I am also more and more considering moving #Samizdat away from Gun. Gun is currently used to map from a well-known address (a Gun user pubkey) to the content-addressed resources in IPFS. This can be done using #IPNS.
So far my experience with Gun has been bumpy. It seems a bit easier to use than IPNS, but with all the trouble I've had with it... not sure it's worth it.
I'll probably develop gun+ipfs plugin a tiny bit more, and then move focus to IPNS/IPFS. Added benefit: fewer dependencies.
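For context, the mapping Gun currently provides could be sketched like this. `Gun()` and `gun.user(pubkey).get(...)` are real Gun API; the peer URL, the `samizdat` key, and the gateway fallback are assumptions of mine:

```javascript
// Sketch: resolve a well-known Gun user pubkey to content-addressed IPFS data.
function ipfsGatewayUrl(cid, gateway) {
  return (gateway || 'https://ipfs.io') + '/ipfs/' + cid;
}

// Gun/browser-only part; skipped under Node without Gun loaded.
if (typeof Gun !== 'undefined') {
  const gun = Gun(['https://samizdat.is/gun']); // hypothetical peer URL
  const pubkey = '...'; // the site's well-known Gun user pubkey
  gun.user(pubkey).get('samizdat').get('/index.html').once((ipfsAddr) => {
    // The mutable name resolved to an immutable content address.
    if (ipfsAddr) fetch(ipfsGatewayUrl(ipfsAddr));
  });
}
```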
Had a good chat with Sam from dat:// project about #Samizdat. Got a bunch of good input and great links (including the lunet thing).
Good news: dat:// protocol v2 has a bunch of improvements and is almost ready for being released.
Bad news: dat:// v2 is incompatible with v1, has no pure JS implementation, and it's unlikely it will get one soon.
Ugly news: this means it most likely doesn't make much sense to implement dat:// in #Samizdat at this moment.
Ok, so it might in fact make sense to implement dat:// in #Samizdat, since the API is not expected to change between v1 and v2.
Many thanks to @syntax for his contribution to #Samizdat:
This is a much-needed nudge for me to get back to hacking on this project.
I have taken a way-too-long sabbatical from working on #Samizdat, but finally getting back into it.
First step (making sure pipelines work again) was easier than expected: my #GunJS superpeer was down. All green:
And I need to improve how the pipeline verifies that stuff is available in IPFS; pretty sure the 504s there happen because we get throttled by the gateways:
That oversight has just been fixed:
Work on the #Samizdat overview document is proceeding nicely and I am starting to be pretty happy with it:
On friend's advice I shortened the Philosophy section substantially, and expanded on it in a separate document:
As always, comments, suggestions, and patches welcome!
#Samizdat overview now has flowcharts:
I have no idea if they're useful. Only one way to find out!
Meanwhile, I missed the fact that it's been over a year since #Samizdat initial commit:
Okay, people, we have the #Samizdat Overview:
I think it's reasonably complete, and I *hope* it is reasonably informative. Thank you to everyone who provided their input and feedback, couldn't have done it without you.
This is a complex project trying to solve a complicated issue. Boiling it down to a single readable document is Hard. Here's hoping it's good enough, but suggestions on how to improve are welcome, as always.
Still creating more issues than closing, but hey, there's progress!
The big thing done tonight: there is now a config.js file which enables configuring Samizdat without editing any actual code, and it can already be used to configure which plugins are used, and in which order, to handle requests.
Pretty big step towards an easier deployment procedure!
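To give an idea, such a config.js might contain something like this; the plugin names and option names below are guesses based on this thread, not the actual file format:

```javascript
// Sketch of a plugin-ordering config (hypothetical names).
var SamizdatConfig = {
  // Plugins are tried in this order when handling a request:
  plugins: ['fetch', 'cache', 'gun-ipfs'],
  // Which plugin stashes content locally:
  stashPlugin: 'cache'
};
```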
@rysiek the flowcharts let me understand how Samizdat works in the big picture without ever reading any of the text (be it docs or source code)
@rysiek I love the concept, and would love to see this take off. Basically, this is archive.org's Wayback Machine on steroids and caffeine, except focused on current content, right?
In fact, WebArchive *could* be used as one such endpoint (I am researching how to implement a WebArchive plugin).
The difference is the focus. While WebArchive's focus is historical preservation, Samizdat's focus is availability. These intersect somewhat, but are in the end different.
@dredmorbius that paren is closed a little further down, but you're right, that's confusing. Fixing.
@rysiek Cases resulting in no response would be useful.
Explanation of what "alternative transports" are & how determined / accessed (if not already present & I've missed them).
@dredmorbius "Cases resulting in no response"? What do you mean?
"Alternative transport" is any "live" plugin:
The nomenclature needs to be cleared up and made less confusing. Working on it. Good feedback!
@rysiek "Cases resulting in no response" would be instances in which a user requesting a Samizdat-enabled page would still not see the requested content. What's the minimum accessibility / connectivity required for success?
@dredmorbius ah! Got it. Well that kind of depends on the transport plugins used. I need to describe what kinds of plugins are possible, and what their limitations are.
Load can be distributed to IPFS, or just random Internet locations.
"random Internet locations." could be more clearly explained. How selected, how identified, etc. Possibly in a later section.
@dredmorbius meanwhile, changed to "Load can be distributed to IPFS, or just any Internet endpoint able to host the content."
@rysiek So, suggested clarifications (I'm not sure which are accurate):
or any HTTP(S) site able to host the content
or one or more HTTP(S) sites also hosting the content
or one or more previously established HTTP(S) sites also hosting the content
or one or more previously established HTTPS sites which will dynamically and automatically rehost the content
or one or more previously established sites which will dynamically and automatically rehost the content using
or any of a mesh network of sites configured to dynamically host the content
What site(s) can / will host content?
What protocol(s) are used? HTTP(S)? Any of a set of TCP data transports? UDP? TLS-capable only? Tor? IPFS is apparently available.
Are these preconfigured and prepopulated? Preconfigured and dynamically populated? Dynamically configured?
Is the user at all aware of alternate sourcing? Are there redirects, or is this all ServiceWorker majick? (I have NFC about Serviceworker stuff, even after scanning docs).
Existing language is ... opaque.
@dredmorbius yup, all of these are going to become clear (inshallah) once I am done. Great list, helps keep track of what needs fixing! Thank you!
@dredmorbius trudat. However, while I feel I am allowed to use the term "samizdat" for a project, I would feel rather uncomfortable appropriating "inshallah" for one. Not my place, really.
@dredmorbius let me know if this helps:
any modern browser...
A list of known compatible browsers would be handy. I'm presuming Chrome, Safari, Firefox, and derived browsers, possibly Opera & Konqueror. Probably not Dillo. Lynx is Right Out.
@dredmorbius that's a bit more work, because I have to go back into the code and check all the APIs I am using, and then make a list of supported browsers based on that.
But yes, needs to happen (and help very welcome!).