Oh, so it was having issues and brought a lot of websites down with it? That's not great.

But thankfully it's not that hard to roll out your own "DDoS protection" (or caching; just call it caching) setup. Here, you can even use our configs: git.occrp.org/libre/fasada

Yes, we use them in production, serving sometimes hundreds of thousands of views per day.

Patches welcome.
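For the general idea, a micro-caching nginx front ends up looking roughly like this. This is only a rough sketch, not the actual fasada configs; hostnames, paths, and timings are made up:

# Cache storage: small, short-lived entries are enough to absorb a spike.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge:50m
                 max_size=1g inactive=10m use_temp_path=off;

server {
    listen 443 ssl;
    server_name example.org;                      # the site being protected (hypothetical)

    ssl_certificate     /etc/nginx/tls/example.org.crt;
    ssl_certificate_key /etc/nginx/tls/example.org.key;

    location / {
        proxy_pass https://origin.example.org;    # the real, weaker origin server
        proxy_cache edge;
        proxy_cache_valid 200 301 302 1m;         # cache "good" responses for a minute
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503;
        proxy_cache_lock on;                      # collapse a request stampede into one origin fetch
        add_header X-Cache-Status $upstream_cache_status;
    }
}

The X-Cache-Status header makes it easy to check from the outside whether a given response was a HIT or a MISS.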


One thing I'd love to see is hackers, NGOs, and small media orgs pooling their resources and doing edge caching for each other on their servers.

Most of the time, most servers do not use all of their bandwidth, so this could help handle peaks in traffic.

Two things would be needed on the tech side: 1. websites becoming easier to cache (do you really need this bit of dynamic content there?), and 2. some way of doing TLS termination without giving out private keys.

Obviously, 1. is a decision that needs to be made by individual website owners.
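Concretely, "easy to cache" mostly boils down to the origin marking everything that is not personalised as cacheable. An nginx-flavoured sketch, with illustrative paths and values:

# Pages that look the same for everyone: let edges cache them briefly.
location /articles/ {
    add_header Cache-Control "public, max-age=60, stale-while-revalidate=600";
}

# Anything personalised stays uncacheable.
location /account/ {
    add_header Cache-Control "private, no-store";
}

Even a max-age of one minute absorbs most of a traffic spike, because the edge then hits the origin roughly once a minute per URL instead of once per visitor.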

And 2. has already been figured out by... CloudFlare (oh, the irony!):
cloudflare.com/learning/ssl/ke
blog.cloudflare.com/keyless-ss

We need a clean-room, FLOSS implementation of Keyless TLS. Nudge. Wink.
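The core trick is small once you strip away the production details: the edge never stores the private key, it asks a key server run by the site owner to perform just the one private-key operation per handshake. A toy sketch of the key-server side in Python, with made-up framing and no authentication; a real one needs a mutually-authenticated channel, replay protection, and rate limiting:

# keyserver.py -- toy sketch of the key-server half of a keyless-TLS-style setup.
# The edge does the TLS handshake, hashes the handshake parameters, and sends
# only that digest here; it never sees the private key itself.
import socket

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, utils

with open("origin-key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

srv = socket.create_server(("0.0.0.0", 8443))
while True:
    conn, _addr = srv.accept()
    with conn:
        digest = conn.recv(64)                   # SHA-256 digest computed by the edge
        if len(digest) != 32:
            continue                             # not a digest we expect; drop it
        signature = private_key.sign(
            digest,
            padding.PKCS1v15(),                  # the padding classic TLS RSA signatures use
            utils.Prehashed(hashes.SHA256()),    # sign the digest the edge already computed
        )
        conn.sendall(signature)

On the edge side, the matching change is plugging a call like this into the TLS stack's private-key hook instead of loading a key file from disk.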

So far this has been a good discussion. Cool!

Let me add one more thing: if I had the time (which I might), I would invest it in building something based on Gun, IPFS, and BitTorrent, in (ugh...) JavaScript, so that a website gets loaded once, the JS crap gets cached, and then, if HTTPS happens to be blocked, new content gets pulled from IPFS or BitTorrent (using Gun as a resolver pointing to the newest torrent and IPFS address).

@rysiek You can do SNI-based reverse proxying with haproxy, which doesn't require the front-end proxy to have the private keys, instead leaving most of the TLS negotiation to the backend service -- I'm not sure if this is quite the same as Cloudflare's "keyless TLS", but I think it's similar.

@molly oh, that sounds about right! Good to know, thank you. Looks like I need to do some research. :)

@molly @rysiek SNI-based pass-through is quite different from the setup in the above article.

The above article describes the "front" being able to see the entire request and serve any content it desires.

Meanwhile, an SNI proxy can't serve any content that doesn't originate from the upstream server.

In fact, it cannot even see the real HTTP headers, so the SNI value may differ from the real Host header. This can allow some fun exploits, since you may be fooled by the client.
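For reference, the SNI pass-through @molly mentions is basically haproxy in TCP mode routing on the SNI field -- something like this (hostnames and addresses made up):

# TLS stays end-to-end between the client and the origin; the proxy only
# ever sees the cleartext SNI in the ClientHello and routes on that.
frontend tls_in
    mode tcp
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend ngo_one if { req_ssl_sni -i ngo-one.example.org }
    use_backend ngo_two if { req_ssl_sni -i ngo-two.example.org }

backend ngo_one
    mode tcp
    server origin1 192.0.2.10:443

backend ngo_two
    mode tcp
    server origin2 192.0.2.20:443

Which also shows the trade-off: such a front can route and absorb connections, but it cannot cache or inspect responses, precisely because it never sees the decrypted traffic -- that is the difference from the keyless setup above.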

@rysiek caching is the painful topic: nobody wants to cache anything, as each cached request means one precious unique view lost from the sight of crappy 3rd-party analytics JS :)

@kravietz dear Sir, do you have a moment to talk about our Lord and Saviour, Log Analytics?
matomo.org/faq/log-analytics-t

Also, the JS bug pings the analytics server with the view data. So, caching doesn't really make web analytics harder. :)

@rysiek curl -v https://facebook.com/ 2>&1 | grep -i cache-control

cache-control: private, no-cache, no-store, must-revalidate

Not everyone uses Matomo these days :(

@rysiek Unless the NGO is massive or tech-specialized, they're not going to have the resources to do this or even think about it. Some of them can barely get a website up and running.

@trebach sure, but for those NGOs that have the resources, now they don't have to reinvent the wheel.

@rysiek remember when Google tried to implement point 2 and everyone in the infosec community tried to explain what a terrible idea it was?
