
Oh, CloudFlare was having issues and brought a lot of websites down with it? That's not great.

But thankfully it's not that hard to roll out your own "DDoS protection" (or caching; just call it caching) setup. Here, you can even use our configs: git.occrp.org/libre/fasada

Yes, we use them in production, sometimes serving hundreds of thousands of views per day.

Patches welcome.

One thing I'd love to see is hackers, NGOs, and small media orgs pooling their resources and doing edge caching for each other on their servers.

Most of the time, most servers don't use all of their bandwidth, so pooling it could help handle peaks in traffic.

Two things would be needed on the tech side: 1. websites becoming easier to cache (do you really need this bit of dynamic content there?), and 2. some way of doing TLS termination without giving out private keys.

Obviously, 1. is a decision that needs to be made by individual website owners.
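
To give a rough idea of what "easier to cache" means in practice (illustrative values only, tune them to your site): an anonymous page view that comes back with headers like

Cache-Control: public, max-age=300, stale-while-revalidate=600
Vary: Accept-Encoding

can be safely cached by any edge in front of it, while logged-in or otherwise personalized responses keep Cache-Control: private, no-store.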

And 2. has already been figured out by... CloudFlare (oh, the irony!):
cloudflare.com/learning/ssl/ke
blog.cloudflare.com/keyless-ss

We need a clean-room, FLOSS implementation of Keyless TLS. Nudge. Wink.

So far this has been a good discussion. Cool!

Let me add one more thing: if I had the time (which I might), I would invest it in building something based on Gun, IPFS, and BitTorrent, in (ugh...) JavaScript, so that a website gets loaded once, the JS crap gets cached, and then new content gets pulled from IPFS or BitTorrent (using Gun as a resolver to point to the newest torrent and IPFS address) if HTTPS happens to be blocked.

@rysiek You can do SNI-based reverse proxying with haproxy, which doesn't require the front-end proxy to have the private keys; instead, most of the TLS negotiation is left to the backend service -- I'm not sure if this is quite the same as Cloudflare's "keyless TLS", but I think it's similar.
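
A rough sketch of what that looks like in haproxy config (hostnames and addresses are made up, double-check against the haproxy docs before relying on this):

frontend tls_in
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend site_a if { req_ssl_sni -i example.org }
    use_backend site_b if { req_ssl_sni -i example.net }
    default_backend site_a

backend site_a
    mode tcp
    server origin_a 203.0.113.10:443

backend site_b
    mode tcp
    server origin_b 203.0.113.20:443

The proxy only peeks at the SNI in the ClientHello and forwards the raw TCP stream, so the TLS session (and the private key) stays between the client and the origin.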

@molly oh, that sounds about right! Good to know, thank you. Looks like I need to do some research. :)

@molly @rysiek SNI-based pass-through is quite different from the above article.

The above article describes the "front" being able to see the entire request and serve any content it desires.

Meanwhile an SNI proxy can't serve any content not originating from the upstream server.

In fact it cannot even see the real HTTP headers, so the SNI may differ from the real Host header. This can allow some fun exploits, since you may be fooled by the client.

@rysiek caching is the painful topic - nobody wants to cache anything, as each cached request means one precious unique view lost from the sight of crappy 3rd-party analytics JS :)

@kravietz dear Sir, do you have a moment to talk about our Lord and Saviour, Log Analytics?
matomo.org/faq/log-analytics-t

Also, the JS bug pings the analytics server with the view data. So, caching doesn't really make web analytics harder. :)
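
For the curious, the Matomo log-analytics importer is a script shipped with Matomo itself; the invocation is roughly along these lines (made-up URL and paths, see the FAQ linked above for the exact flags):

python misc/log-analytics/import_logs.py --url=https://matomo.example.org --idsite=1 /var/log/nginx/access.log

That way the stats come from the web server logs your edge cache writes anyway, no JS bug needed.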

@rysiek curl -sv https://facebook.com/ 2>&1 | grep cache-control

cache-control: private, no-cache, no-store, must-revalidate

Not everyone uses Matomo these days :(

@rysiek Unless the NGO is massive or tech-specialized, they're not going to have the resources to do this, or even think about it. Some of them can barely get a website up and running.

@trebach sure, but for those NGOs that have the resources, now they don't have to reinvent the wheel.

@rysiek remember when Google tried to implement point 2 and everyone in the infosec community tried to explain what a terrible idea it was?

@ben @rysiek yes! If you sign your site's content, google can now impersonate you! Just don't ever update your article or have XSS issues on your site, because there is no way to revoke it properly!

@aurelia @ben huh?

"If you sign your site's content, google can now impersonate you!"

Explain?

@rysiek @ben Chrome will show your URL if you signed the request, even if it's served from AMP

@aurelia @ben yeah, not a fan of centralized services. Therefore not a fan of AMP.

A fan of static site generators, on the other hand.

@rysiek I remember you mentioned giving a talk about this at a conference. Is there a recording of it?

@njoseph sadly, no... But I hope to give some more talks about this.

@rysiek Depending on the infrastructure, Varnish can be very helpful, but I can totally see why it's not a good solution in your use case. Just pointing things out ;)

@rysiek

"But we did look at varnish, and we found it does not support SSL and has no intention to. We would have to run nginx in front of varnish that would then be in front of our upstream nginx servers. This is madness."

Well, no. You have to use a dedicated SSL terminator like Hitch-TLS or Pound, not NGINX.

@selea thanks, we'll have a look! Didn't know about it (and somehow didn't find it when doing research for this project 4y ago)

@rysiek

Ah, 4 years ago basically the only TLS terminator in existence was Pound (which works OK).

I heard about Hitch-TLS 3 years ago while it was still new.

Hitch-TLS speaks the PROXY protocol that Varnish 5+ also supports, which is kinda nice :)
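
In case it helps anyone, the Hitch-in-front-of-Varnish chain looks roughly like this (a sketch with made-up paths and ports, not a drop-in config):

# hitch.conf
frontend = "[*]:443"
backend  = "[127.0.0.1]:8443"
pem-file = "/etc/hitch/example.org.pem"
write-proxy-v2 = on

# and varnishd listening for the PROXY protocol on that port:
varnishd -a :8443,PROXY -b 127.0.0.1:8080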

@selea @rysiek "hundreds of thousands of views a day" is still only tens of views per second. Apache on a ten year old laptop can serve 10x that without struggling. Not saying there *aren't* people who need cloudflare, but the bar is probably higher than J Random Webdev thinks it might be

@telent @selea we did some tests with Joomla (yeah, let's not get into why Joomla) and on a pretty beefy server we were not able to get above 18-20 requests per second - regardless of how many php-fpm processes we were running, or how optimized the database set-up was. Available CPU power was the limiting factor.

Putting this nginx config in front of it immediately made bandwidth the limiting factor, and allowed the same server to handle more than an order of magnitude more requests.
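
For a flavour of the approach, here is a simplified sketch (not the actual fasada config, which lives in the repo linked above; names, paths, and timings are made up):

proxy_cache_path /var/cache/nginx/edge keys_zone=edge:50m max_size=2g inactive=60m use_temp_path=off;

server {
    listen 443 ssl;
    server_name example.org;
    ssl_certificate     /etc/ssl/example.org/fullchain.pem;
    ssl_certificate_key /etc/ssl/example.org/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the Joomla / php-fpm backend
        proxy_cache edge;
        proxy_cache_valid 200 301 302 1m;  # even a 1-minute TTL flattens traffic spikes
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503;
        proxy_cache_lock on;               # only one request per URL hits the origin
        add_header X-Cache-Status $upstream_cache_status;
    }
}

Even with such short cache lifetimes, the origin only sees roughly one request per URL per minute, which is why CPU stops being the bottleneck.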

@rysiek

Just to be clear, this was a couple of years ago, right?
What did the setup look like?

@telent

@selea @telent a pretty modern 12-core server (I'd rather not go into too many specifics here).

But I would love to see some independent benchmarks. Happy to help with that, too.

We've been able to weather pretty huge traffic with these settings.

@selea @telent in the end, it doesn't matter that much if it's nginx or hitch or whatever, as long as it gets the job done.

This particular set-up works very well for us, and lets us avoid using CloudFlare and such, so we wanted to share. If somebody publishes a working set-up for Hitch or whatnot, achieving similar goals, we'll definitely look at it, too.

@rysiek

"and lets us avoid using CloudFlare and such" - 100% approve!

@telent

@aeveltstra @selea @telent write.lain.haus/thufie/dont-tr (this section: "The fundamental issue with CloudFlare and similar services" and further).

tl;dr CloudFlare already handles about 10%(!) of global web traffic. That means they became the gatekeeper. And gatekeepers are a dangerous thing.

Basically agreeing with... CloudFlare's CEO here:
blog.cloudflare.com/why-we-ter

It's dangerous to have one entity have so much power.

@jlelse

How often do you get DDoSed?
Many ISPs offer DDoS protection anyway.

@telent @rysiek

@selea @jlelse @telent we did get a few DDoSes, but again, not eager to go into specifics.

From my experience (and that was the title of my IFF2019 talk), "it's not a DDoS, your site just got a bit popular and you're running a dynamic CMS".

If I had a dollar for every time somebody came to me with "ERMAHGERD WE'RE BEING DDoSED!!1!" only to find that we're talking about ~3-4 req/s on a WordPress site, I would have a large positive number of dollars. ;)

@selea @jlelse @telent if your site is getting serious DDoS, by *all means* go to a DDoS protection vendor -- although I am going to say your hosting provider's DDoS protection coupled with some decent caching will probably solve it anyway.

In such a case I would suggest Deflect.ca instead of CloudFlare though.

@rysiek @selea I think I was actually agreeing with you, sorry if it didn't sound like it! Caching your dynamic content sensibly at source is a viable strategy for 99.x% of sites, even running on "commodity hw". Back in 2009 I had nginx -> varnish -> a Common Lisp webapp, no "static" content at all, and cache lifetimes entirely controlled by the cache-control headers I was sending from the origin server.

@selea well, we kind of relied on Varnish's own documentation and blog post on TLS, and why it won't implement it. :)

Either way, nginx as a TLS terminator and a caching edge proxy works for us *very* well.

@rysiek if you think that nginx with caching is all there is to DDoS protection... it's really not. (Fun fact: Mastodon's default nginx config does that.) Having L3 equipment that can handle the raw bandwidth that some skids can conjure up is the hard part.

Funnily enough the cloudflare article on keyless TLS you linked talks about that.

"Whether it was their load balancer, their firewall, their router, or their switch, under attack, something had become saturated and was unable to keep up with the traffic. It didn't matter how clever the software on the device was, in every case they were dead at Layer 3."

@aurelia if you think I was claiming that nginx with some caching is a bullet-proof solution to DDoS... I really did not.

I perhaps should have made it more clear, but that's why there were quote marks around "DDoS protection". Again, if your site is actually being actively DDoSed, yes, you might need proper DDoS protection. But for 99% of cases, it's really just your CMS-based site getting a bit popular. That's where some decent caching will help.

@aurelia as others have already noted in this thread, most hosting providers already offer some DDoS protection. What they can't help with is organic traffic that brings the website down, because, well, CMSes are what they are. That's where our config helps us greatly.
