The lesson of this story (of a Catholic blog using commercially resold Grindr data to out a gay priest) is *either* that anonymised data can always be de-anonymised (pretty much the intuition of lots of experts I know), *or*, less generally, that you can't expect an org that benefits from selling other people's data to calibrate how much it should spend on anonymising.
It's not just companies, mind you. The census faces similar challenges -- they work hard to inject noise into their public data so you can't draw conclusions about individuals. But the more noise you introduce, the less useful the data is for, e.g., measuring racial injustice. https://www.census.gov/programs-surveys/decennial-census/decade/2020/planning-management/process/disclosure-avoidance.html
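The census's approach is differential privacy, and the core trade-off is easy to see in miniature. Below is a minimal sketch of the standard Laplace mechanism for a single counting query; it is not the census's actual (much more elaborate) TopDown algorithm, and the function names and epsilon values are illustrative:

```python
import math
import random

def laplace_sample(scale: float) -> float:
    # Inverse-CDF sampling of Laplace(0, scale):
    # U ~ Uniform(-0.5, 0.5), X = -scale * sgn(U) * ln(1 - 2|U|)
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count: int, epsilon: float) -> float:
    # A counting query has sensitivity 1 (adding or removing one
    # person changes the count by at most 1), so Laplace noise with
    # scale 1/epsilon gives epsilon-differential privacy for this
    # one query.
    return true_count + laplace_sample(1.0 / epsilon)

# Smaller epsilon = stronger privacy = noisier (less useful) answer.
print(noisy_count(100, epsilon=2.0))   # close to 100
print(noisy_count(100, epsilon=0.1))   # could be way off
```

The knob epsilon is exactly the tension described above: turn it down and individuals are better hidden, but the published counts drift further from the truth, which is a real cost when those counts are what you use to measure, say, racial injustice.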
With the qualities that digital information has, you really need to treat it like toxic waste. When we say "Information wants to be free", it's not a rallying cry, it's a description of an (often dangerous) quality of digital data. It's like saying "U²³⁵ wants to be fissile."