I will be starting the upgrade for all Mastodon servers on Masto.host to v3.2.1
There should be less than 30 seconds of downtime during the process.
You can see the changelog here: https://github.com/tootsuite/mastodon/releases/tag/v3.2.1
If you have any issues or questions, please let me know.
@mastohost no, 3.2.0 to 3.2.1. Seems maybe related to certificates/auth keys, might be specific to the relay software (federating between mastodon instances works fine, or I'd not see your post :-)). As ever, when it works it works and when it doesn't you're on your own; hopefully I'll have time to dig in deeper at the weekend...
@mastohost I changed the image tag from v3.2.0 to v3.2.1 and redeployed the Docker container 😁. Nothing else indicated in the readmes/changelog :(.
@tim so, docker-compose down and docker-compose up -d? you are using tootsuite from hub.docker, right?
@mastohost Kubernetes, so the equivalent using helm. And yes, tootsuite/mastodon.
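For reference, on the plain docker-compose setup the upgrade amounts to something like the following sketch (service names assume the stock tootsuite compose file; adjust to yours):

```sh
# Pin the new release in docker-compose.yml (image: tootsuite/mastodon:v3.2.1),
# then pull the new image for the app services.
docker-compose pull

# Run any pending database migrations before restarting.
# v3.2.1 is a patch release, so this is likely a no-op, but it is safe to run.
docker-compose run --rm web rails db:migrate

# Recreate the containers on the new image with minimal downtime.
docker-compose up -d
```

On Kubernetes, `helm upgrade` with the bumped image tag plays the same role: the rollout replaces the pods on the new image.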
Rolling back doesn't fix the problem (it just changes it to http/202 errors instead of 401), so it persistently breaks something. Makes me suspect that it's borked wherever it stores the key/certificate it uses to validate the relay's signatures; whether that's Redis or Postgres I don't know, but that's where I'll start digging at the weekend...
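If it is the cached actor key, that would live in Postgres: Mastodon caches remote actors (relays included) as rows in the `accounts` table, with the fetched public key in the `public_key` column. A quick way to eyeball it, assuming the stock docker-compose Postgres service (hypothetical relay domain; database/user names vary with your `.env.production`):

```sh
# Inspect the public key Mastodon has cached for the relay's actor.
# Replace relay.example.com with your relay's domain; the db service,
# user, and database name here assume a stock docker-compose setup.
docker-compose exec db psql -U postgres mastodon_production \
  -c "SELECT username, domain, updated_at, left(public_key, 60) AS key_head
        FROM accounts WHERE domain = 'relay.example.com';"
```

If the cached key differs from what the relay currently serves at its actor URL, that mismatch would explain signature validation failing in both directions.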
I do appreciate your taking the time, by the way! Thanks.
@mastohost: After doing a little more digging, I strongly suspect this change is to blame - https://github.com/tootsuite/mastodon/pull/14556
- which somewhat breezily breaks compatibility with other suitably 'old' Fediverse software. Which doesn't seem a super smart thing to do in a point release, at least not the way I understand SemVer...
More interesting to me now is why downgrading back to 3.2.0 didn't seem to fix things 😐
If you have some custom setting for "/.well-known/webfinger", that could also affect you. Not sure how you are handling that in Kubernetes.
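An easy way to check what a deployment is actually serving on that path (hypothetical domain and account below):

```sh
# Ask the instance to resolve an account via WebFinger; a healthy
# Mastodon answers 200 with a JSON Resource Descriptor (JRD).
curl -s -H "Accept: application/jrd+json" \
  "https://mastodon.example.com/.well-known/webfinger?resource=acct:admin@mastodon.example.com"
```

If an ingress rule or rewrite is intercepting that path, the response (or its absence) should show it immediately.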
But yep, it could also be https://github.com/tootsuite/mastodon/pull/14556, although it says it is compatible with Mastodon.
If you can find the culprit that would be great information to share on GitHub with the dev team.