I have restored the ability to upload files on mastodon.social by using flexify.io as a proxy in front of the old Wasabi bucket and a new DigitalOcean Spaces bucket.

Retrieving files will still be wonky, unfortunately, as flexify.io checks both storage providers at the same time, and Wasabi is obviously timing out.

It's also not cheap to do this.

I am not sure I made the right call here. Yes, uploads work again, but the media traffic passing through flexify.io* costs $1/hour, and I'm starting to doubt whether DigitalOcean's rate limit of 150 PUT requests per second will suffice for us...

* The reason for using flexify.io is to pass DELETEs to both Wasabi and DigitalOcean
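For context, the fan-out behaviour described in the footnote - every DELETE getting mirrored to both Wasabi and DigitalOcean so neither store keeps an orphaned copy - can be sketched in a few lines. This is a minimal illustration assuming boto3-style S3 clients; the bucket names and the `delete_everywhere` helper are hypothetical, not flexify.io's actual implementation.

```python
# Sketch of the "fan-out delete" idea: issue the same DELETE against
# every backing bucket, collecting per-backend failures instead of
# aborting on the first one. Names here are illustrative only.

def delete_everywhere(key, backends):
    """Delete `key` from each (client, bucket) pair; return failures."""
    failures = []
    for client, bucket in backends:
        try:
            # boto3 S3 clients expose delete_object(Bucket=..., Key=...)
            client.delete_object(Bucket=bucket, Key=key)
        except Exception as exc:
            failures.append((bucket, exc))
    return failures
```

With real boto3 clients you would build `backends` as e.g. `[(wasabi_client, "wasabi-bucket"), (spaces_client, "do-bucket")]` and call this wherever the app currently deletes a media file.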


I should pull the plug on that now, shouldn't I? And return us to broken Wasabi. $1/hour for a completely indeterminate time is just too much...

@gargron

Was it even working?

I'm on Octodon and I can't see that image you posted in this thread. I can't see it now, and I couldn't see it when you posted it.

@apLundell It was working in the sense of letting us upload stuff, but fetching was still semi-broken

@Gargron
I suggest you wait for Wasabi to recover. This is also a question about centralisation. Moving to a more expensive, centralised place in a hurry won't solve much. That will put additional burdens on you soon. A long term solution would be limiting the number of users on mastodon.social and encouraging more instances.

@Gargron Does Mastodon require an external storage provider to host images - by design? Or is it just a choice a site's sysadmins would make? How feasible would it be for a large site like mastodon.social to host media on its own servers, or somewhere it controls? Is that something only a small-scale site can do? I'm sure you must have considered this in the past, but I'm asking since this issue has come up now - and because I'm curious to know :)

@lohang Local storage is possible, but not in all setups. Because mastodon.social load-balances across different physical machines to handle the load, there is no shared file system, so dedicated file storage is needed. That can be (and once was) self-hosted Minio, but dynamically increasing its capacity is impossible.
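For anyone following along: Mastodon points at external object storage purely through environment variables, so the same config works for Minio, Wasabi, or Spaces - anything S3-compatible. A sketch with placeholder values (the endpoint and credentials here are illustrative, not mastodon.social's):

```sh
# .env.production - placeholder values, any S3-compatible store works
S3_ENABLED=true
S3_BUCKET=mastodon-media
S3_REGION=us-east-1
S3_ENDPOINT=https://minio.example.com   # e.g. a self-hosted Minio
S3_HOSTNAME=minio.example.com           # hostname used in media URLs
AWS_ACCESS_KEY_ID=changeme
AWS_SECRET_ACCESS_KEY=changeme
```

Swapping providers is then a matter of changing the endpoint and credentials (and migrating the existing files).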

@Gargron @lohang
You did some experiments with IPFS a while ago. ImageMagick was a blocker, if I remember correctly... But could it still be a long-term solution?

@zippoh @lohang When you add something to IPFS you are responsible for hosting it until someone else decides to "pin" it too, so if you have 3.5 TB of data you need a file system with 3.5 TB of space, at least as I understand it. IPFS helps with distribution, but not with storage itself.

@Gargron This finally looks like a reason to use #DataShards, at least for media, if we can figure out a good discovery/routing mechanism.
/cc @lain

@schmittlauch @Gargron I agree; we are looking into adding DataShards for exactly that in the future.

But first we have to get the 1.1 release out of the door :)

@lain @Gargron @schmittlauch

This won't stop me from pressing on with Canonical S-Expressions for JS. Now. It is just the first step, but it is a step.

@Gargron Pull it. We'll survive until Wasabi gets sorted out. It's like when the power goes out and everyone plays charades to pass the time.

@franciecashman @Gargron Agreed. If this lasts beyond a certain point, perhaps alternatives should be looked at, but as I understand it, this is down to Wasabi.

@Gargron why not try exoscale.ch ? I use them and it works fine, but at the same time I don't have an instance that big :)

@Gargron
That's an insane rate - I say pull the plug and wait for Wasabi. I just recently started using this Mastodon instance for real and would hate for it to go down due to costs.

I'll try to pledge on Patreon to help.
