I'm sure everyone has noticed the huge delay in the streaming API. I'm basically working non-stop to mitigate it. 4000 toots per hour is simply more activity than the current processing queue can keep up with
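A rough back-of-envelope (Ruby) of why a queue falls behind at this rate. The fan-out factor and worker throughput below are hypothetical illustrative figures, not measurements from this instance:

```ruby
# Why 4000 toots/hour can overwhelm a streaming queue:
# every toot fans out into one delivery job per connected subscriber.

arrivals_per_sec = 4000.0 / 3600               # ~1.11 toots/s
fan_out          = 100                          # assumed avg subscribers per toot (hypothetical)
jobs_per_sec     = arrivals_per_sec * fan_out   # ~111 delivery jobs/s
drain_per_sec    = 60.0                         # assumed worker throughput (hypothetical)

# If jobs arrive faster than workers drain them, the backlog grows
# without bound and streaming latency climbs steadily.
backlog_growth = jobs_per_sec - drain_per_sec
puts format('%.0f jobs/s enqueued, backlog grows by %.0f jobs/s',
            jobs_per_sec, backlog_growth)
```

With those assumed numbers the backlog grows by roughly 50 jobs every second, which is why latency keeps increasing rather than stabilizing.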
I'm sorry if I wasn't able to answer everyone's messages! My notifications move too fast
@Gargron keep up the good work
@Gargron Good luck, you're doing impressive work. Thanks for everything.
@Gargron Is there a similar limit on the global number of instances? In other words, if we get thousands of instances, each with few users on it, wouldn't that lead to congestion across the whole network?
@Gargron Thanks for your work on Mastodon.
It's a great internet project!
The name format for tagging someone on a remote instance is tooooo ugly lol. @Gargron
@Gargron thank you for all your work 😇
@Gargron good luck with the job!
@Gargron *salutes* ur doin us a proud <3
@Gargron Thanks for the work!!!
@Gargron I heard that there's a dev chat to join. Is that on Freenode, or somewhere else?
@Gargron the latency is fine!! anyone who complains needs to simmer down! hahaha
you're doing an awesome job and you are the best :D
@Gargron Good luck!
@Gargron we are aware of the load on your project, you're the best <3
@Gargron Keep up the great work. The effort is really showing in the quality of things around here.
@Gargron you know tooting is another word for farting right?
@Gargron You're doing great. Keep up the good work 🙏
@Gargron do you want experienced devops help? I'm very comfy with AWS infrastructure.
@Gargron Thanks for all the hard work. Make sure to take breaks and relax. :)
@gargron good luck! Scaling is hard — but a great problem to have 😆
@Gargron Just curious, what hardware is this instance running on? 4000 toots/h does not seem like *a lot* to me…
OTOH, Ruby is slow :( I think going with a compiled language could easily multiply that by 10.
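To see why 4000 toots/hour is tighter than it sounds, here is the per-toot time budget (Ruby). This is only illustrative arithmetic; the real pipeline processes toots in parallel across multiple workers:

```ruby
# A single sequential worker's time budget per toot at 4000 toots/hour.
toots_per_hour   = 4000
seconds_per_toot = 3600.0 / toots_per_hour   # 0.9 s per toot

# Each toot's processing (DB writes, federation delivery, streaming
# fan-out to every subscriber) must fit in this window, or a single
# worker falls permanently behind.
puts format('%.2f s budget per toot', seconds_per_toot)
```

The headline rate is only ~1.1 toots/second, but the fan-out work attached to each toot is what consumes the budget.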