#cursed


I don't believe in curses. But if they are real, then I’m certain I am cursed.

Ever since I was little, nothing has come easy to me. Everything has been a struggle. A fight. A battle. Whether it is with my mental health, my physical health, or even socially. Everything in life has been a long, arduous struggle to do simple tasks that others seemingly float through.

Do you think a 7-year-old would steal a $100 bill? Or would they even realize its value?

On our 100th episode of the podcast, a hard-working single mom finds a $100 bill and can't believe her luck. When the bill returns to her the day after she spends it, she really can't believe her luck! But when she realizes the bill is causing harm, she starts to question how lucky she really is.

Listen now: https://bit.ly/44r82tC

#AlmostPlausible #100DollarBill #Lucky #Cursed #Podcast #Comedy #Improv #Screenwriting #Movie

Vue lets you define generically-typed components, but TypeScript is a compile-time language (its types are fully erased before runtime), so how does this work?

You instruct the Vue compiler to treat the component file as generic by giving its <script setup> block a generic="T" attribute, but that's not the cursed part. The cursed part is how you explicitly pass a type.

Since you can't include a type directly in HTML, the way the docs actually say to pass the type is via an HTML comment
💀

<!-- @vue-generic {import('@/api').Actor} -->
can't make this shit up
#javascript #cursed
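
For the full shape of it, a minimal sketch (the component name, its prop, and the "actors" data are made up for illustration; the Actor import is the one from the snippet above):

<script setup lang="ts" generic="T">
// the generic="T" attribute on <script setup> is what makes the component generic
defineProps<{ items: T[] }>()
</script>

<template>
  <ul>
    <li v-for="(item, i) in items" :key="i">{{ item }}</li>
  </ul>
</template>

And in the parent template, the explicit type argument goes in that comment directive:

<!-- @vue-generic {import('@/api').Actor} -->
<ItemList :items="actors" />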

Cursed homelab update:

I now have a (slightly broken) PiHole!

Default settings only, as its settings are in an ephemeral volume. I want to put them in CephFS but can't, _yet_, as I need to check if Rook fucks with existing files in a CephFS it can add volumes to. I only have two CephFSes, and I don't want to add anything permanent to #2 until I've verified that giving Rook access to #1 won't accidentally delete things.

I am being paranoid here and procrastinating; this is literally 10 minutes' work, and I've been putting it off for months as RBD volumes have been ideal for everything I've done so far.
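
The check itself should be mundane: mount CephFS #1 somewhere out of the way, snapshot a file listing, let Rook provision a throwaway volume against it, and diff. Something like this, where the fs name, mount point and paths are all placeholders:

# record what's in CephFS #1 before Rook is allowed near it
mount -t ceph :/ /mnt/cephfs-check -o name=admin,fs=cephfs1,ro
find /mnt/cephfs-check | sort > /tmp/before.txt
# ...create a throwaway Rook StorageClass/PVC against cephfs1, mount it from a pod...
find /mnt/cephfs-check | sort > /tmp/after.txt
diff /tmp/before.txt /tmp/after.txt   # anything beyond Rook's new subvolume dirs would be a red flag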

Cursed homelab update:

So server #2 is humming along nicely; however, continuing to use the disk that nearly scuttled my repartition/recovery effort was not a good idea.

Me: creates a Ceph OSD on a known faulty hard disk
Faulty hard disk: has read errors causing a set of inconsistent PGs
Me: Surprised Pikachu face

Thankfully this was just read errors; no actual data has been lost.

So for a brief, glorious moment, I had just under 50TB of raw storage and now it's just under 49TB.

And for me the big question now is: do I do complicated partition trickery to work around the bad spots (it's a consecutive run of sectors), or do I junk the disk and live with 1TB less raw storage?
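
Before deciding, it's worth mapping how big the bad region actually is. Something along these lines (the device name is a placeholder, and a full read-only badblocks pass on a disk this size will take a long time):

# how many sectors are bad, and where?
smartctl -a /dev/sdX | egrep -i 'pending|reallocated|uncorrect'
dmesg | grep 'sdX' | grep -i error                 # the kernel logs the failing sectors on read errors
badblocks -sv -b 4096 /dev/sdX > /tmp/sdX-bad.txt  # read-only scan, prints every bad block it finds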

In other news, I now understand a little bit more about Ceph and how it recovers from errors: PGs don't get "fixed" until they are next (deep) scrubbed, which means that if your PGs get stuck undersized, degraded or inconsistent (or any other state), it could be that they're not getting scrubbed.

So taking the broken OSD on the bad HDD offline immediately caused all but 2 of the inconsistent PGs to get fixed. The remaining 2 just wouldn't move, so I smashed out a trivial script to deep scrub all PGs, and last night, a couple of days after this all went down, one got fixed. Now hopefully the other will get sorted out soon.

ceph pg ls | awk '{print $1}' | grep '^[[:digit:]]' | xargs -l1 ceph pg deep-scrub
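
A narrower version of the same idea, only poking the PGs that are actually flagged (assuming a Ceph release where ceph pg ls accepts a state filter, which recent ones do):

# deep-scrub only the PGs currently marked inconsistent
ceph pg ls inconsistent | awk '{print $1}' | grep '^[[:digit:]]' | xargs -l1 ceph pg deep-scrub
# and to see which objects inside a given PG are affected:
rados list-inconsistent-obj <pgid> --format=json-pretty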

So read errors -> scrub errors -> inconsistent PGs.

Then: inconsistent PGs -> successful scrub -> recovery

What this also means is that while I stopped the latest phase of the Big Copy to (hopefully) protect my data, I think I can start it again with some level of confidence.