Hisham is a user on mastodon.social.
Hisham @hisham_hm

"Blessed by the Algorithm" — I can almost hear an apocalyptic Muse song in my head, as regular people slowly start to venerate the computer gods
mastodon.social/media/UVFGozJy


@hisham_hm Techno-paganism has been a thing for a while.

@hisham_hm Blessed be the algorithm. Praise their infinite wisdom and knowledge. The algorithm is love. The algorithm is life.

@hisham_hm
"Hail the Omnissiah! The God in the Machine, the Source of All Knowledge."

@hisham_hm I'm thinking "Slave to the Algorithm" instead

@hisham_hm It's because of the "you push a button, two pancakes come out! simple! what's here to be understood?" approach, I think.
Another factor is that nobody knows how machine-learning algorithms work, not even their programmers.

@Wolf480pl @hisham_hm nah, it's a cleverly cultivated myth that nobody knows how ML works. It's all spelled out in relatively simple maths (non-linear activation functions and the like). It's just that it's not the linear-factors model that people are used to consuming.

@phiofx @hisham_hm yeah, but does anyone inspect all the coefficients in every one of the 1000s of neurons? And does anyone know why the network's answer for question X was "yes"?

@Wolf480pl @hisham_hm
you can literally learn everything and anything about it if you want to. for example you can find a simpler model that is "close enough". most are just logistic regression on steroids :-)

but people will not volunteer to do things that will either reveal the banality of their all powerful "Algorithm", or the data they are collecting, or the fragility of the models etc. etc.

this is not a game between algorithms and people, this is a game between people and people

@phiofx @hisham_hm so you're saying that it is possible to train a model on some trash data from the internet, then study the model, and learn what the model "thinks" and what it learned, to the point that you can predict the model's response on some input, and pick inputs that will trick the model into doing something?

@Wolf480pl @hisham_hm

this is not such a big deal (or I misunderstand something). when people develop models they usually develop a whole family, including simpler ones. the predictions of different models are generally correlated. the idea is that added complexity helps improve performance (sometimes it's just marginal and not robust)

so you can use a simpler model as your guide to anticipate the more complex one. not 100% of the time, for sure, it depends on the domain
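The "simpler model as your guide" idea above is the surrogate-model technique from ML interpretability. A minimal sketch of it, with everything hypothetical: a tiny hand-wired two-layer network stands in for the complex black box, and a plain logistic regression (trained only on the black box's own answers) plays the role of the simpler guide. We then measure how often the two agree:

```python
import math
import random

random.seed(0)

# Hypothetical "complex" black box: a tiny fixed two-layer tanh network.
# In practice this would be the big model whose behaviour we want to anticipate.
def black_box(x1, x2):
    h1 = math.tanh(2.0 * x1 - 1.0 * x2)
    h2 = math.tanh(-0.5 * x1 + 1.5 * x2)
    return 1 if (1.2 * h1 + 0.8 * h2) > 0 else 0

# Query the black box on random inputs to build training data for the surrogate.
data = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(2000)]
labels = [black_box(x1, x2) for x1, x2 in data]

# Surrogate: logistic regression ("on steroids" not included),
# trained by batch gradient descent on the black box's answers.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.1
for _ in range(200):
    g1 = g2 = gb = 0.0
    for (x1, x2), y in zip(data, labels):
        p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        err = p - y
        g1 += err * x1
        g2 += err * x2
        gb += err
    n = len(data)
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

def surrogate(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# Agreement rate: how often the simple guide predicts the black box's answer.
test = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(1000)]
agree = sum(surrogate(x1, x2) == black_box(x1, x2) for x1, x2 in test)
print(f"surrogate agrees with black box on {agree / len(test):.0%} of inputs")
```

As the thread says, the agreement is high but not 100%: the linear surrogate tracks the black box wherever its decision boundary is roughly linear, and disagrees exactly where the extra complexity actually matters.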

HOLY RITES OF THE OMNISSIAH (light body horror, WH40k)