"Blessed by the Algorithm" — I can almost hear an apocalyptic Muse song in my head, as regular people slowly start to venerate the computer gods
https://mastodon.social/media/UVFGozJyKSNF7TiBv90
@hisham_hm Blessed be the algorithm. Praise their infinite wisdom and knowledge. The algorithm is love. The algorithm is life.
@hisham_hm
"Hail the Omnissiah! The God in the Machine, the Source of All Knowledge."
@hisham_hm Under His Eye.
@hisham_hm I'm thinking "Slave to the Algorithm" instead
@hisham_hm
The Algorithm is but an apostle of RNGesus.
@hisham_hm transhumanists_irl
@hisham_hm It's because of the "you push a button, two pancakes come out! simple! what's there to be understood?" approach, I think.
Another factor is that nobody knows how machine-learning algorithms work, not even their programmers.
@Wolf480pl @hisham_hm nah, it's a cleverly cultivated myth that nobody knows how ML works. It's all spelled out in relatively simple maths (non-linear activation functions and the like). It's just that it's not the linear-factors model that people are used to consuming.
@phiofx @hisham_hm yeah, but does anyone inspect all the coefficients in every one of the 1000s of neurons? And does anyone know why the network's answer for question X was "yes"?
@Wolf480pl @hisham_hm
you can literally learn everything and anything about it if you want to. for example you can find a simpler model that is "close enough". most are just logistic regression on steroids :-)
but people will not volunteer to do things that will either reveal the banality of their all powerful "Algorithm", or the data they are collecting, or the fragility of the models etc. etc.
this is not a game between algorithms and people, this is a game between people and people
@phiofx @hisham_hm so you're saying that it is possible to train a model on some trash data from the internet, then study the model, and learn what the model "thinks" and what it learned, to the point that you can predict the model's response on some input, and pick inputs that will trick the model into doing something?
this is not such a big deal (or I misunderstand something). when people develop models they usually develop a whole family, including simpler ones. The predictions of different models are generally correlated. the idea is that added complexity helps improve performance (sometimes it's just marginal and not robust)
so you can use a simpler model as your guide to anticipate the more complex one. not 100% of the time, for sure, it depends on the domain
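a hypothetical sketch of that "simpler model as your guide" idea: probe an opaque nonlinear model with random inputs, fit a plain logistic-regression surrogate to its answers, and measure how often the two agree. the toy model, the data, and every number here are invented for illustration; nothing is taken from a real deployed system.

```python
import math
import random

random.seed(0)

def complex_model(x1, x2):
    """Stand-in for an opaque nonlinear model (invented for illustration):
    mostly linear, plus a cross term a linear surrogate cannot capture."""
    return 1 if x1 + x2 + 0.5 * x1 * x2 > 0 else 0

# probe the "black box" with random inputs and record its answers
data = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(2000)]
labels = [complex_model(x1, x2) for x1, x2 in data]

# fit a logistic-regression surrogate by batch gradient descent
w1 = w2 = b = 0.0
lr = 0.5
for _ in range(200):
    g1 = g2 = gb = 0.0
    for (x1, x2), y in zip(data, labels):
        p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        g1 += (p - y) * x1
        g2 += (p - y) * x2
        gb += (p - y)
    n = len(data)
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

def surrogate(x1, x2):
    # the simple model's prediction: just a linear decision boundary
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# how often the simple surrogate predicts the same class as the complex model
agreement = sum(surrogate(x1, x2) == y
                for (x1, x2), y in zip(data, labels)) / len(data)
print(f"surrogate agrees with the complex model on {agreement:.0%} of probed inputs")
```

the inputs where the surrogate and the original *disagree* are exactly the regions where the extra complexity matters, which is also where you'd go hunting for inputs that trick the model.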
@hisham_hm that scared me too
HOLY RITES OF THE OMNISSIAH (light body horror, WH40k)
@hisham_hm Techno-paganism has been a thing for a while.