So there's this thing going around that NNs can be fooled into misclassifying images by introducing some deterministic "noise" (e.g. https://tinyurl.com/kno54zh ). This is a known phenomenon, and there are tools for dealing with it (or exploiting it via adversarial training). However, the best technique is to just take a breath, don your powergloves, and grow a mohawk. We don't have to fight the 80s cyberpunk future; we can embrace it.
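To see why the "noise" is deterministic rather than random, here's a minimal sketch of the idea on a toy linear classifier (hypothetical random weights, not a real network): nudge each input component in the sign direction of the loss gradient, and the prediction flips even though the perturbation is small. This is the gist of gradient-sign attacks like FGSM, boiled down to numpy.

```python
import numpy as np

# Toy 2-class linear "network" with hypothetical random weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # one weight row per class
x = rng.normal(size=4)        # our 4-pixel "image"

def predict(v):
    return int(np.argmax(W @ v))

pred = predict(x)
other = 1 - pred

# Gradient of (logit_other - logit_pred) w.r.t. x is just the
# difference of weight rows; its sign gives the attack direction.
grad = W[other] - W[pred]

# Pick the smallest eps that pushes x across the decision boundary.
margin = (W[pred] - W[other]) @ x          # confidence in the clean input
eps = 1.01 * margin / np.abs(grad).sum()

x_adv = x + eps * np.sign(grad)            # the deterministic "noise"

print("clean prediction:", predict(x))
print("adversarial prediction:", predict(x_adv))
print("perturbation size:", eps)
```

Real attacks do the same thing through a deep network's backprop instead of a weight-row subtraction, but the punchline is identical: a tiny, purpose-built perturbation, not random static.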