"Using LEGO robots to study posthumanism at the University of Alberta. Part 1: Parable of the ant
Music: Philip Glass - Metamorphosis One"
https://www.youtube.com/watch?v=MtNZYIeklds&list=PLADA9C21F511A7BF6 https://mastodon.social/media/35csIf0o5kofaCohlmk
"Walter's tortoises represent the first real world demonstration of artificial life."
https://www.youtube.com/watch?v=lLULRlmXkKo
Just some kid mesmerized by "B.E.A.M. robots" trying to recreate living things with a lego set.
I latched onto the concept of environment as feedback loop. I independently recreated the concepts behind Walter's tortoise and Braitenberg's vehicles.
While schools cargo-culted the ideas behind line-following robots and fetishized turtles, I was studying neurons.
Walter was a neurologist.
"Whats happening here? Is it something electronic? Something Mechanic? Are we modifying the world? Maybe it's a combination of all of them..."
https://www.youtube.com/watch?v=7ncDPoa_n-8
"proctoring a silicon species into sentience, but with full control over the specs. Not plant. Not animal. Something else."
"I see them as the components of a programmable ecology. They'll replant forests, hunt cockroaches, monitor poachers, cut your grass, clean your pool, polish your floors - all invisibly, dependably, for years."
Computational Metapsychology
https://media.ccc.de/v/32c3-7483-computational_meta-psychology
"This is why I talk to myself using social media.
It lets past me communicate with future me."
https://mastodon.social/@ultimape/14359586
"The idea of the Turing test is that you build a computer that can convince its audience that it is intelligent. But when we look for intelligence, we actually look for something else: we are looking for systems that are performing a Turing test on us."
https://www.youtube.com/watch?v=3Bns4HkAniA
*Looks around, Sees how impressed everyone is...*
Actually, I even doubt my own intelligence and perform the Turing test on myself.
I seem to pass, but results are inconclusive: I may or may not be a p-zombie with false memories.
Combine Braitenberg's vehicles with B.E.A.M. robots and try to recreate a minimal set of swarm-oriented primitives in a cellular-like manifold?
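Roughly the smallest version of that idea, as a pure-software sketch (no hardware assumed, and every constant here is made up): two light sensors cross-wired to two motors gives you Braitenberg's "aggression" vehicle that steers toward a stimulus; straight wiring gives you "fear".

import math

def braitenberg_step(x, y, heading, light_x, light_y, crossed=True, dt=0.1):
    # One update of a two-sensor, two-motor vehicle (Braitenberg's 2a/2b).
    # Sensors sit ~30 degrees left/right of the heading; each reports an
    # intensity that falls off with squared distance to the light.
    # crossed=True wires left sensor -> right motor, so the vehicle turns
    # toward the light ("aggression"); crossed=False gives "fear".
    def sensor(offset):
        sx, sy = x + math.cos(heading + offset), y + math.sin(heading + offset)
        return 1.0 / (1.0 + (light_x - sx) ** 2 + (light_y - sy) ** 2)

    left, right = sensor(+0.5), sensor(-0.5)
    left_motor, right_motor = (right, left) if crossed else (left, right)

    speed = (left_motor + right_motor) / 2.0
    heading += 4.0 * (right_motor - left_motor) * dt  # steering gain is arbitrary
    x += math.cos(heading) * speed * dt
    y += math.sin(heading) * speed * dt
    return x, y, heading

# crude demo: with crossed wiring the vehicle curves toward a light at the origin
state = (2.0, 1.0, 0.0)
for _ in range(2000):
    state = braitenberg_step(*state, light_x=0.0, light_y=0.0)
print(state)  # final pose after wandering toward the light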
I have yet to see an object detection algorithm that works on video that isn't simply a single-image detection pass. Even the latest on TED https://www.youtube.com/watch?v=Cgxsv1riJhI is just a single-image detection system running in real time.
Is anyone working on Object Permanence models of machine cognition?
https://en.wikipedia.org/wiki/Object_permanence
I've also not seen anything using optical flow that even approaches how human vision systems are known to use it. This thing with bees is close:
https://www.youtube.com/watch?v=olZY7yBD0w4
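For reference, the bare-bones starting point I mean (a sketch using OpenCV's Farnebäck dense flow; the video path and the crude left/right "steering" heuristic are placeholders standing in for what bees actually do):

import cv2
import numpy as np

# Dense optical flow over a video file; the path is a placeholder.
cap = cv2.VideoCapture("some_clip.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense flow: an (H, W, 2) array of per-pixel (dx, dy) motion.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Bees seem to balance flow magnitude on the left vs. right of the
    # visual field; a crude analogue of that centering behavior:
    mag = np.linalg.norm(flow, axis=2)
    mid = mag.shape[1] // 2
    left, right = mag[:, :mid].mean(), mag[:, mid:].mean()
    print("drift toward the side with less flow:", "left" if left < right else "right")
    prev_gray = gray
cap.release()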
An object permanence model would solve issues like "based on past detections, I am 90% confident I am inside, therefore if I see a door I shouldn't tag the image as 'outside'"... or "hey, that person is wearing a stop sign on his t-shirt, this isn't an actual stop sign".
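A toy sketch of what I mean by carrying belief across frames, assuming you already have per-frame classifier likelihoods (the numbers and the indoor/outdoor framing are all made up): a recursive Bayes update, so one weird door-frame can't flip the scene label.

def update_belief(prior_indoor, frame):
    # Recursive Bayes update of P(indoor) from one frame's classifier output.
    # frame holds P(observed features | indoor) and P(features | outdoor),
    # e.g. straight out of a per-frame scene classifier.
    p_in = prior_indoor * frame["indoor"]
    p_out = (1.0 - prior_indoor) * frame["outdoor"]
    return p_in / (p_in + p_out)

belief = 0.5  # no idea where we are yet
# ten frames of indoor-looking evidence, then one frame where a door
# (an "outdoor-ish" cue) dominates the classifier's output
frames = [{"indoor": 0.8, "outdoor": 0.2}] * 10 + [{"indoor": 0.3, "outdoor": 0.7}]
for f in frames:
    belief = update_belief(belief, f)
print(round(belief, 6))  # stays ~1.0: one door-frame doesn't flip inside to outside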
The stuff they are doing at Skydio has the potential to move in that direction: https://www.youtube.com/watch?v=P3E4pl2Weos
Esp. if they combine their tech with https://www.youtube.com/watch?v=H7Ym3DMSGms
Can someone pay me to build swarming robots? I have been thinking about swarming robots since I was 11.
Collaborative ratcheting Bayesian sensory networks. https://www.cambridge.org/core/journals/psychological-medicine/article/what-is-mood-a-computational-perspective/5FA0177A965FF3EE01D4AA5C09C0A2A5
One of my most detailed memories of my childhood was a game I'd play where I'd imagine what it was like to 'see' as a hive mind like creature - multiple views and bodies with different thoughts, yet all collaborating. Thunder Cats and Power Rangers, but I was the zoid.
I would spend hours just imagining things - worlds, people, animals - then building them out of Legos and Lincoln Logs. Entertaining myself, alone.
All of this was before I had access to a TV beyond PBS.
https://www.youtube.com/watch?v=OFzXaFbxDcM
I realized that most of our machine learning models for emotion detection use still frames. You can freeze a frame of someone talking and the computer will detect that they are happy, angry, or some other glitched emotion, because of the way the face moves for certain words.
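Even a dumb temporal smoother gets you past the worst of the single-frame glitches. A sketch, assuming some still-frame classifier already spits out one label per frame (the labels and window size are invented):

from collections import deque, Counter

def smooth_emotions(per_frame_labels, window=30):
    # Majority vote over a sliding window of per-frame emotion labels.
    # A transient "angry"-looking mouth shape mid-word gets outvoted by its
    # neighbors instead of being reported as a mood swing.
    recent = deque(maxlen=window)
    smoothed = []
    for label in per_frame_labels:
        recent.append(label)
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed

# 60 frames of someone talking happily, with speech-shaped "glitch" frames mixed in
frames = ["happy"] * 10 + ["angry"] * 2 + ["happy"] * 10 + ["surprise"] * 3 + ["happy"] * 35
print(set(smooth_emotions(frames)))  # collapses to {'happy'}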
Training a machine to make you smile might only make you talk.
Pathological cases: say cheese over and over or else it gets the hose again.
A different issue with machine learning of face=>emotion is that many of the face databases in use are biased toward expressions from white Anglo-Saxon males.
Most don't consider the differences between fake and genuine smiles either.
Do any amount of research on cultural expressions of emotion and you'll find many of them are learned behaviors. I say this as a face-blind autistic who's trying to learn to read emotion myself.
Stuff like this https://www.forbes.com/sites/johnnosta/2018/01/04/google-glass-is-a-hit-for-children-with-autism/ "works" but it is limited to particular cultures.
If you dig into the sources of many of these face datasets, they often don't include people with facial abnormalities or emotive variation. There are all sorts of disorders that lead to physical issues with the muscles of the face. Stuff like Moebius syndrome, or other muscle issues.
You can test this yourself. Take stills of someone talking on YouTube and run them through Microsoft's face API: https://azure.microsoft.com/en-us/services/cognitive-services/face/
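Something like this is what I mean, give or take (a hedged sketch: the region, key, and image path are placeholders, and the endpoint/attribute names are from memory, so check the current docs before trusting any of it):

import requests

endpoint = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
headers = {
    "Ocp-Apim-Subscription-Key": "YOUR_KEY_HERE",
    "Content-Type": "application/octet-stream",
}
params = {"returnFaceAttributes": "emotion"}

# a still frame grabbed from a video of someone mid-sentence
with open("frame_grab.png", "rb") as f:
    resp = requests.post(endpoint, headers=headers, params=params, data=f.read())

for face in resp.json():
    # per-face scores across anger, contempt, disgust, fear,
    # happiness, neutral, sadness, surprise
    print(face["faceAttributes"]["emotion"])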
I often smile with only my eyes. The forced grin thing my culture does is lost on me.
Smiles themselves are fascinating from a psych perspective: http://www.businessinsider.com/a-neurologist-explains-how-to-spot-a-fake-smile-2016-7
But there is also a layer of communication and game theory involved if you really wanna dig in: http://users.ox.ac.uk/~kgroup/publications/pdf/scharlemann_etal_2001_jeconpsy_smile.pdf
Part of it might explain why Americans have a weird smile that creeps out tourists. https://www.youtube.com/watch?v=ojbJrdkPhGg
All sorts of weirdness seems to pop up around how cultures read emotions too. The bias toward attending to mouths in autism https://academic.oup.com/scan/article/1/3/194/2362950 has been particularly illuminating to me.
This isn't to say I'm against advancements in machine learning emotion detection. I am just not as excited about the current methods helping me overcome my issues.
"we urgently need tools to help individuals who have both autism and alexithymia understand their own and other peopleβs emotions."
https://spectrumnews.org/opinion/viewpoint/people-with-autism-can-read-emotions-feel-empathy/
I *am* afraid of what people are doing with these shit datasets. Do they think I'm angry all the time because I don't smile? I get that enough from humans, thank you very much.
Things I've been absolutely fascinated with: #governance and #governancestructures, #socialnetworks, #gametheory, #incentive systems and mechanism / #gamedesign, #decentralized anything, organizational #rituals, #advertising theory, #information #networks, #psychology, #sociobiology, #complexsystems theory, #emergence, #swarmcognition, #groupformation and #autopoiesis, risk reward processing, and the nature of #homeostasis in living systems.
Basically I wanna put sentient robotic ants on mars.
I wanna hack this thing so bad.
https://www.youtube.com/watch?v=uhyM-bhzFsI
I was obsessively playing N.E.R.O. in college. Training an NN through stages of skills was a fascinating idea.
I learned two things:
1) The simulation mode did not model other agents / had no collision detection for them. So the NNs developed artifacts and could not develop complex social interactions.
2) The game had a subtle memory leak. Over time the NN was systematically learning to avoid it: it started taking longer between crashes.
TL;DR: Utility functions do not exist in a vacuum.
None of this surprises me:
"In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the programβs memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score."
http://aiweirdness.com/post/172894792687/when-algorithms-surprise-us/
I'm following a bunch of publications in NLP/image recognition on the topic of trying to derive context to infer meaning through language. One of the core research scientists in this field is into AGI; his focus seems to be on using it as a way to generate a "common sense" proxy for making models of reality.
Me? I both intentionally and unintentionally write things with more than one meaning and act as if all detected inferences are possibly true.
Is this a picture of a bunny, or a duck?
yes.
These ideas are, afaict, the basis of Facebook's translation AI and image detection algorithms. So of course I am trying to come up with ways to describe images in obtuse ways that would still technically be true.
I found a great example of how this works: https://i.imgur.com/O9UDXeD.gifv
Things get massively more interesting if you consider homophones as a way to skirt censorship of language.
http://comp.social.gatech.edu/papers/icwsm15.algorithmically.hiruncharoenvate.pdf
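A toy version of the trick in English (the word list is invented; the paper does this properly for Chinese terms on Weibo): swap flagged words for same-sounding stand-ins that a human parses instantly but a naive keyword filter misses.

import random

HOMOPHONES = {
    "censor": ["sensor", "censer"],
    "banned": ["band"],
    "protest": ["pro-test"],
}

def skirt_filter(text, flagged=HOMOPHONES):
    # Replace each flagged word with a randomly chosen homophone stand-in.
    words = []
    for w in text.split():
        key = w.lower().strip(".,!?")
        words.append(random.choice(flagged[key]) if key in flagged else w)
    return " ".join(words)

print(skirt_filter("they censor every banned protest"))
# e.g. "they sensor every band pro-test"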
The next level of anonymity is to perhaps invent entire worgles as a sort of coded language.
"... the synesthetes outshone the controls in guessing the meanings of the unknown [worgles]."
http://www.scientificamerican.com/article/some-rules-of-language-are-wired-in-the-brain/
remumbar falks, speeling is arbrutarital and cromlunlent. Trenin un AI to stand under are worgles cane bee subertorgles.
The AI machine gods will understand us all thru the surveillance panopticon.
But we can perhaps hold out longer, adapting our exformation faster than research norms can keep up.
"This is the modern art; [...] vague has power [...] unpacked on the other end by people's hopes & wishes & dreams."
https://www.youtube.com/watch?v=F9zfCMp99-A
What is a rose? Would an adversarial AI built by the other side still spell out sweet?
"Weβre trying to build more than 1.5 billion AI agentsβone for every person"
They are literally creating models of us to predict what each person is going to click on to sell more ads.
Notable highlights for various bot systems described in overview of the starcraft AI tournaments: https://www.youtube.com/watch?v=J6Q0TIPDB-Y
A potential field system tuned via trial and error (a member of the team currently works at Google DeepMind) won, but never entered again (implied, perhaps due to open-source requirements); a rough sketch of the idea is below.
An emotional stance ("mood") memory-based system for tuning action-behavior potentials.
A build order and battle simulation prediction model.
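The potential-field sketch mentioned above, in miniature (coordinates and gains are made up; the real bots apparently tuned these by trial and error): goals attract, threats repel, and the unit just follows the summed force.

import math

def potential_field_move(unit, goal, threats, attract=1.0, repulse=50.0):
    # Toy potential-field steering. unit, goal, threats are (x, y) tuples;
    # the gains are the kind of knobs you'd tune by trial and error.
    # Returns a unit-length (dx, dy) direction to step along.
    fx = attract * (goal[0] - unit[0])
    fy = attract * (goal[1] - unit[1])
    for tx, ty in threats:
        dx, dy = unit[0] - tx, unit[1] - ty
        d2 = dx * dx + dy * dy + 1e-6
        # repulsion falls off with squared distance, pushing away from the threat
        fx += repulse * dx / d2
        fy += repulse * dy / d2
    norm = math.hypot(fx, fy) or 1.0
    return fx / norm, fy / norm

# a marine heads for the goal while skirting an enemy sitting just off its path
print(potential_field_move(unit=(0, 0), goal=(10, 0), threats=[(5, 1)]))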
Also, hobbyist tourney live stream!: https://www.twitch.tv/sscait
There are eyetracking glasses / monitor add-ons that use infrared to detect gaze.
( https://imotions.com/tobii-eye-tracking-glasses-2/ and https://www.theverge.com/circuitbreaker/2016/8/31/12718056/acer-predator-curved-gaming-monitor-tobii-eye-tracking-ifa-2016 )
Theoretically, we could build a model of a player's attention by having them play StarCraft 2 while using one of those, then create a gradient map in conjunction with the in-game replay system they've developed, treat it as a meta feature-layer, and reverse-engineer it in some fashion.
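The gradient-map half of that is almost trivial once you have gaze samples in screen coordinates (everything below, including the resolution, the sigma, and the fake samples, is invented for illustration): bin the fixations and blur them into an attention heatmap you could line up against replay frames.

import numpy as np
from scipy.ndimage import gaussian_filter

def attention_heatmap(gaze_points, width=1920, height=1080, sigma=40):
    # Bin (x, y) gaze samples into a 2D histogram, then blur it so each
    # fixation spreads out a bit (sigma is a stand-in for foveal spread).
    heat = np.zeros((height, width), dtype=float)
    for x, y in gaze_points:
        if 0 <= x < width and 0 <= y < height:
            heat[int(y), int(x)] += 1.0
    heat = gaussian_filter(heat, sigma=sigma)
    return heat / max(heat.max(), 1e-12)  # normalize to [0, 1]

# fake samples clustered roughly where the minimap sits (bottom-left of the screen)
rng = np.random.default_rng(0)
samples = rng.normal(loc=(200.0, 950.0), scale=30.0, size=(500, 2))
print(attention_heatmap(samples).shape)  # (1080, 1920)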
I've wanted to do this since I learned about the IR camera in the Wiimote.
Computational Fluidics using swarms of crabs? I bet I can do this with ants.
"Forget about boring old conventional computing stuff, the future of computer technology lies in crabs -- lots and lots of crabs. Researchers at Kobe University and the University of the West of England's Unconventional Computing Centre have discovered that properly herded crabs can signal the AND, OR and NOT arguments essential to computers, not to mention those crucial 1s and 0s."
https://www.engadget.com/2012/04/13/researchers-say-crab-based-computing-possible-lobsters-throw-up/
@ultimape I just went to check the L-Space page for Hex and found that apparently ant computing is already defictionalised: http://wiki.lspace.org/mediawiki/Hex#Annotations
@GardenOfForkingPaths Interesting that it speculates possible influence on Marco Dorigo's early work on ants. His first mention in publications seems to be from 1991, and most of his early work was on distributed systems for robotics, as I came to understand it. Given the timeline it is possible there were some cross-influences - but I suspect they were pulling from similar inspirations.
I wonder what kind of information Terry Pratchett was consuming in 1987.
@ultimape are crabs systems Turing complete? also, relating to other toots, can they be used to make markets efficient?
@charlyblack I think slimemold could take a crack at efficient markets: http://www.bath.ac.uk/announcements/microbes-are-savvy-investors-when-contributing-to-the-common-good/
They have been shown to solve traveling-salesman-style problems even with just emergent behavior: https://arxiv.org/abs/1303.4969
Don't know about crabs or ants yet.
@ultimape :D the step may be short.. thanks for the paper link, all glory to the Shrinking Blob!
Code or be coded.
Program or be programmed.
https://www.youtube.com/watch?v=a2AJGz11Jzo
On fluency in programming as a form of language; all the better to express yourself.
https://www.ted.com/talks/mitch_resnick_let_s_teach_kids_to_code
Code poet.
What of AIs learning how to write their own code... ?
Whoa, a robotic vacuum with a built-in SLAM navigation system for less than $400?!
https://www.amazon.com/Original-Xiaomi-Robot-Vacuum-VERSION-LDS/dp/B077D649M7/