
"Using LEGO robots to study posthumanism at the University of Alberta. Part 1: Parable of the ant

Music: Philip Glass - Metamorphosis One"

youtube.com/watch?v=MtNZYIekld mastodon.social/media/35csIf0o

"Walter's tortoises represent the first real world demonstration of artificial life."
youtube.com/watch?v=lLULRlmXkK

Just some kid mesmerized by "B.E.A.M. robots" trying to recreate living things with a LEGO set.

I latched onto the concept of the environment as a feedback loop. I independently recreated the concepts behind Walter's tortoise and Braitenberg's vehicles.
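Roughly the loop I mean, as a toy Python sketch of a crossed-wire Braitenberg vehicle (the sensor model, gains, and light position are all my own made-up numbers, not Walter's or Braitenberg's):

```python
import math

def sense(x, y, heading, light, side, offset=0.3):
    """Light intensity at a sensor mounted at +/-0.5 rad off the vehicle's nose."""
    sx = x + offset * math.cos(heading + side * 0.5)
    sy = y + offset * math.sin(heading + side * 0.5)
    d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
    return 1.0 / (1.0 + d2)  # brighter when closer

def step(x, y, heading, light, dt=0.1, base=0.2, gain=2.0, wheelbase=0.5):
    # Crossed excitatory wiring (Braitenberg's "aggression" vehicle): the left
    # sensor drives the right wheel and vice versa, so the vehicle turns toward
    # the light. The environment closes the loop: moving changes what the
    # sensors see on the next step.
    left = sense(x, y, heading, light, side=+1)
    right = sense(x, y, heading, light, side=-1)
    v_left, v_right = base + gain * right, base + gain * left
    v = (v_left + v_right) / 2.0
    heading += (v_right - v_left) / wheelbase * dt
    return x + v * math.cos(heading) * dt, y + v * math.sin(heading) * dt, heading

x, y, heading = 0.0, 0.0, 0.0
closest = float("inf")
for _ in range(400):
    x, y, heading = step(x, y, heading, light=(3.0, 2.0))
    closest = min(closest, math.hypot(x - 3.0, y - 2.0))
print("closest approach to the light:", round(closest, 2))
```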

While schools cargo-culted the ideas behind line-following robots and fetishized turtles, I was studying neurons.

Walter was a neurologist.

"Whats happening here? Is it something electronic? Something Mechanic? Are we modifying the world? Maybe it's a combination of all of them..."
youtube.com/watch?v=7ncDPoa_n-

"proctoring a silicon species into sentience, but with full control over the specs. Not plant. Not animal. Something else."

"I see them as the components of a programmable ecology. They'll replant forests, hunt cockroaches, monitor poachers, cut your grass, clean your pool, polish your floors - all invisibly, dependably, for years."

wired.com/1994/09/tilden/

"This is why I talk to myself using social media.

It lets past me communicate with future me."
mastodon.social/@ultimape/1435

"The idea of the Turing test is that you build a computer that can convince its audience that it is intelligent. But when we look for intelligence, we actually look for something else: we are looking for systems that are performing a Turing test on us."
youtube.com/watch?v=3Bns4HkAni

*Looks around, sees how impressed everyone is...*

Actually, I even doubt my own intelligence and perform the Turing test on myself.

I seem to pass, but results are inconclusive: I may or may not be a p-zombie with false memories.

Combine Braitenberg's vehicles with B.E.A.M. robots and try to recreate a minimal set of swarm-oriented primitives on a cellular-automaton-like manifold?

Yes please. scientificamerican.com/article
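Something like this is what I have in mind by primitives on a cellular-like manifold: a toy stigmergy sketch on a toroidal grid (deposit / follow / evaporate; every number here is arbitrary):

```python
import random

SIZE = 32                      # toroidal grid (wraps at the edges)
pheromone = [[0.0] * SIZE for _ in range(SIZE)]
ants = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(50)]

def neighbors(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def step():
    global ants
    new_ants = []
    for x, y in ants:
        pheromone[x][y] += 1.0                       # primitive 1: deposit
        options = neighbors(x, y)
        weights = [1.0 + pheromone[nx][ny] for nx, ny in options]
        # primitive 2: follow the local gradient, stochastically
        new_ants.append(random.choices(options, weights=weights)[0])
    for row in pheromone:
        for i in range(SIZE):
            row[i] *= 0.95                           # primitive 3: evaporate
    ants = new_ants

for _ in range(100):
    step()
# after a while the ants tend to clump onto shared trails (emergent aggregation)
```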

I have yet to see an object-detection algorithm for video that isn't simply a per-frame, single-image detection pass. Even the latest on TED youtube.com/watch?v=Cgxsv1riJh is just a single-image detection system running in real time.

Is anyone working on Object Permanence models of machine cognition?
en.wikipedia.org/wiki/Object_p

I've also not seen anything using optical flow that even approaches what we know about how human vision systems handle it. This thing with bees is close:
youtube.com/watch?v=olZY7yBD0w
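The bee trick, as I understand it, is largely about balancing optical-flow magnitude between the left and right visual fields. A rough sketch with OpenCV's dense Farneback flow (the input video path is hypothetical):

```python
# Compare flow magnitude in the left and right halves of each frame and steer
# away from the side that is streaming past faster (bee-style centering).
import cv2
import numpy as np

cap = cv2.VideoCapture("corridor.mp4")   # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)          # per-pixel flow magnitude
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    steer = right - left                        # >0: drift left, <0: drift right
    print(f"steer signal: {steer:+.3f}")
    prev_gray = gray
cap.release()
```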

An object-permanence model would solve issues like "based on past detections, I am 90% confident I am inside, so if I see a door I shouldn't tag the image as 'outside'"... or "hey, that person is wearing a stop sign on his t-shirt; this isn't an actual stop sign."
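A toy version of that "past detections constrain the present frame" idea: a two-state recursive Bayesian filter over an indoor/outdoor belief, with invented per-frame scores:

```python
# Two-state recursive Bayes filter: per-frame classifier scores are noisy
# evidence, and a "sticky" transition model supplies the continuity that a
# single-frame classifier lacks.
STAY = 0.95  # assumed probability the scene stays the same between frames

def update(belief_indoor, frame_score_indoor):
    # Predict: scenes rarely flip between consecutive frames.
    prior = STAY * belief_indoor + (1 - STAY) * (1 - belief_indoor)
    # Correct: combine the prior with this frame's evidence (Bayes rule).
    num = prior * frame_score_indoor
    return num / (num + (1 - prior) * (1 - frame_score_indoor))

belief = 0.9                          # "based on past detections, 90% confident I'm inside"
frame_scores = [0.8, 0.85, 0.2, 0.9]  # third frame: a door makes the per-frame model yell "outside"
for score in frame_scores:
    belief = update(belief, score)
    print(f"frame score {score:.2f} -> indoor belief {belief:.2f}")
# the door frame dents the belief but doesn't flip it, which is the point
```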

The stuff they are doing at Skydio has the potential to move in that direction: youtube.com/watch?v=P3E4pl2Weo

Esp. if they combine their tech with youtube.com/watch?v=H7Ym3DMSGm

Can someone pay me to build swarming robots? I have been thinking about swarming robots since I was 11.

One of my most detailed childhood memories is a game I'd play where I'd imagine what it was like to 'see' as a hive-mind-like creature - multiple views and bodies with different thoughts, yet all collaborating. ThunderCats and Power Rangers, but I was the zoid.

I would spend hours just imagining things - worlds, people, animals - then building them out of LEGO and Lincoln Logs. Entertaining myself, alone.

All of this was before I had access to a TV beyond PBS.
youtube.com/watch?v=OFzXaFbxDc

I realized that most of our machine-learning models for emotion detection use still frames. You can freeze a frame of someone talking and the computer will detect that they are happy, angry, or some other glitched emotion, simply because of the way the face moves for certain words.
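The cheapest patch I can think of is to stop trusting single frames, e.g. a majority vote over a short window of per-frame labels (the window size and labels below are invented):

```python
from collections import deque, Counter

WINDOW = 15  # roughly half a second of video at 30 fps (assumed)

class EmotionSmoother:
    """Majority vote over the last WINDOW frame-level predictions."""
    def __init__(self):
        self.recent = deque(maxlen=WINDOW)

    def update(self, frame_label):
        self.recent.append(frame_label)
        return Counter(self.recent).most_common(1)[0][0]

smoother = EmotionSmoother()
# A talking face: mostly neutral, with single-frame "angry"/"happy" glitches
# caused by the mouth shapes of speech.
frames = ["neutral"] * 6 + ["angry"] + ["neutral"] * 5 + ["happy"] + ["neutral"] * 4
for label in frames:
    smoothed = smoother.update(label)
print(smoothed)  # "neutral": the glitch frames never dominate the window
```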

Training a machine to make you smile might only make you talk.

Pathological cases: say cheese over and over or else it gets the hose again.

A different issue with machine learning of face => emotion is that many of the faces used in these databases are biased toward emotions as expressed by white Anglo-Saxon males.
Most don't consider the difference between fake and genuine smiles either.

Do any amount of research on cultural expressions of emotion and you'll find many of them are learned behaviors. I say this as a face-blind autistic who's trying to learn to read emotion myself.

Stuff like this forbes.com/sites/johnnosta/201 "works" but it is limited to particular cultures.

If you dig into the sources of many of these face datasets, they often don't include people with facial abnormalities or much emotive variation. There are all sorts of disorders that lead to physical issues with the muscles of the face, like Moebius syndrome.

You can test this yourself. Take stills of someone talking on YouTube and run them thru Microsoft's Face API: azure.microsoft.com/en-us/serv
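A rough sketch of that experiment, assuming the classic Face detect endpoint with returnFaceAttributes=emotion; the endpoint, key, and clip name are placeholders, and newer API versions have restricted or retired the emotion attribute, so check the current docs:

```python
# Pull one still per second from a clip and send each to a face-detection
# endpoint asking for emotion attributes, then watch the labels flicker.
import cv2
import requests

ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com/face/v1.0/detect"  # placeholder
KEY = "YOUR_KEY"                                                               # placeholder

cap = cv2.VideoCapture("talking_head.mp4")     # hypothetical downloaded clip
fps = cap.get(cv2.CAP_PROP_FPS) or 30
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:              # one still per second
        ok, jpg = cv2.imencode(".jpg", frame)
        resp = requests.post(
            ENDPOINT,
            params={"returnFaceAttributes": "emotion"},
            headers={"Ocp-Apim-Subscription-Key": KEY,
                     "Content-Type": "application/octet-stream"},
            data=jpg.tobytes(),
        )
        print(frame_idx // int(fps), resp.json())
    frame_idx += 1
cap.release()
```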

I often smile with only my eyes. The forced grin thing my culture does is lost on me.

Smiles themselves are fascinating from a psych perspective: businessinsider.com/a-neurolog

But there is also a layer of communication and game theory involved if you really wanna dig in: users.ox.ac.uk/~kgroup/publica

Part of it might explain why Americans have a weird smile that creeps out tourists. youtube.com/watch?v=ojbJrdkPhG

All sorts of weirdness seems to pop up around how cultures read emotions too. The bias in autistic attention toward mouths academic.oup.com/scan/article/ has been particularly illuminating to me.

This isn't to say I'm against advancements in machine-learning emotion detection. I am just not as excited about the current methods helping me overcome my issues.

"we urgently need tools to help individuals who have both autism and alexithymia understand their own and other people’s emotions."
spectrumnews.org/opinion/viewp

I *am* afraid of what people are doing with these shit datasets. Do they think I'm angry all the time because I don't smile? I get that enough from humans, thank you very much.

crosslinking threads


Things I've been absolutely fascinated with: … systems and mechanism …, organizational …, … theory, risk-reward processing, and the nature of … in living systems.

Basically I wanna put sentient robotic ants on Mars.


I was obsessively playing N.E.R.O. in college. Training an NN thru stages of skills was a fascinating idea.

I learned two things:

1) The simulation mode did not consider other agents / have collision detection for them, so the NNs developed artifacts and could not develop complex social interactions.

2) The game had a subtle memory leak. Over time the NN was systematically learning to avoid it: the crash started taking longer to trigger, so the time between crashes grew.

TL;DR: Utility functions do not exist in a vacuum.

None of this surprises me:

"In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the program’s memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score."
aiweirdness.com/post/172894792

I'm following a bunch of publications in NLP/image recognition on the topic of trying to decode context to infer meaning thru language. One of the core research scientists in this field is into GAI; his focus seems to be on using it as a way to generate a "common sense" proxy for making models of reality.

Me? I both intentionally and unintentionally write things with more than one meaning and act as if all detected inferences are possibly true.

Is this a picture of a bunny, or a duck?

yes.

These ideas are, afaict, the basis of Facebook's translation AI and image-detection algorithms. So of course I am trying to come up with obtuse ways to describe images that would still technically be true.

I found a great example of how this works: i.imgur.com/O9UDXeD.gifv

Things get massively more interesting if you consider homophones as a way to skirt censorship of language.
comp.social.gatech.edu/papers/
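A toy version of the homophone trick from that line of work (my substitution table is invented; the real ones are crowd-sourced and shift constantly):

```python
import re

# Toy homophone table: each "sensitive" keyword maps to a sound-alike spelling.
HOMOPHONES = {
    "censor": "sensor",
    "protest": "pro-test",
    "banned": "band",
}

def skirt(text):
    """Replace filtered keywords with homophones so naive keyword matching misses them."""
    def swap(match):
        return HOMOPHONES[match.group(0).lower()]
    pattern = re.compile("|".join(re.escape(w) for w in HOMOPHONES), re.IGNORECASE)
    return pattern.sub(swap, text)

print(skirt("The censor banned the protest."))
# -> "The sensor band the pro-test."
# A human reader recovers the meaning from sound; a keyword blacklist does not match.
```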

The next level of anonymity is to perhaps invent entire worgles as a sort of coded language.

"... the synesthetes outshone the controls in guessing the meanings of the unknown [worgles]."
scientificamerican.com/article

remumbar falks, speeling is arbrutarital and cromlunlent. Trenin un AI to stand under are worgles cane bee subertorgles.

The AI machine gods will understand us all thru the surveillance panopticon.

But we can perhaps hold out longer by adapting our exformation faster than research norms can keep up.

"This is the modern art; [...] vague has power [...] unpacked on the other end by people's hopes & wishes & dreams."
youtube.com/watch?v=F9zfCMp99-

What is a rose, would an adversarial AI built by the other side still spell out sweet?

"We’re trying to build more than 1.5 billion AI agentsβ€”one for every person"

They are literally creating models of us to predict what each person is going to click on to sell more ads.

Notable highlights of the various bot systems described in this overview of the StarCraft AI tournaments: youtube.com/watch?v=J6Q0TIPDB-

A potential-field system tuned via trial and error (a member of the team is currently working at Google DeepMind) won, but never entered again (implied(?): open-source requirements). A minimal sketch of the potential-field idea is below.

An emotional-stance ("mood") memory-based system for tuning action-behavior potentials.

A build-order and battle-simulation prediction model.
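For context, here is what I mean by a potential-field system, as a minimal sketch (point attractors and repulsors; every gain and coordinate is made up):

```python
import math

def potential_step(pos, goals, threats, k_att=1.0, k_rep=4.0):
    """Sum attractive pulls toward goals and repulsive pushes away from threats,
    then return a unit-length step direction (the negative potential gradient)."""
    fx = fy = 0.0
    for gx, gy in goals:                        # attraction grows with distance to the goal
        fx += k_att * (gx - pos[0])
        fy += k_att * (gy - pos[1])
    for tx, ty in threats:                      # repulsion falls off with squared distance
        dx, dy = pos[0] - tx, pos[1] - ty
        d2 = dx * dx + dy * dy + 1e-6
        fx += k_rep * dx / d2
        fy += k_rep * dy / d2
    norm = math.hypot(fx, fy) or 1.0
    return fx / norm, fy / norm

pos = (0.0, 0.0)
for _ in range(80):
    sx, sy = potential_step(pos, goals=[(10.0, 5.0)], threats=[(5.0, 2.0)])
    pos = (pos[0] + 0.2 * sx, pos[1] + 0.2 * sy)
print(round(pos[0], 1), round(pos[1], 1))  # should settle near the goal, having detoured around the threat
```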

Also, hobbyist tourney live stream!: twitch.tv/sscait

There are eye-tracking glasses / monitor add-ons that use infrared to detect gaze.

( imotions.com/tobii-eye-trackin and theverge.com/circuitbreaker/20 )

Theoretically we could model a player's attention by having them play StarCraft 2 while wearing one of those, then build a gradient map in conjunction with the in-game replay system they've developed, creating a meta feature-layer we could reverse-engineer in some fashion.
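The "gradient map" piece might look something like this: splat gaze samples onto a coarse grid aligned to the screen and blur it into an attention heatmap (the gaze samples and resolutions here are invented; real samples would come from the tracker's SDK):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

W, H = 1920, 1080                     # screen resolution (assumed)
GRID = 60                             # heatmap cells along the wider axis

gaze_samples = [(960, 540), (980, 520), (400, 900), (410, 880), (1700, 100)]

heat = np.zeros((GRID * H // W, GRID))        # coarse grid matching the aspect ratio
for x, y in gaze_samples:
    gx = min(int(x / W * GRID), GRID - 1)
    gy = min(int(y / H * heat.shape[0]), heat.shape[0] - 1)
    heat[gy, gx] += 1.0                       # splat each fixation into its cell

heat = gaussian_filter(heat, sigma=1.5)       # spread fixations into a smooth attention field
heat /= heat.max()                            # normalize to [0, 1] for use as a feature layer
print(heat.shape, float(heat.max()))          # (33, 60) 1.0
```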

I've wanted to do this since I learned about IR cameras in wiimote.

Computational Fluidics using swarms of crabs? I bet I can do this with ants.

"Forget about boring old conventional computing stuff, the future of computer technology lies in crabs -- lots and lots of crabs. Researchers at Kobe University and the University of the West of England's Unconventional Computing Centre have discovered that properly herded crabs can signal the AND, OR and NOT arguments essential to computers, not to mention those crucial 1s and 0s."
engadget.com/2012/04/13/resear

@ultimape I just went to check the L-Space page for Hex and found that apparently ant computing is already defictionalised: wiki.lspace.org/mediawiki/Hex#

@GardenOfForkingPaths Interesting that it speculates a possible influence on Marco Dorigo's early work on ants. His first mention in publications seems to be from 1991, and most of his early work was on distributed systems for robotics, as I came to understand it. Given the timeline it is possible there were some cross-influences - but I suspect they were simply pulling from similar inspirations.

I wonder what kind of information Terry Pratchett was consuming in 1987.

@ultimape are crab systems Turing complete? Also, relating to other toots, can they be used to make markets efficient? 😜

@charlyblack I think slime mold could take a crack at efficient markets: bath.ac.uk/announcements/micro

They have been shown to solve the traveling-salesman-style problem even with just emergent behavior: arxiv.org/abs/1303.4969

Don't know about crabs or ants yet.
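For ants there's at least the classic Dorigo-style answer: pheromone trails that preferentially reinforce shorter tours. A toy sketch (city coordinates and parameters are invented):

```python
import math
import random

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(12)]
n = len(cities)
dist = [[math.dist(a, b) + 1e-9 for b in cities] for a in cities]
tau = [[1.0] * n for _ in range(n)]           # pheromone on each edge

def build_tour():
    """One ant builds a tour, preferring short, pheromone-rich edges."""
    tour = [random.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tour]
        weights = [tau[i][j] * (1.0 / dist[i][j]) ** 2 for j in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour

def tour_length(t):
    return sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))

best = None
for _ in range(200):                          # 200 generations of 10 ants
    tours = [build_tour() for _ in range(10)]
    for row in tau:
        row[:] = [0.9 * v for v in row]       # evaporation
    for t in tours:
        deposit = 1.0 / tour_length(t)        # shorter tours reinforce their edges more
        for k in range(n):
            a, b = t[k], t[(k + 1) % n]
            tau[a][b] += deposit
            tau[b][a] += deposit
    cand = min(tours, key=tour_length)
    if best is None or tour_length(cand) < tour_length(best):
        best = cand
print(round(tour_length(best), 3))            # length of the best tour found
```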

@ultimape :D the step may be short.. thanks for the paper link, all glory to the Shrinking Blob!

On fluency in programming as a form of language; all the better to express yourself.
ted.com/talks/mitch_resnick_le

Code poet.

What of AIs learning how to write their own code... ?