
"Bad news," the ship says. "A rock hit one of my solar arrays. It broke off. We are now using more energy than we bring in."
That explains that loud crunch.
"Will we make it back to Earth?"
The ship is silent a long time.
"One of us could."
"Wait, what?"
The ship is silent.

@MicroSFF Prolly just serendipity... or maybe you were inspired by this article from last week I only saw this morning?
John Feffer: Avoiding the Robot Apocalypse
tomdispatch.com/artificial-int

@MicroSFF I think this might very well be the first microfic of yours I've read that paints a slightly more negative view of AI, not as just our friend but as an (in some senses of the word) living thing with self-preservation.
Of course, the twist here could be that there's actually a life pod on the ship and the "one" is the human. But with that not being specified, it seems to be the less-likely option.

@Mayana @MicroSFF I read it as "AI just self-deleted to save enough energy for human buddy"

@seachaint Wouldn't life support take up much more energy than the running of a program? I suppose the silence from the ship could imply that, yes, but ... :ms_shrug:
I didn't read it as the AI threatening to kill the human, or even necessarily doing so. More as ... pointing out the hard truth, I suppose.
@MicroSFF

@Mayana @MicroSFF is that how you read it? I read the ship going quiet as shutting itself down to preserve power. Sad story. Heroic sacrifice.

@dhfir And you're the second one to do that, so perhaps it's just me that has an overly dark imagination. Not the first time that has happened with these stories.
In my defense, I didn't read it as the ship being evil. Just ... calculating how much energy each thing would require, probably realizing life support needs the most, and pointing that out. Probably, judging by the silence afterwards, even feeling guilty about it -- but wanting to live, so at least someone can.
It seems we'll need a sequel from @MicroSFF to resolve this dilemma. But while it might be considered more wrong by some, if it is the human that has to die -- if they *agree* to save their friend -- wouldn't that be just as heroic?

@Mayana @MicroSFF maybe?
But like, as an autist, I practically grew up with a connection to sci-fi ai.
Many have commented that heroic ai often resemble autistic/agender/aro/ace individuals. Some see this as problematic, and I suppose there is merit to that.
Me, I never interacted with popular culture much, but I'm rather predisposed to self-insert myself into ai characters.
And if I was the ship, the human sacrificing themselves to save me, quite simply, would not be an option. I appreciate the sentiment, but I'm afraid I can't let you do that, Dave.
Y'know?

@dhfir That makes sense. If you identify with one of the characters and are of the heroic bent, of course the opposite scenario would seem wrong.
I love reading about AI in science fiction, too. But I especially love depictions like in the games The Fall or Code 7 (both of which you likely haven't played, admittedly), where the situation is a little more complicated. Where the AI isn't our friendly buddy who would never, ever harm us, but isn't murdering everyone because that would be the most effective solution, or just for the evulz, either.
Actually, HAL is a pretty good example, if you think a bit deeper about why he acted in the ways he did. Trouble with conflicting orders. Wish for survival.
@MicroSFF
1/2

@dhfir Your point about AI often standing in for ace/agender/autistic/etc. people is a great one. Not something I had considered, but it could be part of why the idea of "It must be the AI sacrificing themself, not the human!" rubs me the wrong way. If sapient AI ever arrive, then even if they turn out to be good and not the end of humanity, they will be superior to us in most ways, if not all. So constantly treating them as more disposable feels ... wait a minute.
If the ship AI is using up that much power, they just need to turn themself off, not delete. A couple extra inactive files on the SSD won't use up any extra power. This doesn't even *have* to be a sacrifice.
Ah.
@MicroSFF
2/2

@Mayana @MicroSFF a fair point, at least if you assume the AI works on silicon computing as we know it.
Tho, if it's a quantum computer, my understanding is those need cooling, even more than silicon ones, as they currently simply will not work outside of VERY cold temperatures.
If whatever it ends up using for storage (this IS an entire person we're talking about here) follows similar principles, this suddenly makes more sense.

@dhfir Hmm, true. But the computer would still have to run, since that's how piloting will be done in the future, I imagine. So ... wait, is the AI even *allowed* to turn themself off/delete themself? Would the human be able to get home without them? There probably is no separate AI for autopiloting, so we can only assume doing it manually is still an option.
I am not well-educated on how spaceflight works today, but I assume even now a lot of the work is not done by the astronauts themselves. Supposing it is the AI that made the sacrifice, let's hope the human's skills aren't rusty ...
@MicroSFF
Edit: Sorry, fixing typos.

@Mayana
SSDs are bad at being unpowered and storing data for a long time; afaik you're lucky if the data is readable after being unpowered for more than a few months.
better store all the data on an HDD (or engrave it into nickel plates) if the way home is long ;)

rosettaproject.org/blog/02008/

@dhfir @MicroSFF

@glowl Wait, how did you know what my backup strategy is? I feel attacked!
Jokes aside, it did take me a moment to realize what you were talking about. Thank you for sharing that information! I'll just ... hope nothing goes wrong over here ...
