Rod Hilton

Google revealed the two weirdest of Gmail's new AI features as a pair.

1. Give a short summary and Google will draft an e-mail for you based on it. You can even click "elaborate" and it will make the e-mail longer.

2. When opening an e-mail, Gmail can summarize the entire thing for you so you don't have to read all of it.

Does everyone realize how fucking bizarre this is?

Both people in the conversation want to work with directness and brevity, and Google is doing textual steganography in the middle.

@rodhilton@mastodon.social It's made for executives to have their EAs write everything.

@rodhilton I know people who want these features and I hate them.

@rodhilton Wasn't there some kind of cartoon making that point earlier this year?

@raphael_fl @rodhilton What's the difference between satire and reality? About six months.

@rodhilton@mastodon.social They've made features that just add and remove their own bullshit? It's kind of amazing that people can spend their days making stuff that just cancels out the other stuff they're making.

@kichae @rodhilton If Google is smart, they'll just save the original prompt and give that back to the second person ;D

@shadowwwind @kichae @rodhilton Nah, that requires storage space. Calculation is cheaper.

@kichae @rodhilton Definition: Balanced AI — the amount of bullshit produced by the generating AI is equal to the amount of bullshit destroyed by the consuming AI. You can now hand me my #IEEE Hamming medal.

@kichae @rodhilton They've made a feature that adds bullshit, and they've made another feature that replaces it with shorter bullshit. LLMs can only add BS, not remove it.

@rodhilton Plot twist: the summary of the enhanced email is the original prompt used to create it.

@Dave3307 @rodhilton ... and at some point the intermediate text won't be parsable by humans anymore, because that's no longer relevant to its success.

@Dave3307 @rodhilton if that were guaranteed, these features would only waste CPU cycles, storage, and bandwidth.
Alas, far from it.

@Dave3307 @rodhilton exactly. They don't need “elaborate”. They need “make this half-brained slapdash cavespeak less annoying”.

@rodhilton This is just the opposite of a compression algorithm

@rodhilton @Gte the flood of communication failures this is going to unleash is mind-boggling.

@bitmaker @rodhilton @Gte a non-zero number of companies will fail due to something like this messing up in a big way

@leoncowle @rodhilton

The bullet point appears to say the same as the original unless you inspect it really carefully.

@leoncowle @rodhilton that's how the book industry already works

@leoncowle @rodhilton and here I thought you were supposed to compress data before sending it down a link, not inflate it

@leoncowle @rodhilton AI is going to understand our emotions very well.

@leoncowle @rodhilton Was also going to post this.

The biggest change in the world is the speed with which reality imitates art.

@rodhilton @Gte well let me tell you, people don't actually want directness; they think it's rude.

Google is permitting them to carry on this charade, which is the real sin.

@jason @rodhilton @Gte That weird tendency to want indirectness at all costs is really quite obnoxious.

@rodhilton Google having a conversation with Gmail

@rodhilton @Gte

It’s a beautiful encapsulation of Silicon Valley in general.

“Should we address this minor issue of social norms in the workplace?

I have a solution that'll only require 2 globally distributed SRE teams, 20 FTE SWEs, and 1/3rd of our org's annual compute capacity!”

@rodhilton In similar news, Google have a feature where your phone can answer your calls for you, and Duplex, which could phone someone up on your behalf. They wrote two AIs that could converse *in spoken English*, complete with ums and ahs.

@andrewt Here's another one.

Introducing tools to allow AI-based generation of images, as well as tools to detect AI-generated images

This is the very definition of selling both the poison and the cure.

techcrunch.com/2023/05/10/goog


@rodhilton People have been doing this since before Google. First automate the process, then maybe average people will see how absurd this is and learn to be more direct. And if they don't, at least now it wastes less time :)

@rodhilton This is why I just send one line emails to begin with

@rodhilton @Gte I'm more convinced than ever that once humanity's time on this planet is over the only thing left will be a single super powerful AI that is optimized to navigate complex phone trees, and an even more powerful AI that creates more and more elaborate phone trees.

@garyowen@hachyderm.io @rodhilton@mastodon.social @Gte@mastodon.social

Paperclips. It's all about the paperclips.

https://www.decisionproblem.com/paperclips/index2.html

Play the web version of the #game and you'll see why this is relevant to the thread.

@rodhilton THANK YOU I feel like I've been the only person thinking this is bonkers.

@rodhilton Proposal: use these two, but the other way round, on normal emails as a form of lossless data compression.

@stecks @rodhilton I seriously doubt it’d be anywhere close to lossless. It’d be fascinating to see how many times you could go back and forth before it drifts away from whatever the point was

@ttyRazor @stecks @rodhilton
We could test this now using two different LLMs. Compressing-expanding song lyrics should yield some interesting creations
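
A minimal sketch of that round-trip test, assuming the `openai` Python SDK (openai>=1.0) with an API key in the environment; the model name, prompts, and sample text are purely illustrative placeholders, not anything Gmail or Bing actually uses:

```python
# Hedged sketch: repeatedly "compress" (summarize) and "expand" (elaborate)
# a piece of text with an LLM and watch how far the summaries drift.
# Assumes the openai Python SDK (>= 1.0) and OPENAI_API_KEY set in the
# environment; model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name


def ask(instruction: str, text: str) -> str:
    """Send one instruction plus the text and return the model's reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return (response.choices[0].message.content or "").strip()


def round_trip(text: str, cycles: int = 5) -> list[str]:
    """Alternate summarize -> elaborate, keeping each summary so drift is visible."""
    summaries = []
    current = text
    for _ in range(cycles):
        summary = ask("Summarize this in one short sentence.", current)
        summaries.append(summary)
        current = ask("Write a full, polite email elaborating on this point.", summary)
    return summaries


if __name__ == "__main__":
    original = "Can we move Thursday's sync to 3pm? I have a dentist appointment."
    for i, summary in enumerate(round_trip(original), start=1):
        print(f"cycle {i}: {summary}")
```

Comparing the first and last summaries (or using two different models for the two directions, as suggested above) would give a rough read on how lossy the loop really is.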

@ttyRazor @stecks @rodhilton

Trouble with trying this with BingChat is that it recognizes from which song the lyrics originated, and regurgitates the memorized summary for the entire song.

Yet, asking BingChat to write a song using its own summary as the prompt yields something that is altogether very different - at least it won't attract a copyright lawsuit from Ed Sheeran

@ttyRazor @stecks @rodhilton

Tried this with the abstract of "An aperiodic monotile" by David Smith et al.
arxiv.org/abs/2303.10798

Bing Chat's one-sentence summary:
"a solution to a longstanding problem of finding an aperiodic monotile or “Einstein” by exhibiting a continuum of combinatorially equivalent aperiodic polygons."

Then asked for an informational paragraph with that as prompt:

@rodhilton A decoder ring...but there's no code and you have to wear it anyway.🤔

@rodhilton @designatednerd This is starting to feel like encryption without keys. Convert a sentence to a PhD thesis and then back. 😅

@rodhilton The wonderful thing is: once 90% of emails are preprocessed by AI and not by humans, there is no control anymore. Thus the "extract most important point" feature will become independent of what human readers would do.

@rodhilton Google will literally make money from you coming and going.

@rodhilton
not only that, it's lossy steganography.

@rodhilton just starting the countdown until we get the first employment discrimination settlement that includes a generated email.

The writer, rightfully claiming that they did not, in fact, type the words. The reader, displaying the AI-shortened message, based on some admin configuration.

@gatesvp yeah I think that the whole AI hype cycle in general is going to really brush up against issues of responsibility in a hurry.

Your AI-powered self-driving car hits someone. Are you responsible? You just enabled a feature of the car. Is the AI responsible? It's an algorithm. Is the company who built it? No, they weren't there.

If your AI chooses who to fire and it turns out to be a bunch of protected class employees, who gets sued? The employer or the builder of LayoffBot?

Maybe there'll be responsibility issues, but I don't think those are very good examples.

So with the car, the law regarding malpractice and selling defective products is pretty well established, and I see no reason why it wouldn't continue to work as expected. That is, the manufacturer would be held responsible for selling a defective product. They would be responsible for sourcing reliable components, and I don't see a reason why a software component would be any different from a hardware component in that regard.

With AI based firings, we've already got pretty clear legal precedents for firing people based on performance metrics. How we apply those metrics isn't very important, so long as a protected characteristic isn't one of those metrics.

Hell, AI based HR will probably reduce liability because it'd be easier to objectively prove that the input data didn't include any protected characteristics.

@Marvin @rodhilton

I think these are really good examples, and that you might not be far enough into the details.

For example, a self-driving car will not be involved in zero accidents. Zero is not the metric of success for mechanical things. Or even for software.

And we don't typically hold companies liable for things that are operating within regulatory specification.

The HR stuff is even messier, because it actually highlights a giant regulatory problem... /1

I wasn't saying zero crashes. Just that a legal standard exists and it's flexible enough to accommodate AI.

@Marvin @rodhilton

Legal standards are invented by humans like you and me. If you were tasked with creating such a standard, what would you want to see in it?

How would you write this to be "inclusive of AI"?

The legal standard about shipping defective products is exactly what I'd create.