
I really like the approach of @themarkup of covering actual examples of immediate harms of Generative ML. Some of the examples mentioned:

- Trump hugging Fauci (Generated)
- Biden future illustration (Generated)
- Biden robocalls to ask voters not to vote (Generated)
- Trump claiming a bad picture was generated (actual picture)
- Taylor Swift (endorsement in response to a generated image)

2/🧵

I do disagree with @sisiwei that the generated images of Trump hugging Black voters didn't matter because they were diffused as debunks - any diffusion of a propaganda image benefits it, especially for more brazen lies, because a lot of people will see it without paying enough attention to notice that it is a debunk.

Similarly, I disagree with MIT's take that AI's impact on elections is being overblown. Absence of detection is not proof of absence, and critical voters are likely tightly targeted.

3/🧵

Similarly, the apparent absence of political content on TikTok is not an actual absence of political content: politics is wrapped into the values being promoted, rather than appearing as explicit political ads.

Similarly, ads that people realize are targeted at them are bad ads.

4/🧵

Second talk, by Anushka Jain of the Digital Futures Lab, India, on AI-based disinformation during the Indian elections.

Despite major fears of AI-driven disinformation, and despite AI disinformation being present, it did not disrupt the democratic process in the end.

However, in the absence of regulation, AI companies act without conscience, and generative AI is used for disparaging and non-consensual imagery.

5/🧵

Next, Chine Labbé of NewsGuard.

Starting off with a really interesting case of a single person orchestrating >100 newspapers posing as local ones, thanks to LLM-based news rewriting.

The loop: deepfake planted "evidence" -> generated articles -> AI-generated images -> chatbots taking up the information, driving the narrative.

Which I think is really relevant, and will gain in power when disinformation switches from falsehood-based to truth-based.

6/🧵

Interesting point as well about Russia's reuse of the tried-and-true "foreign experts" from "international publications". Kinda reminds me of the 90s post-USSR "news" segments with "foreign" experts speaking in broken English, dubbed over in Russian to lend credibility to internal propaganda.

It's an interesting play on the gray zone of what targets know, don't know, or are unable to validate.

7/🧵

Also, another really good point: even if AI-generated disinformation is not necessarily critical now, it is eroding general trust in real statements, creating a post-truth society.

Which reminds me of Hannah Arendt's "ideal subject of totalitarian rule" being "the people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist."

8/🧵

Also, according to her, "search"-enabled LLM chatbots are easy vectors for disinformation, both because of narrative repetition across many sites and because reputable news outlets are closing their articles to LLMs.

9/🧵

Next, Jan Zilinsky of TU Munich, talking about his research on the automation paradox with respect to AI.

I disagree with the evaluation that Google didn't make people stupid. Without Wikipedia, there would be no verifiable track record of information, nor the ability to recompose it. This is one of the reasons LLM harms and datacenter power consumption remained out of the public eye.

10/🧵

In general, I don't think that dismissing past concerns about technologies as laughable is constructive.

There is a lot of context that we are missing today that was generally understood at the time, with Luddites being a poster child for that.

Reducing those concerns to a general pro-technology or anti-technology stance is, once again, reductive.

11/🧵

Next, Hubert Brossard of Ipsos, talking about people's opinions on AI, based on surveys.

12/🧵

As a conclusion of the morning, a panel on AI, Disinformation, and Political Implications: Reality vs. Hype, with Lukas Mäder (NZZ), Jan Zilinsky (TU Munich), and Albertina Piterbag (UNESCO France).

I still don't think that the term "fake news" should be used, both for Lakoff's reasons and because the term is poorly defined.

13/🧵

While the conversation about moderators in countries that are not prioritized by social media platforms is interesting, I think it is an omission not to mention that moderators are often in cahoots with local authorities, acting less as moderators and more as news censors.

Although it looks like Albertina is bringing censorship back into the framework.

Interesting perspective from Albertina about using automated algorithmic tools as a complement to existing tools, such as in Taiwan.

14/🧵

<back from my own presentation>.

Next on the stage - another panel on policy and platform accountability, with
- Marjorie Buchser, UNESCO
- Ashutosh Chadha, Microsoft
- Prof. Touradj Ebrahimi, EPFL
- Lindsay Hundley, Meta
- Aurora Morales, Google

So far, Meta's representative puts forward their disruption of RU/CN/IR operations, as well as of US liberals impersonating conservatives, while Google's highlights their public reports and Safety team, including coverage of rare languages.

15/🧵

Microsoft's representative argues that we are not going fast enough on AI, given the great applications in medicine, drug discovery, weather forecasting, ... He suggests that regulation of misleading synthetic media is required, that provenance should be traceable, and that tools need to be created to correctly identify synthetic media.

16/🧵

UNESCO's representative suggests that regulation used to mean requesting no government intervention against technologies. However, the Cambridge Analytica scandal, disinformation, and GenAI led to a realization that there is a lack of governance. Regulation remains very hard, though, due to the extreme fragmentation of regulations, but also the balance between suppressing disinformation and maintaining free speech.

17/🧵

I also like the point, from the discussion, that the solution to disinformation is certainly sociotechnical, not just technical.

Prof. Ebrahimi, on the international standards collaboration bodies that he is part of: they are needed to address global issues that cannot be addressed locally, e.g. climate change or, most recently, Gen ML. Hence the World Standards Cooperation entity, to identify solutions and gaps, raise awareness, ...

18/🧵

Final panel on Building Resilience: the Role of Media, Academia, and Civil Society in Fighting Disinformation.

Panelists: @podehaye, @sisiwei, @kkomaitis, and Emma Hoes of the University of Zurich.

19/🧵

A really good opening point from @podehaye that we need to take into account the development of both the technology itself and the business models.

A question from the moderator, Katherine Loh of C4DT: what did @themarkup do to combat disinformation? @sisiwei mentions the study of culturally Vietnamese households. They traced the disinfo down to a single person who was translating Breitbart and Co. into Vietnamese, targeting people who wanted news about the US, but in Vietnamese. Countered by a 90+ year old grandma.

Happy to hear Emma Hoes' point that overcommunicating about disinformation leads people to also distrust reliable information.

Similarly, @podehaye might not be fully aware of it, but Cambridge Analytica killed Facebook’s engineering recruitment pipeline and indirectly led to HuggingFace’s success.

22/🧵

Also, the point from @kkomaitis about the centralization of AI is something that could potentially be countered by distributed, Byzantine-resilient, privacy-preserving, well-generalizing learning. Whether that is achievable is for now an open question, if not a holy grail.
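For the curious, a minimal sketch of one classic building block of Byzantine resilience (my own illustration, not from the panel; the worker counts and values are made up): aggregating worker gradients with a coordinate-wise median instead of a mean, so a few arbitrarily malicious updates cannot hijack the result.

```python
import numpy as np

# Illustrative setup: 7 honest workers send the true gradient plus
# noise; 2 Byzantine workers send arbitrary malicious updates.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0, 0.5])

honest = [true_grad + rng.normal(0, 0.1, size=3) for _ in range(7)]
byzantine = [np.full(3, 100.0) for _ in range(2)]
updates = np.stack(honest + byzantine)

mean_agg = updates.mean(axis=0)          # dragged far off by outliers
median_agg = np.median(updates, axis=0)  # stays close to true_grad

print("mean:  ", mean_agg)    # ~[23, 21, 23]: poisoned
print("median:", median_agg)  # ~[1.0, -2.0, 0.5]: resilient
```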

23/🧵

@sisiwei mentions the tracking research of @themarkup - notably the "Blacklight" tool to see how people are being tracked, e.g. on their OB-GYN's website. Wonder if it still works with Chromium Manifest V3.

Also wonder if the citizen science mentioned by Sisi and Paul-Olivier is creating a vulnerability to be exploited by the next Cambridge Analytica (thinking to myself back in 2015-2016).

24/🧵

Conversation about Mastodon following a question: @kkomaitis suggests that Mastodon is not sufficiently user-friendly & lacks network effects. I disagree - Twitter in 2008 was **WILD**, and not all connections on social media are equally valuable.

@podehaye makes a good point that Mastodon solved the problem of content federation, but not moderation (hugops to Mastodon moderators).

25/🧵