#philAI

Admissions are open for the second cohort of the Bachelor's in Philosophy and AI at Umeå University! (Only for EU/EEA/Swiss citizens)

A considerable part of the programme is in English, but several courses are in Swedish, so proficiency in a Scandinavian language is required.

More information about the programme here: umu.se/utbildning/program/kand

www.umu.se · Kandidatprogram i filosofi och artificiell intelligens (Bachelor's Programme in Philosophy and Artificial Intelligence)

Great analysis of Othello-GPT and the search for 'world models' in LLMs by M. Mitchell:

aiguide.substack.com/p/llms-an

>...I think it’s likely that humans don’t have the same capacity as today’s LLMs to learn enormous numbers of specific rules... it’s actually our human limitations—constraints on working memory, on processing speed, on available energy—as well as our continually changing and complex environments, that require us to form more abstract and generalizable internal models.

AI: A Guide for Thinking Humans · LLMs and World Models, Part 2 · By Melanie Mitchell

Nice short essay by @melaniemitchell on the latest OpenAI models (o1 and o3) and their announced performance on some parts of the ARC benchmark.

aiguide.substack.com/p/did-ope

Setting aside all the issues with the lack of transparency, and thus of proper testing, given the proprietary nature of these models, the key question, as she points out, is: "Is o3 ... actually solving these tasks using the kind of abstraction and reasoning the ARC benchmark was created to measure?"

AI: A Guide for Thinking Humans · Did OpenAI Just Solve Abstract Reasoning? · By Melanie Mitchell

Dennett (1989) anticipating ChatGPT:

"Consider an encyclopedia. It has derived intentionality. It contains information about thousands of things in the world, but only insofar as it is a device designed and intended for our use. Suppose we "automate" our encyclopedia, putting all its data into a computer and turning its index into the basis for an elaborate question-answering system. No longer do we have to look up material in the volumes; we simply type in questions and receive answers. It might seem to naive users as if they were communicating with another person, another entity endowed with original intentionality, but we would know better. A question-answering system is still just a tool, and whatever meaning or aboutness we vest in it is just a by-product of our practices in using the device to serve our own goals. It has no goals of its own, except for the artificial and derived goal of "understanding" and "answering" our questions correctly."

This Friday at 12.15 CEST I'll be hosting a talk by computer scientist Kary Främling, in which he will present his work on Explainable AI techniques for producing explanations useful to a variety of stakeholders, rather than only to AI experts.

The talk is hybrid, so even if you are not currently enjoying the very colourful (and very wet) autumn in Umeå, you can join nonetheless!

More information here: umu.se/en/events/fraiday-socia

All welcome!

www.umu.se · #frAIday: Social Explainable AI - What is it and how can we make it happen?

The great philosopher of cognitive science Nicholas Shea (also a great PhD supervisor) has just published his second book, in open access: 'Concepts at the Interface'.

Link to the book: academic.oup.com/book/58062

It is about concepts, their roles in thinking, and the frame problem, among much else. Highly recommended!

OUP Academic · Concepts at the Interface · Abstract: Research on concepts has concentrated on the way people apply concepts online, when presented with a stimulus. […]