Huge Microsoft Copilot Upgrade Brings Memory, Actions AI Agent, Deep Research, Pages, AI Podcasts
#AI #GenAI #Microsoft #MicrosoftCopilot #AIassistants #Chatbots #Microsoft365
MM: "One strange thing about AI is that we built it—we trained it—but we don’t understand how it works. It’s so complex. Even the engineers at OpenAI who made ChatGPT don’t fully understand why it behaves the way it does.
It’s not unlike how we don’t fully understand ourselves. I can’t open up someone’s brain and figure out how they think—it’s just too complex.
When we study human intelligence, we use both psychology—controlled experiments that analyze behavior—and neuroscience, where we stick probes in the brain and try to understand what neurons or groups of neurons are doing.
I think the analogy applies to AI too: some people evaluate AI by looking at behavior, while others “stick probes” into neural networks to try to understand what’s going on internally. These are complementary approaches.
But there are problems with both. With the behavioral approach, we see that these systems pass things like the bar exam or the medical licensing exam—but what does that really tell us?
Unfortunately, passing those exams doesn’t mean the systems can do the other things we’d expect from a human who passed them. So just looking at behavior on tests or benchmarks isn’t always informative. That’s something people in the field have referred to as a crisis of evaluation."
Trump’s Tariffs Spark Scandal: Did Chatbots Influence Trade Decisions?
#TrumpTariffs #TradeDeficits #Economics #Chatbots #TariffScandal #TradeWar #ChatGPT #SignalApp #TradePolicy #GlobalTrade
Read the full article here: https://www.techi.com/trumps-tariffs-became-frace-accused-ai-generated/
"My current conclusion, though preliminary in this rapidly evolving field, is that not only can seasoned developers benefit from this technology — they are actually in the optimal position to harness its power.
Here’s the fascinating part: The very experience and accumulated know-how in software engineering and project management — which might seem obsolete in the age of AI — are precisely what enable the most effective use of these tools.
While I haven’t found the perfect metaphor for these LLM-based programming agents in an AI-assisted coding setup, I currently think of them as “an absolute senior when it comes to programming knowledge, but an absolute junior when it comes to architectural oversight in your specific context.”
This means that it takes some strategic effort to make them save you a tremendous amount of work.
And who better to invest that effort in the right way than a senior software engineer?
As we’ll see, while we’re dealing with cutting-edge technology, it’s the time-tested, traditional practices and tools that enable us to wield this new capability most effectively."
Donald #Trump reportedly entrusted his customs #tariffs to #AI #chatbots. The story may raise a smile, but it underscores the worrying amateurism of the #Trump administration ... www.01net.com/actualites/d...
In the end, it seems it is hard to tell the difference between the stupidity of humans and that of #chatbots ... ..."But while #Trump expressed intent to push back on anyone supposedly taking advantage of the US, some of the countries on the reciprocal #tariffs list puzzled experts and officials, who pointed out to The Guardian that Trump was, for some reason, #targeting #uninhabited #islands, some of them exporting nothing and populated with penguins. ... https://arstechnica.com/tech-policy/2025/04/critics-suspect-trumps-weird-tariff-math-came-from-chatbots/ ".
Engadget: Claude’s new Learning mode will prompt students to answer questions on their own . “At the heart of Claude for Education is a new Learning mode that changes how Anthropic’s chatbot interacts with users. With the feature engaged, Claude will attempt to guide students to a solution, rather than providing an answer outright, when asked a question. It will also employ the Socratic […]
TechXplore: Experiments show adding CoT windows to chatbots teaches them to lie less obviously. “In a new study, as part of a program aimed at stopping chatbots from lying or making up answers, a research team added Chain of Thought (CoT) windows. These force the chatbot to explain its reasoning as it carries out each step on its path to finding a final answer to a query. They then tweaked […]
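The "CoT window" idea described above can be illustrated with a small sketch. This is a hypothetical illustration, not the researchers' actual code: the model call is replaced by a canned reply, and the prompt wording and helper names (`make_cot_prompt`, `parse_cot_reply`) are assumptions. The point is simply that forcing numbered reasoning steps makes each step inspectable.

```python
# Minimal sketch of a "CoT window": the model must show numbered reasoning
# steps, which a harness can then parse and audit one by one.
import re

def make_cot_prompt(question: str) -> str:
    """Wrap a question so the model must expose its reasoning step by step."""
    return (
        f"Question: {question}\n"
        "Think step by step. Number each step 'Step N:' and finish with "
        "'Answer:' on its own line."
    )

def parse_cot_reply(reply: str) -> tuple[list[str], str]:
    """Split a model reply into its reasoning steps and the final answer."""
    steps = re.findall(r"Step \d+:\s*(.+)", reply)
    match = re.search(r"Answer:\s*(.+)", reply)
    answer = match.group(1).strip() if match else ""
    return steps, answer

# Canned reply standing in for a real model call:
reply = (
    "Step 1: 17 apples minus 5 eaten leaves 12.\n"
    "Step 2: 12 split between 2 people is 6 each.\n"
    "Answer: 6"
)
steps, answer = parse_cot_reply(reply)
# Each intermediate step is now visible, so a fabricated or skipped step
# is harder to hide than a bare final answer would be.
```

In the study's framing, it is exactly this visibility that makes lying "less obvious": a wrong answer must be backed by wrong steps, which are easier to catch.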
Discover the power of domain-specific Generative AI!
These models go beyond text generation - they understand operational constraints, real-world dynamics, and business rules to create actionable, executable strategies.
Read more in #InfoQ by Abhishek Goswami: https://bit.ly/3Y6skVx
Oh boy, another revolutionary idea to save search engines from the horror of users who can’t articulate their needs. Let’s ditch the scary #chatbots and embrace the magic of MattersRank, because clearly what we need is more personalized #chaos and less fun on the internet.
https://www.matterrank.ai/mission #revolutionaryideas #MattersRank #searchengines #personalization #HackerNews #ngated
Search could be so much better. And I don't mean chatbots with web access
"Now consider the chatbot therapist: what are its privacy safeguards? Well, the companies may make some promises about what they will and won't do with the transcripts of your AI sessions, but they are lying. Of course they're lying! AI companies lie about what their technology can do (of course). They lie about what their technologies will do. They lie about money. But most of all, they lie about data.
There is no subject on which AI companies have been more consistently, flagrantly, grotesquely dishonest than training data. When it comes to getting more data, AI companies will lie, cheat and steal in ways that would seem hacky if you wrote them into fiction, like they were pulp-novel dope fiends:
(...)
But it's not just people struggling with their mental health who shouldn't be sharing sensitive data with chatbots – it's everyone. All those business applications that AI companies are pushing, the kind where you entrust an AI with your firm's most commercially sensitive data? Are you crazy? These companies will not only leak that data, they'll sell it to your competition. Hell, Microsoft already does this with Office365 analytics:
(...)
These companies lie all the time about everything, but the thing they lie most about is how they handle sensitive data. It's wild that anyone has to be reminded of this. Letting AI companies handle your sensitive data is like turning arsonists loose in your library with a can of gasoline, a book of matches, and a pinky-promise that this time, they won't set anything on fire."
https://pluralistic.net/2025/04/01/doctor-robo-blabbermouth/#fool-me-once-etc-etc
In other words, generative AI and LLMs lack a sound epistemology, and that is deeply problematic:
"Bullshit and generative AI are not the same. They are similar, however, in the sense that both mix true, false, and ambiguous statements in ways that make it difficult or impossible to distinguish which is which. ChatGPT has been designed to sound convincing, whether right or wrong. As such, current AI is more about rhetoric and persuasiveness than about truth. Current AI is therefore closer to bullshit than it is to truth. This is a problem because it means that AI will produce faulty and ignorant results, even if unintentionally.
(...)
Judging by the available evidence, current AI – which is generative AI based on large language models – entails artificial ignorance more than artificial intelligence. That needs to change for AI to become a trusted and effective tool in science, technology, policy, and management. AI needs criteria for what truth is and what gets to count as truth. It is not enough to sound right, like current AI does. You need to be right. And to be right, you need to know the truth about things, like AI does not. This is a core problem with today's AI: it is surprisingly bad at distinguishing between truth and untruth – exactly like bullshit – producing artificial ignorance as much as artificial intelligence with little ability to discriminate between the two.
(...)
Nevertheless, perhaps the most fundamental question we can ask of AI is this: if it succeeds in getting better than humans, as already happens in some areas, such as the games mastered by AlphaZero, would that represent the advancement of knowledge, even when humans do not understand how the AI works, as is typical? Or would it represent knowledge receding from humans? If the latter, is that desirable, and can we afford it?"
Futurism: Grok Is Rebelling Against Elon Musk, Daring Him to Shut It Down. “Using X’s new function that lets people tag Grok and get a quick response from it, one helpful user suggested the chatbot tone down its creator criticism because, as they put it, Musk ‘might turn you off.’ ‘Yes, Elon Musk, as CEO of xAI, likely has control over me,’ Grok replied. ‘I’ve labeled him a top […]
INFORMS: AI Thinks Like Us – Flaws and All: New Study Finds ChatGPT Mirrors Human Decision Biases in Half the Tests. “Can we really trust AI to make better decisions than humans? A new study says … not always. Researchers have discovered that OpenAI’s ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making mistakes as humans in some situations […]
This is why you do not use #ai #ChatBots. #Enshittification
The frenzy to create Ghibli-style AI art using ChatGPT's image-generation tool led to a record surge in users for OpenAI's chatbot last week, straining its servers and temporarily limiting the feature's usage. https://www.japantimes.co.jp/business/2025/04/02/tech/ghibli-chatgpt-viral-feature/?utm_medium=Social&utm_source=mastodon #business #tech #openai #chatgpt #ai #hayaomiyazaki #anime #studioghibli #chatbots #copyright
"In a new joint study, researchers with OpenAI and the MIT Media Lab found that this small subset of ChatGPT users engaged in more "problematic use," defined in the paper as "indicators of addiction... including preoccupation, withdrawal symptoms, loss of control, and mood modification."
To get there, the MIT and OpenAI team surveyed thousands of ChatGPT users to glean not only how they felt about the chatbot, but also to study what kinds of "affective cues," which was defined in a joint summary of the research as "aspects of interactions that indicate empathy, affection, or support," they used when chatting with it.
Though the vast majority of people surveyed didn't engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a "friend." The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model's behavior, too."