"If AI were revolutionizing the economy, we would see it in the data. We're not seeing it. I could talk about the fact that AI companies have yet to find a killer app and that perhaps the biggest application of AI could be, like, scams, misinformation and threatening democracy. I could talk about the ungodly amount of electricity it takes to power AI and how it's raising serious concerns about its contribution to climate change."
ROSALSKY: Given that hype, should we expect AI to usher in revolutionary changes for the economy in the next decade?
ACEMOGLU: No. No, definitely not. I mean, unless you count a lot of companies overinvesting in generative AI and then regretting it as a revolutionary change.
ROSALSKY: Many AI researchers are saying we cannot end the problem of hallucinations any time soon, if ever, with these models. That's because they don't know what's true or false.
Hallucinations are the whole way they work. They make up plausible text; if it happens to be accurate, that's accidental.
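A minimal sketch of what that means in practice. The probability table and the `continue_text` helper below are invented for illustration, standing in for a real model's output distribution; the point is that the generation step only ever asks which word is likely, never whether it is true.

```python
import random

# Invented next-word probabilities, standing in for a real model's output
# distribution (a real LLM computes these from billions of learned weights).
# Note that the wrong answer is the most "plausible" one.
NEXT_WORD_PROBS = {
    "the capital of australia is": {
        "sydney": 0.55,     # plausible, wrong
        "canberra": 0.40,   # correct
        "melbourne": 0.05,  # plausible, wrong
    },
}

def continue_text(prompt: str) -> str:
    """Sample a continuation by probability alone; no step in this
    function consults any notion of truth."""
    dist = NEXT_WORD_PROBS[prompt.lower()]
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(continue_text("The capital of Australia is"))
```

When the sampled word happens to be the correct one, nothing in the loop knew that; the accuracy is, as the post says, accidental.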
@resuna @gerrymcgovern It's the failings of AI that are the most interesting aspect. Especially when we look around and see humans doing much the same things. I'm sure that 90% of the columns in The Guardian could be written by AI and nobody would notice. Maybe they already are. They're just a churning mass of memes and tropes.
I kind of hate the comparison between the routine failures of this kind of software and humans' occasional mistakes, because they really aren't all that similar.
@resuna @gerrymcgovern Occasional human mistakes? Think of the millions of MAGA followers. The point is that we don't really know what's going on in that AI simulation of an animal brain's neural network. Maybe we can never know - we'll let philosophers work on that one. But we can observe. The first example I remember reading about was an AI having a tantrum when it was asked the same question 15 times.
@stevehayes @resuna @gerrymcgovern The AI wasn't having a tantrum; it was surely just reproducing plausible answers to a repeated question based on its training data. It doesn't know what a tantrum is.
That's the difference: however much humans make mistakes, the AI isn't making mistakes - it doesn't have any concept of a mistake, let alone knowledge or thinking.
@krnlg @resuna @gerrymcgovern My point is that we don't really know what's going on in there. It's not like a traditional program where we can point to lines of code. At the same time, there's nothing we can point to in an animal's brain and say that's where the magic happens, some special thing that AI doesn't have and can never have.
@stevehayes @resuna @gerrymcgovern I don't think that's quite right - the principle of operation of an LLM is not a mystery; it's just hard to take a specific output and work back to exactly why the model gave it. I think? I mean LLMs have been an active research area for some time, people made these things. They don't exhibit mysterious emergent intelligent properties afaik, they just seem like they do at a glance.
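That distinction can be made concrete. Every step of an LLM's forward pass is ordinary, inspectable arithmetic; what is hard is tracing one sampled word back through billions of such numbers. A toy illustration, with invented logit values and a made-up three-word vocabulary:

```python
import math

# Invented "logits" (raw scores) for three candidate next words.
logits = {"sydney": 2.8, "canberra": 2.1, "melbourne": 0.3}

# Softmax: the same transparent arithmetic a real model applies at every
# step to turn scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {w: math.exp(v) / total for w, v in logits.items()}

for w, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{w}: {p:.2f}")

# Every number above is inspectable. What is hard is explaining why a real
# model assigned those logits in the first place, since that answer is
# spread across billions of weights; hence the attribution problem.
```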
@krnlg @stevehayes @gerrymcgovern
Exactly, there's no magic, and in some cases we can even track down the exact training text or images that led to a particular generated result.
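At toy scale, that kind of tracing is straightforward to demonstrate. A sketch under stated assumptions (a tiny invented corpus, with a bigram table standing in for a real model): record where each word transition was learned, and every generated pair can be traced back to the training sentence it came from.

```python
from collections import defaultdict
import random

# Tiny invented "training corpus"; real attribution research works at
# vastly larger scale, but the principle is the same.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]

# Build a bigram table, remembering which sentence taught each transition.
nxt = defaultdict(list)   # word -> possible next words
source = {}               # (word, next_word) -> training sentence
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        nxt[a].append(b)
        source[(a, b)] = sentence

# Generate a continuation, tracing every step back to its origin.
word, out = "the", ["the"]
for _ in range(4):
    if word not in nxt:
        break
    new = random.choice(nxt[word])
    print(f"{word!r} -> {new!r}  learned from: {source[(word, new)]!r}")
    out.append(new)
    word = new

print("generated:", " ".join(out))
```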
@resuna @krnlg @gerrymcgovern Yes indeed. Whole university courses are based on identifying which painters or whatever influenced which other painters or whatever.
@stevehayes @krnlg @gerrymcgovern
That's not the same thing, and you know it's not the same thing, I think you are just carrying on a meme that you think is funny a bit too far.
@resuna @krnlg @gerrymcgovern I'm sure there were people who looked at those shuddering, juddering, noisy, smelly things and said they'd never go as fast as a good horse. Do you believe there's some holy spirit that can infuse a lump of protoplasm but not a lump of silicon? I'm not saying more and more powerful AI is something we want or should have, but unless we decide to do something to stop it, or unless it turns out not to be what it rather looks like being, it's on the way.
@stevehayes @resuna Honestly I'm not sure there has even been much of a recent jump in the development of any of this stuff, it has simply become a hype bubble fueled by copyright infringement. There's much more to be talked about in terms of the near term impacts of trying to centralise and gatekeep what used to be the open web than there is about the progress of AI in a meaningful sense, I think!
You don't need a thinking AI to steal everyone's work and sell it back to them.
@krnlg @resuna What I think we're seeing is that small neural networks were simulated and, as some people suspected, proved capable of doing some interesting things. That unlocked funds to build bigger networks, and meanwhile technology advanced to make them more practical. The process keeps snowballing even if, just as porn drove the adoption of video recorders, it isn't necessarily being put to good use. But it's fascinating to watch what emerges.
@stevehayes @krnlg @gerrymcgovern
Large language models are a dead-end digression that is sucking all the oxygen out of actual AI research. There is huge potential in this area and it has been hijacked by con artists.
If you want a stupid historical analogy, it's like someone was trying to build a mechanical horse and everyone was excited by the idea of regular carriages being pulled by steam horses.
But they were just painted canvas.
@resuna @krnlg @gerrymcgovern Of course neural networks are being applied to things other than LLMs too. For example image recognition. Not always successfully, for example when Teslas drive straight into fire engines with all their lights flashing. That at least is something that humans generally avoid doing.
@stevehayes @krnlg @gerrymcgovern
Don't mix up neural networks and large language models. Neural networks have a number of useful applications, image recognition being one of them.
Large language models are a tool based on neural network design that produces a parody of the source data as a plausible continuation of the prompt. This is useful for passing the Turing test and generating spam. It is not, however, a reasoning system or a viable path towards AI.
@resuna @krnlg @gerrymcgovern This of course is rather the point. Neural networks can be set to work on various tasks without having to be built from scratch for each one. Just as animals can evolve to handle different challenges - one to chase and eat other animals, another to avoid being eaten. Current neural networks may be some orders of magnitude smaller than the human brain but we're getting more than glimpses of what's possible. Whether it's something we want is a different question.
@stevehayes @krnlg @gerrymcgovern
Are you deliberately missing the point? This whole discussion is specifically about LLMs and not neural net software in general, and you keep trying to muddy the waters with unrelated applications like image recognition.
Large language models are not in any sense similar to humans. Any similarities you are trying to invoke are just misleading memes.