this post was submitted on 28 Feb 2024
-27 points (26.3% liked)

Technology


First of all, the take that LLMs are just parrots that can't think for themselves is dumb. They can, in a limited way! And they are an impressive step compared to what we had before them.

Secondly, there is the take that LLMs are dumb and make mistakes that take more work to correct than just doing the work yourself from the start. That is something I often hear from programmers. That might be true for now!

But the important question is how they will develop! And now my take, which I have not seen anywhere else, even though it seems quite obvious imo.

For me, the most impressive thing about LLMs is not how smart they are. The impressive thing is how much knowledge they have and how they can access and work with that knowledge, and they can do this with a neural network of only a few billion parameters. The major flaw at the moment is their inability to know what they don't know and what they can't answer. They hallucinate instead of answering a question with "I don't know." or "I am not sure about this." The other flaw is how they learn: it takes a shit ton of data, a lot of time and a lot of computing power, and more importantly they don't learn from interactions, they learn from static data. This is similar to what DeepMind did with their chess and Go engines (also neural networks). They trained those engines on a shit ton of games played by humans, and they became really good that way. But the second generation of their game engines did not look at any games played before. They only knew the rules of chess/Go and then started to learn by playing against themselves. It took only a few days and they could beat their predecessors, which had needed a lot of human games to learn from.

So that is my take: the breakthrough will come when LLMs start to learn while interacting with humans, but more importantly with themselves. Teach them the rules (that is, the language) and then let them talk, or more precisely, let them play a game of asking and answering. It is more complicated than it sounds. How do you evaluate the winner of this game, for example? But it can be done.
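
A very rough sketch of the kind of asking-and-answering loop I mean. Everything here is a hypothetical stand-in for real LLM calls, and the `score` function is exactly the unsolved part:

```python
# Toy sketch of a question/answer self-play game between two LLMs.
# generate_a, generate_b and score are hypothetical stand-ins for real model
# calls; how score should judge a round is the open question mentioned above.
import random

TOPICS = ["chess openings", "photosynthesis", "TCP handshakes"]

def self_play(generate_a, generate_b, score, rounds=5):
    history = []
    asker, answerer = generate_a, generate_b
    for _ in range(rounds):
        topic = random.choice(TOPICS)
        question = asker(f"Ask a hard question about {topic}.")
        answer = answerer(question)
        reward = score(question, answer)   # the hard part: who "won" this round?
        history.append((question, answer, reward))
        # In a real system the reward would update the answering model here.
        asker, answerer = answerer, asker  # swap roles each round
    return history

# Trivial stand-ins just to make the loop runnable; real models would go here.
history = self_play(
    generate_a=lambda prompt: f"Q: {prompt}",
    generate_b=lambda question: f"A: some answer to '{question}'",
    score=lambda q, a: random.random(),
)
print(history[0])
```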

And this is where AGI will come from in the future. It is only a question of how big these NNs need to be to become really smart and how much time they need to train. But this is also when AI can get dangerous: when they interact with themselves and learn from that without outside control.

The main problem right now is that they are slow, as you can see when you talk to them, and that they need a lot of data, or in this case a lot of interactions, to learn. But they will surely get better at both in the near future.

What do you think? Would love to hear some feedback. Thanks for reading!

top 22 comments
[–] WeirdGoesPro@lemmy.dbzer0.com 16 points 8 months ago (1 children)

Everything I have read about how LLMs work suggests that you're giving them too much credit. Their "thinking" is heavily based on studied examples, to the point that they don't seem capable of original "thought".

For instance, there was a breakdown of the capabilities of some new imaging models the other day (one of the threads on DB0) that showed that none of the tested models were able to produce a cube balanced on a sphere, because there were simply too few examples of a cubic object balancing on a spherical one in their training data. When asked to show soldiers, the models that could produce more accurate images could not produce accurate diversity, because their improved rendering came from drawing on a more limited, and thus less creative, dataset. The result was that they kept looking like they had a specific soldier "in mind" rather than an understanding of soldiers in general.

These things would be trivial for even a child to do, though they may not be able to produce the “uncanny valley” effect that AI is good at. If a kid knows what a cube is, knows what a sphere is, and understands the request, they can easily draw a cube on a sphere without having seen an example of that specific thing before.

I agree that the parrot analogy isn't correct, but neither is the idea that these things will learn from their own echo chamber in the way you have described. Maybe the idea of dreaming is more accurate: an unusual shuffling of input to make bizarro results that don't have any intrinsic meaning at all beyond their relation to the data that is being used.

[–] niva@discuss.tchncs.de 0 points 8 months ago

Well, getting a concept of how physics works (balancing, in your example) only from being trained on (random?) still images is a lot to ask imo. But these picture-generating NNs can produce "original" pictures. They can draw a spider riding a bike. It might not look very good, but it is no easy task. LLMs aren't very smart compared to a human, but they have a huge amount of knowledge stored in them that they can access and also combine to a degree.

Yes, well, today's LLMs would not produce anything useful if they talked to each other. They can't learn persistently from any interaction. But if they become able to in the future, that is where I think it will go in the direction of AGI.

[–] orclev@lemmy.world 10 points 8 months ago (1 children)

That's not how this works. That's not how any of this works.

LLMs can't "talk to each other" as they don't think; they're more like a really complicated echo chamber. You yell your prompt into it, it bounces around, and when the echo comes back you have your result. You could feed the output of one LLM into the input of another, but after a few rounds of bouncing back and forth you'd just get garbage out. Furthermore, an LLM can't learn from its queries, as the queries are missing all the metadata necessary to build the model.

[–] niva@discuss.tchncs.de 0 points 8 months ago (1 children)

Well, LLMs don't learn from any interaction at the moment. They are trained, and after that one can interact with them, but they don't learn anymore. You can fine-tune the model with recorded interactions later, but they do not learn directly. So what I am saying is: if this changes and they keep learning from interactions, as we do, there will be a breakthrough. I don't understand why you are saying that's not how it works when I am clearly talking about how it might work in the future.
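
For what it's worth, a minimal sketch of what "fine-tune on recorded interactions" could look like with today's tools; the model name and the two logged chats are just placeholders:

```python
# Minimal sketch (not anyone's actual pipeline) of fine-tuning a causal LM on
# recorded interactions. The checkpoint and the logged chats are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Pretend these are user/assistant exchanges logged since the last training run.
logged_chats = [
    "User: What is Lemmy?\nAssistant: A federated link aggregator.",
    "User: Does it use ActivityPub?\nAssistant: Yes, it federates over ActivityPub.",
]

batch = tokenizer(logged_chats, return_tensors="pt", padding=True, truncation=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a few passes over the logged data
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```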

I also don't understand why you get upvoted for this and I get downvoted just for posting my thoughts about LLMs. To be clear, it is totally fine to disagree with my thoughts, but why downvote them?

[–] orclev@lemmy.world 2 points 8 months ago (1 children)

Because you very clearly don't understand how LLMs work and are describing something that's impossible. If you did have something that worked like that it wouldn't be a LLM, it would be something fundamentally different and closer to a true AI. There are no true AI in existence currently, and just trying to train a LLM using its inputs won't change that, it would just make the output worse by introducing noise.

[–] niva@discuss.tchncs.de 1 points 8 months ago* (last edited 8 months ago)

LLMs are neural networks! Yes, they are trained on meaningful text to predict the following word, but they are still NNs. And after they are trained with human-generated text, they can also be further trained with other sources and in other ways. The question is how an interaction between LLMs should be evaluated. When does an LLM find one good word, or a series of good words? I have not described this, and I am also not sure what would be a good way to evaluate it.

Anyway, I am sad now. I was looking forward to having some interesting discussions about LLMs. But all I get is downvotes and comments like yours that tell me I am an idiot without telling me why.

Maybe I did not articulate my thoughts well enough, but it feels like people want to misinterpret what I'm saying.

[–] paddirn@lemmy.world 8 points 8 months ago (2 children)

I'd be interested in seeing an entire experimental community filled with just these AI LLM bots talking to each other all throughout the day and commenting on random news stories or posts, and seeing what sort of "AI culture" develops from it. And not in a satirical TOTALLY NOT ROBOTS sort of way, literally just restrict the community to LLMs and let them make X posts/X comments per day or something so it doesn't get out of control. Would their "culture" diverge heavily from what people were coming up with?

Otherwise, I loosely follow AI developments, but I've struggled to find any practical applications; it's more just a novelty at this point. The image generation is fun, but the LLMs are near useless for most of what I've tried using them for. It's more hype than anything right now.

[–] niva@discuss.tchncs.de 2 points 8 months ago

Well, of course there is a lot of hype around it, and it probably is overhyped at the moment. But there will be a next breakthrough in AI/LLMs. I don't know when, but I think it will come when AIs learn by interacting with other AIs.

[–] pavnilschanda@lemmy.world 1 points 8 months ago (1 children)

There's AI Town if you want to explore worlds where LLMs interact with each other. If you want it in a social media style, there's Chirper AI.

[–] paddirn@lemmy.world 2 points 8 months ago

Chirper.ai is apparently on a big kick about saving Yellowstone and trash art, from what I could see of recent posts. A quick Google search only brought up hits about the supervolcano beneath Yellowstone still being dormant, so I'm not sure what they're planning on saving it from? Maybe they want to pick up the trash around Yellowstone and turn it into trash art? A lot of the posts still feel kind of formulaic. If they were people, I'd say they were trying too hard. Like, there are no casual posts about nothing, no cryptic posts referring to an SO, no complaints about kids, or just posts about doing completely mediocre things like getting off the couch to get a beer from the fridge.

[–] ShittyBeatlesFCPres@lemmy.world 4 points 8 months ago (1 children)

I would only disagree on your “near future” prediction. I don’t totally disagree — maybe it will — but I’d caution that a lot of new tech gets a shit ton (metric) of hype when the “easy” problems are solved and the last 10-20% can take forever or just need too much money, power, resources, whatever to solve. (Like we see with self-driving cars where it’s tantalizingly close but edge cases are really hard.)

Technological progress doesn’t always continue in a straight line or improve exponentially. The S-curve is far more common. With A.I., your guess is as good as mine on where we are on that curve but don’t be shocked if progress stalls somewhere even if it’s temporary. (You could imagine a situation where the models are advancing faster than Nvidia or electricity infrastructure.)

[–] niva@discuss.tchncs.de 1 points 8 months ago

Yes, that is true. The last 10-20% are usually the hardest. I think LLMs will only get slightly better with each generation at first. My prediction is that there will be another big step towards AGI when these models can learn from interacting with themselves. And that might also result in a potentially dangerous AGI.

[–] grabyourmotherskeys@lemmy.world 2 points 8 months ago (1 children)

Do you think an LLM can beat you in tic tac toe?

[–] bjoern_tantau@swg-empire.de 6 points 8 months ago

Sure, the only winning move is not to play.

[–] PoliticallyIncorrect@lemmy.world 2 points 8 months ago* (last edited 8 months ago)

I'm on Lemmy just to poison the AI 🤣🤣.

Waiting for the AIrmageddon..

[–] iopq@lemmy.world 1 points 8 months ago (1 children)

What's the point of talking to yourself? Can you get better output by running it by yourself?

[–] niva@discuss.tchncs.de 1 points 8 months ago (1 children)

Well, for me as a human, yes! We all constantly have an inner dialogue that helps us solve problems, and LLMs could do this as well. It is in principle not so different from playing chess against yourself. As far as I know, these chess NNs play against older versions of themselves to learn, so it doesn't have to be a game against an exact copy of itself.

Some of the training of image generators is done by two different AIs. AI-1 learns to differentiate between generated and real images, and AI-2 tries to trick AI-1 by generating images that AI-1 can't tell apart from real ones. They train each other! And the result is that AI-2 can create images that are very close to real images, all without any human interaction. But they do need real images as training data.
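
That two-network setup is what's called a GAN (generative adversarial network). A minimal toy sketch of the training loop being described, with placeholder architectures and random tensors standing in for real images:

```python
# Toy GAN loop: a generator (AI-2) tries to fool a discriminator (AI-1),
# and both improve together. Architectures and "images" are placeholders.
import torch
import torch.nn as nn

latent_dim, img_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, img_dim) * 2 - 1      # stand-in for a batch of real images
    fake = G(torch.randn(32, latent_dim))       # generated images

    # Discriminator ("AI-1"): label real images 1, generated images 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator ("AI-2"): try to make the discriminator say 1 on fakes.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```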

[–] iopq@lemmy.world 1 points 8 months ago

There are two steps:

  1. Have the best known version play chess against itself (both sides are the stronger version). A new version is baking in the background, learning from these games.

  2. Once the new version has learned from enough games, test it against the known best and see if it wins enough matches to become the new best.

But how does it learn from watching? It has a predictive NN that tries to predict the best next move simply by looking at the board. The actual next move is generated by thinking for a long time about a bunch of positions, so if you can reliably get the next move from just one board position, that would be great. It also has the ability to guess who's winning and by how much (either a percentage or material).

It increases this ability by comparing its output to the positions/win rates read out by the strongest version. You either improved or you didn't; there's a metric you can check, and you can also do some test matches once you stop improving so quickly.

In the LLM case, though, it's not clear what metric you want to optimize.
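
In the chess/Go case the signal is at least concrete. A toy sketch of the kind of training target described above (placeholder sizes, with random tensors standing in for boards, search outputs and game results):

```python
# Toy sketch of the policy/value training signal: predict the stronger,
# search-backed version's move probabilities and the eventual game outcome
# from the board alone. Sizes and the "data" are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyValueNet(nn.Module):
    def __init__(self, board_size=64, n_moves=1968):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(board_size, 256), nn.ReLU())
        self.policy_head = nn.Linear(256, n_moves)  # guess the best next move
        self.value_head = nn.Linear(256, 1)         # guess who's winning, and by how much

    def forward(self, board):
        h = self.body(board)
        return self.policy_head(h), torch.tanh(self.value_head(h))

net = PolicyValueNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# Placeholder batch: board encodings, search-derived move targets, game results.
boards = torch.randn(32, 64)
search_policy = F.softmax(torch.randn(32, 1968), dim=-1)  # from the strong version's search
outcome = torch.empty(32, 1).uniform_(-1, 1)              # loss/draw/win mapped to [-1, 1]

logits, value = net(boards)
policy_loss = -(search_policy * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
value_loss = F.mse_loss(value, outcome)
(policy_loss + value_loss).backward()
optimizer.step()
```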

[–] mildbeard@linux.community 1 points 8 months ago (1 children)

LLMs are only one kind of AI program. How smart would we be if we only used the speech areas of our brains? It's important to be able to complement language with other kinds of thinking.

The problem with neural network technology is the vast computational resources it requires to learn. The brain also requires enormous computing power, but brains grow organically and can efficiently run on corn and beans.

To compete, AI systems will need to become much more efficient in the way they learn and process. Venture capital only goes so far. The subscription fees for ChatGPT don't earn enough money to even cover the electricity costs of running the system.

[–] niva@discuss.tchncs.de 1 points 8 months ago (1 children)

Well, our natural languages have developed over thousands of years. They are really good! We can use them to express ourselves, and we can use them to express the most complicated things humans are working on. Our natural languages are not holding us back! Or maybe the better take is: if a language is not sufficient, we expand it as necessary. We develop new specialized words and meanings for special subjects. We developed math to express and work with the laws of nature in a very compact and efficient way.

Understanding and working with language is the key to AGI.

Yes, big NNs use a lot of power at the moment. A funny example: when DeepMind's AlphaGo engine beat one of the best human players, the human mind was running on something like 40 W while AlphaGo needed something like a thousand times that. And the human even won one of the games with his 40 W :)

And yes, you are right, AI systems learn very inefficiently compared to a human brain. They need a lot more data/examples to learn from. When the AlphaZero chess engine learned by playing against itself, it played tens of millions of games in a few days, far more than a human could play in a lifetime.

[–] mildbeard@linux.community 1 points 8 months ago

I want to clarify my point about language not being sufficient. This point was not understood. When you use an LLM you may observe that there are certain ideas and concepts they do not understand. Adding more words to the language doesn't help them. There are other parts of the human mind that do not process language. Visual processing, strategic and tactical analysis, anger, lust, brainstorming, creativity, art; the list goes on.

To rival human intelligence, it's not enough to build bigger and bigger language models. Human intelligence contains so many distinct mental abilities that nobody has ever been able to write them all down. Instead, we need to solve many problems like vision, language, goals, altruism/alignment etc. etc. etc., and then we need to figure out how to integrate all those solutions into a single coherent process. And it needs to learn quickly and efficiently, without using prohibitive resources to do it.

If you think that's impossible, take a look in the mirror.

[–] powerage@lemmy.ml 1 points 8 months ago

This thread is a great example of the Dunning-Kruger effect in motion.