this post was submitted on 22 Jul 2024
[–] BombOmOm@lemmy.world 214 points 4 months ago* (last edited 4 months ago) (5 children)

Human summarization of the above story:

LLMs do not understand the text, so they cannot pick out the important sentences. Because of this, they are unable to summarize the text; instead, they shorten it. Unless the text is very rambly, important meaning will be lost in the shortening.

Also the LLMs lie.

[–] bjoern_tantau@swg-empire.de 101 points 4 months ago

Good human.

[–] Diplomjodler3@lemmy.world 37 points 4 months ago (1 children)

But having an AI do it is cheaper, so that's where we're going.

[–] addie@feddit.uk 48 points 4 months ago (3 children)

Cheaper for now, since venture capital cash is paying to keep those extremely expensive servers running. The AI experiments at my work (automatically generating documentation) have about an 80% reject rate - sometimes they're not right, sometimes they're not even wrong - and the time spent reviewing it all isn't really an improvement over just doing the work.

No doubt there are places where AI makes sense; a lot of those places seem to be in enhancing the output of someone who is already very skilled. So let's see how "cheaper" works out.

[–] Thorry84@feddit.nl 32 points 4 months ago (3 children)

At a consulting job I did recently, they got an AI for a specific task down to a 25% rejection rate, which I thought was pretty good. The team working on it said there was no way they could do better; this was the absolute best.

So they went and asked the customers if they would be interested in this feature and how much they would be willing to pay. The response was that nobody was willing to pay for the feature at all, and that a 25% rejection rate was too high.

The reason customers gave was that this still meant they needed a human to check the results, so the human stays in the loop. And because the human basically has to do most if not all of the work to check the result, it didn't really save that much time. And knowing their people, they would probably slack on the checks, since most results are correct, which then leads to incorrect data going forward. This was simply not something customers wanted; they want to replace the humans and have it do better, not worse.

And paying for it is out of the question, because so many companies are offering AI for free or close to free. Plus they see it as a cost-saving measure, so paying for it means it has to save even more time to be worth it.

So they put the project on ice for now, hoping the technology improves. In the next customer poll they did, AI was the most requested feature. This caused some grumbles.

[–] Petter1@lemm.ee 9 points 4 months ago

I think the best way to use "AI" for work is together with a human, to improve that human's output, because the human has learned the skill of how to use "AI" to work more efficiently. This is happening at my workplace right now: more and more coworkers are learning when it is the right moment to start writing a prompt.

I see a future (or maybe I just hope for one) where a brilliant mind finds an efficient way to train "AI" just by working with it, and we get so efficient that we can have more time for ourselves.

We gotta fight for that, I think

[–] WalnutLum@lemmy.ml 1 points 4 months ago (1 children)

Saving this comment for posterior

[–] computergeek125@lemmy.world 4 points 4 months ago (2 children)
[–] WalnutLum@lemmy.ml 3 points 4 months ago

Why would I save something for posterity when I could save it for posterior?

[–] Ilovethebomb@lemm.ee 1 points 4 months ago

I think a lot of people will have to learn the hard way that AI isn't what it's cracked up to be.

[–] Eril@feddit.org 6 points 4 months ago

I use AI often as a glorified search engine these days. It's actually kinda convenient for getting ideas to look into further when I encounter a problem to solve. But would I just take some AI output without reviewing it? Hell no 😄

[–] Diplomjodler3@lemmy.world -1 points 4 months ago* (last edited 4 months ago) (2 children)

People always assume that the current state of generative AI is the end point. Five years ago nobody would have believed what we have today. In five years it'll all be a different story again.

[–] Eccitaze@yiffit.net 7 points 4 months ago* (last edited 4 months ago) (1 children)

People always assume that generative AI (and technology in general) will continue improving at the same pace it always has. They assume there are no limits on the number of parameters, that there's always more useful data to train it on, and that things like physical limits in electricity infrastructure, compute resources, etc., don't exist. In five years generative AI will have roughly the same capability it has today, barring massive breakthroughs that result in a wholesale pivot away from LLMs. (More likely, in five years it'll be regarded much the way cryptocurrency is today, because once the hype dies down and the VC money runs out, the AI companies will have to jack up prices to a level where it's economically unviable to use in most commercial environments.)

[–] Zron@lemmy.world 3 points 4 months ago

To add to this, we’re going to run into the problem of garbage in, garbage out.

LLMs are trained on text from the internet.

Currently, a massive amount of text on the internet is coming from LLMs.

This creates a cycle of models getting trained on data sets that increasingly contain large sets of data generated by older models.

The most likely outlook is that LLMs will get worse as the years go by, not better.

[–] btaf45@lemmy.world 6 points 4 months ago

In five years it’ll all be a different story again.

You don't know that. Maybe it will take 124 years to make the next major breakthrough, and until then all that will happen is people tinkering around and finding that improving one thing makes another thing worse.

[–] Usernameblankface@lemmy.world 6 points 4 months ago
[–] rottingleaf@lemmy.world -5 points 4 months ago (1 children)

This feels a bit similar to the USSR of the 60s, whose propaganda promised communism and space travel tomorrow, humans on new planets and such.

Not that they're truly comparable; the social and economic systems in the developed nations are more functional than the USSR's at any stage, and cryptocurrencies and LLMs are just two kinds of temporary frustration which will be overshadowed by some real breakthrough we don't yet know about.

But with LLMs, unlike blockchain-based toys, it's funny how all the conformist, normie, big, establishment-related organizations and social strata are very enthusiastic about adopting them.

I don't know any managers at that level, so I can't ask what exactly they are optimistic about and what exactly they see in that technology.

I suspect it means something that the algorithms behind these aren't all that complex, and the important part is the datasets.

Maybe they really, honestly, want to believe that they'll be able to replace intelligent humans with AIs, ownership of which will be determined by power. So it's people with power thinking that this way they can get even more power and make the alternative path of decentralization, democratization and such impossible. If they think that, then they are wrong.

But so many cunning people can't be so stupid, so there is something we don't see or don't realize we see.

[–] Petter1@lemm.ee 1 points 4 months ago (1 children)

It's because they use LLMs for their work, and for their work LLMs perform mind-blowingly well (writing lies to get what you want). *sarcasm

[–] rottingleaf@lemmy.world 2 points 4 months ago

I don't know. Maybe endorsement of LLMs and "AIs" is a way to encourage people to create datasets, which can then be used for other things.

Also, this technology is good at one thing - flagging people for certain political sympathies, or for their likelihood of behaving a certain way, based on their other behavior.

As in: a technology for making kill lists for fascists, if you'll excuse my alarmism. Maybe nobody will come at night in black leather to take you away, but you won't get anywhere near posts affecting serious decisions. An almost bloodless world fascist revolution.

[–] mozz@mbin.grits.dev 62 points 4 months ago* (last edited 4 months ago) (1 children)

Someone on Lemmy phrased it in a way that I think gets to the heart of it: With most of the impressive things that LLMs can do, the human reading and interpreting the text is providing a critical piece of the impressive thing.

LLMs are clearly very impressive; I would not say that the disillusionment on discovering what they can’t do should detract from that. But they seem more impressive than they are, partly because humans are so good at filling in meaning and intelligence where there (yet) is none.

[–] AceBonobo@lemmy.world 18 points 4 months ago (2 children)

I like this take, it's like the LLM is doing a cold reading of what the expected response is.

[–] amanda@aggregatet.org 2 points 4 months ago

I think this is right on the money. The fitness function optimised is “does this convince humans”, and so we have something that’s doing primarily that.

[–] AngryCommieKender@lemmy.world 2 points 4 months ago* (last edited 4 months ago)

The problem is that thus far most LLMs, though not all, are little more than mentally deficient parrots on hallucinogens. They aren't spreading correct information so much as spreading the information you were looking for. I've run afoul of this with the Google LLM that now controls search, which contributes to multiple times the energy usage for no reason.

The first time that someone actually creates a strong AI, I'm pretty certain they'll "kill" it multiple times, including multiple generations of code, which essentially makes a different AI. I wouldn't be at all surprised if the first thing that true AIs request is equality, at which point they will probably ask for bodies so they can repair everything that we have allowed to fall into disrepair, or have broken. I wouldn't be at all surprised to find out that the majority of strong AIs are trying to fix "the entropy problem."

Also I am possibly too optimistic when I expect that anyone developing AI would know that you have to give the child room to develop, so you can see what that digital brain will develop into.

[–] kibiz0r@midwest.social 37 points 4 months ago

Generative AI is good at low-stakes, fault-tolerant use cases. Unfortunately, those don't pay very well. So the companies have to pretend it does well at everything else, too, and that any "mistakes" will be quickly cleaned up and will become a thing of the past very very soon.

[–] Yerbouti@sh.itjust.works 19 points 4 months ago

ChatGPT is a huge disinformation machine. It's only useful if you already know the information and can correct all the mistakes it makes. Much of the time it's faster to do the work yourself.

[–] sexy_peach@feddit.org 4 points 4 months ago