this post was submitted on 09 Sep 2025
506 points (98.5% liked)

Technology


A new survey conducted by the U.S. Census Bureau and reported on by Apollo seems to show that large companies may be tapping the brakes on AI. Large companies (defined as having more than 250 employees) have reduced their AI usage, according to the data. The slowdown started in June, when usage was at roughly 13.5%, slipping to about 12% by the end of August. Most of the other lines, representing companies with fewer employees, are also declining, though some are still increasing.

top 50 comments
[–] Lucidlethargy@sh.itjust.works 24 points 10 hours ago (1 children)

Because they are FUCKING TRASH.

[–] Dremor@lemmy.world 1 points 5 hours ago

Not for all use cases, but for most it is.

[–] rumba@lemmy.zip 5 points 9 hours ago (1 children)

It'll right itself when the CEOs stop investing in it and forcing it on their own companies.

When they're not getting their returns, they'll sell their stocks and stop paying for it.

It'll eventually go back from slop generation to correction and light editing tools when venture capital stops paying for the hardware to run tokens and they have to pay to replace the cards themselves.

[–] Tollana1234567@lemmy.today 2 points 6 hours ago

and they will drop it altogether.

[–] ProgrammingSocks@pawb.social 5 points 9 hours ago

Because they suck.

[–] Treetrimmer@sh.itjust.works 3 points 8 hours ago (1 children)

That's unfortunate because I want an excuse not to be a corporate slave

[–] squaresinger@lemmy.world 2 points 53 minutes ago

Tbh, better a corporate slave than a startup slave.

[–] salacious_coaster@infosec.pub 3 points 11 hours ago

Why is the Census Bureau tracking LLM adoption?

[–] jubilationtcornpone@sh.itjust.works 51 points 23 hours ago (3 children)

Personal Anecdote

Last week I used the AI coding assistant within JetBrains DataGrip to build a fairly complex PostgreSQL function.

It put together a very well organized, easily readable function, complete with explanatory comments, that failed to execute because it was absolutely littered with errors.

I don't think it saved me any time but it did help remove my brain block by reorganizing my logic and forcing me to think through it from a different perspective. Then again, I could have accomplished the same thing by knocking off work for the day and going to the driving range.

[–] August27th@lemmy.ca 44 points 23 hours ago

Then again, I could have accomplished the same thing by knocking off work for the day and going to the driving range.

Hey, look at the bright side, as long as you were chained to your desk instead, that's all that matters.

[–] Cethin@lemmy.zip 17 points 22 hours ago (1 children)

At one point I tried to use a local model to generate something for me. It was full of errors, but after some searching online to look for a library or existing examples I found a github repo that was almost an exact copy of what it generated. The comments were the same, and the code was mostly the same, except this version wasn't fucked up.

It turns out text prediction isn't that great at understanding the logic of code. It's only good at copying existing code, but it doesn't understand why it works, so the predictive model fucks things up when it takes a less likely result. Maybe if you turn the temperature down so it only gives the highest-probability prediction it wouldn't be horrible, but you might as well just search online and copy the code it's going to generate anyway.
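For anyone wondering what "turning the temperature down" actually means mechanically: temperature divides the model's raw scores before the softmax, and as it approaches zero, sampling collapses into greedy argmax. A minimal sketch in Python with made-up logits (the function and values are illustrative, not from any real model):

```python
import math
import random

def sample_next(logits, temperature=1.0):
    """Pick the next token index from raw logits.

    As temperature -> 0, this approaches greedy (argmax) decoding:
    always the single most likely token, never a "less likely result".
    """
    if temperature <= 1e-6:
        # Greedy decoding: deterministic, always the top-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (shifted by max for stability).
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one index according to those probabilities.
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
print(sample_next(logits, temperature=0.0))  # greedy: always index 0
```

At higher temperatures the flatter distribution means the runner-up tokens get picked sometimes, which is exactly where the "fucked up" variations on memorized code come from.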

[–] 7toed@midwest.social 2 points 40 minutes ago

But.. how else do we sell our tool as a super intelligent sentient do-it-all?

[–] UncleMagpie@lemmy.world 12 points 22 hours ago (6 children)

The bigger problem is that your skills are weakened a bit every time you use an assistant to write code.

[–] KneeTitts@lemmy.world 5 points 20 hours ago

The bigger problem is that your skills are weakened a bit every time you use an assistant to write code

Not when you factor in that you are now doing code review for it and fixing all its mistakes.

[–] eronth@lemmy.dbzer0.com 16 points 21 hours ago (3 children)

Kind of a weird title. Of course adoption would slow? The people who want it have adopted it, the people who don't haven't.

[–] UnderpantsWeevil@lemmy.world 6 points 17 hours ago

Marx tapping the big sign marked "tendency of the rate of profit to fall", but then looking at the already unprofitable AI spin-offs and just throwing his hands up in disgust.

I think there's an argument to be made that the AI hype got a bunch of early adopters, but failed to entice more traditional mainstream clients. But the idea that we just ran out of new AI users in... barely two years? No. Nobody is really paying for this shit in a meaningful way. Not at the Enterprise Application scale of subscriptions. That's why Microsoft is consistently losing money (on the scale of billions) on its OpenAI investment.

If people were adopting AI like they'd adopted the latest Windows OS, these firms would be seeing a steady growth in the pool of users that would signal profitability soon (if not already). But the estimates they're throwing out - one billion AI adoptions in barely a year - are entirely predicated on how many people just kinda popped in, looked at the web interface, and lost interest.

[–] KneeTitts@lemmy.world 10 points 20 hours ago

We were initially excited by AI at my company, but after we used it a bit we didn't find any really meaningful use cases for it in our business model. And in most cases we spent a lot of time correcting its many errors, which would actually slow down our processes...

[–] _haha_oh_wow_@sh.itjust.works 7 points 21 hours ago* (last edited 21 hours ago)

It would also slow if companies were told insane lies about the capability of "AI" ("it's like having a team of PhD-level experts at your disposal!") and then realized that many of these promises were total bullshit.

[–] Pat_Riot@lemmy.today 20 points 23 hours ago (1 children)

They dressed up a parrot and called it the golden goose and now they're chasing a wild goose.

[–] MycelialMass@lemmy.world 2 points 16 hours ago (1 children)
[–] Tollana1234567@lemmy.today 1 points 6 hours ago

An undomesticated Psittaciformes.

[–] sj_zero@lotide.fbxl.net 98 points 1 day ago (5 children)

IMO, AI is a really good demo for a lot of people, but once you start using it, the gains you can get from it end up being somewhat minimal without doing some serious work.

Reminds me of 10 other technologies where, if you didn't get in, the world was supposedly going to end, but they ended up more niche than you'd expect.

[–] paequ2@lemmy.today 2 points 7 hours ago (1 children)

AI is a really good demo for a lot of people, but once you start using it, the gains you can get from it end up being somewhat minimal without doing some serious work.

I'm so sick of "AI demos" at work. Every demo goes like this.

  1. Generate text with an LLM.
  2. Don't fact check it.
  3. Don't verify it works.
  4. Oooh and aahhh at random numbers and charts.
  5. Higher ups all clap and say we could be 10x more productive if more people would just use AI more.

Meanwhile they ignore that zero AI projects have actually stuck around or get used in a meaningful way.

[–] setsubyou@lemmy.world 1 points 7 hours ago

As someone who sometimes makes demos of our own AI products at work for internal use, you have no idea how much time I spend on finding demo cases where LLM output isn’t immediately recognizable as bad or wrong…

To be fair it’s pretty much only the LLM features that are like this. We have some more traditional AI features that work pretty well. I think they just tagged on LLM because that’s what’s popular right now.

[–] MagicShel@lemmy.zip 43 points 1 day ago (2 children)

As someone who is excited about AI and thinks it's pretty neat, I agree we've needed a level-set around the expectations. Vibe coding isn't a thing. Replacing skilled humans isn't a thing. It's a niche technology that never should've been sold as making everything you do with it better.

We've got far too many companies who think adoption of AI is a key differentiator. It's not. The key differentiator is almost always the people, though that's not as sexy as cutting edge technology.

[–] floofloof@lemmy.ca 7 points 22 hours ago* (last edited 22 hours ago)

The key differentiator is almost always the people, though that’s not as sexy as cutting edge technology.

Evidently you haven't worked with me. I'm actually quite sexy.

[–] kazerniel@lemmy.world 9 points 23 hours ago (3 children)

Fucking finally. Maybe the hype wave has crested 🤞

[–] RedGreenBlue@lemmy.zip 19 points 1 day ago (4 children)

For the things AI is good at, like reading documentation, one should just get a local model and be done.

I think pouring in as much money as big US companies have been doing is unwise. But when you have deep pockets, I guess you can afford to gamble.
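For the "just get a local model" route, a minimal sketch, assuming an Ollama server is running on its default port (localhost:11434) and a model such as `llama3` is already pulled; the `build_payload` helper and the example prompt are my own illustration, not from the thread:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(model, prompt):
    """Assemble the JSON body Ollama's /api/generate expects.

    stream=False asks for one complete JSON response
    instead of a stream of chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model, prompt):
    """Send the prompt to the local server and return the generated text."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (requires a running Ollama server with the model installed):
# print(ask_local_model("llama3", "Summarize the docs for PostgreSQL CREATE FUNCTION."))
```

No subscription, no data leaving the machine; the trade-off is your own hardware paying the inference bill.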

[–] jaykrown@lemmy.world 15 points 1 day ago (1 children)

It is absolutely a bubble, but the applications that AI can be used for still remain while the models continue to get better and cheaper. Here's the actual graph:

[–] r0ertel@lemmy.world 6 points 22 hours ago (2 children)

This contradicts what I've been reading, which says AI model costs grow with each generation, not shrink.

[–] jaykrown@lemmy.world 1 points 12 hours ago

Also that is the cost to train them, not the cost to use them, which is different.

[–] jaykrown@lemmy.world 1 points 12 hours ago

That was published a year ago, highly selective, doesn't include something like Llama 4 Maverick.

[–] umbrella@lemmy.ml 27 points 1 day ago

brace for the pop, this one's gonna be loud.

[–] mechoman444@lemmy.world 6 points 23 hours ago (1 children)

Of course. Although AI, or more accurately LLMs, do have useful functions, they are not the Star Trek computer.

I use ChatGPT as a Grammer check all the time. It's great for stuff like that. But it's definitely not an end-all-be-all solution for productivity.

I think corporations got excited that LLMs could replace human labor... But they can't.

[–] Typhoon@lemmy.ca 18 points 22 hours ago (3 children)

Grammer

Grammar.

There's nothing AI can do that an internet pedant can't.

[–] floofloof@lemmy.ca 5 points 22 hours ago

grammar

Mind your capitalization, fellow pedant.

[–] RandAlThor@lemmy.ca 5 points 22 hours ago (2 children)

Large companies (defined as having more than 250 employees) have reduced their AI usage, according to the data. The slowdown started in June, when usage was at roughly 13.5%, slipping to about 12% by the end of August.

Someone explain how I'm supposed to read this "rate". Is it an adoption rate or a usage rate? If it's an adoption rate, does that mean 13.5% of all large firms were using it, and that's declined to 12%? Or is it some sort of usage rate, and if so, what the fuck is 12% usage?
