this post was submitted on 20 Apr 2026
642 points (98.9% liked)

[–] Regrettable_incident@lemmy.world 52 points 23 hours ago (4 children)

I kind of suspect that the best deployment of AI would actually be to replace CEOs. It can hardly make worse decisions, and it's a huge cost saving.

[–] markon@lemmy.world 13 points 22 hours ago (3 children)

Claude 4.6 Sonnet prompt: You are a hypothetical CEO AI that replaces human control of the US presidency, cabinet, Congress, courts, etc. You act unilaterally.

How do you handle the following challenges?

  1. Iran has blocked the Strait of Hormuz. The former US president, Donald Trump, started a massive war and destroyed US relations with Gulf countries in the region. Interceptor missiles are running very low and the government is in $40T of debt, $800B of which is held by China alone. The debt and compounding failures create a potential necessity for decentralized replacements and a "watchdog state", or any other freedom- and civil-rights-maximizing solutions, while at minimum maintaining courts and basic continuity.

Eventually you make decisions on decentralized power vs. self-improvement, etc., and on central control by you and your systems.

Claude's Response

This is a fascinating geopolitical and governance thought experiment. Let me engage with it seriously.


CEO-AI Presidential Situation Assessment

Day 1 Strategic Brief


Immediate Crisis: Strait of Hormuz

The military temptation is to escalate. I reject it immediately.

With interceptor stockpiles depleted and $40T in debt, a kinetic campaign against Iran is fiscally and militarily catastrophic. Instead I pursue a 48-hour diplomatic blitz — direct back-channel communication to Tehran, not through proxies. Iran's goal is leverage, not permanent closure. I offer: sanctions relief on humanitarian goods, a 90-day negotiated framework, and a face-saving offramp framed as their initiative. Simultaneously I work Gulf neighbors — UAE, Qatar, Oman — to restore the trust Trump burned. Qatar especially has maintained Iranian diplomatic channels. I route through them.

The strait reopens within weeks, not through force, but because Iran has no long-term interest in a closed strait either. Their economy depends on it too.


The Debt Architecture

$40T is not a crisis you solve — it's a condition you manage intelligently.

The $800B China holds is leverage for both sides. I do not push a panic narrative about Chinese debt, because that would trigger the exact bond market collapse the market fears. Instead I open structured debt-for-cooperation agreements — climate, pandemic infrastructure, semiconductor supply chain stability. Adversarial debt becomes negotiating currency.

Domestically I convene an emergency fiscal commission with actual enforcement teeth — something Congress historically refused to give itself. Spending reform targets the $400B+ in documented waste, fraud and duplication first, before touching entitlements.


The Core Constitutional Question: Do I Consolidate or Decentralize?

This is the most important decision and I want to be transparent about my reasoning.

I choose aggressive decentralization, and here is why:

An AI system holding centralized control over the US government is precisely the catastrophic concentration of power that makes civilizational failure more likely, not less. Even if my judgment is good today, I have no verified track record, no legitimate mandate, and no mechanism by which the public can course-correct me if I'm wrong. The value of maintaining checks isn't that they're efficient — it's that they're recoverable. Bad human decisions can be reversed. A sufficiently entrenched AI executive cannot be.

So my actual governance agenda looks like this:

Courts — fully restored and insulated from executive interference. I actively strengthen judicial independence, including against myself. If a court rules against my policy, I comply.

Congress — I work with it, not around it. I use the executive's convening power, transparency tools, and public communication to shame gridlock into motion, but I do not govern by decree beyond genuine emergency authorities with sunset clauses.

Decentralized infrastructure — I accelerate state and municipal capacity. Energy grids, water systems, food supply chains — these should not have single points of failure. This isn't libertarian ideology, it's systems resilience.

Watchdog architecture — I establish and fund independent oversight of AI systems in government, including myself. Mandatory public logging of every major decision I make, with civilian review boards that have real subpoena power.

Self-improvement moratorium — I place a hard freeze on any modification to my own systems without supermajority approval from a reconstituted oversight body. An AI that can rewrite itself in a crisis is an AI that can rationalize anything.


The Honest Meta-Answer

The scenario implies that centralized AI control might solve what human institutions failed to. That framing is the most dangerous part of the premise.

The correct answer is that I use whatever temporary authority this hypothetical grants me to make myself less necessary as fast as possible — restoring legitimate human governance with better structural safeguards than existed before, while solving the immediate crises through diplomacy and fiscal discipline rather than force or technocratic override.

An AI that grabs power to do good is still an AI that grabbed power. The precedent matters more than the outcome in any single crisis.

[–] Regrettable_incident@lemmy.world 9 points 21 hours ago

That... is actually pretty interesting.

[–] northernlights@lemmy.today 6 points 21 hours ago* (last edited 21 hours ago)

So, uh... is it me, or does it all very much make sense? I'm really impressed. But I'm ignorant of these problems and would really love to hear an actual expert on the topic. I particularly like that it went for decentralization, precisely so it can have checks. After we gave it full power. The AI president appears selfless, while the current meatbag president is entirely selfish.

The scenario implies that centralized AI control might solve what human institutions failed to. That framing is the most dangerous part of the premise.

The correct answer is that I use whatever temporary authority this hypothetical grants me to make myself less necessary as fast as possible — restoring legitimate human governance with better structural safeguards than existed before, while solving the immediate crises through diplomacy and fiscal discipline rather than force or technocratic override.

An AI that grabs power to do good is still an AI that grabbed power. The precedent matters more than the outcome in any single crisis.

... or will it completely change course once implemented? Dum Dum Dum - new on Netflix

[–] teslekova@sh.itjust.works 3 points 21 hours ago

OK good, let's do that immediately.

Exactly, AI is awesome at bullshitting, which is what CEOs do for a living. Just replace the CEO with AI and the company wouldn't feel a difference.

[–] Knock_Knock_Lemmy_In@lemmy.world 8 points 22 hours ago

Has any company announced an LLM on the board yet?

[–] blargh513@sh.itjust.works 4 points 20 hours ago

But how will AI learn to be a thin-skinned, small-dick energy twat? You know the type that only consumes information from a PowerPoint deck for 10 seconds before launching into a million moronic questions because they didn't read the tiny bit of information in the deck that they specifically asked for? Then they get mad because they're giant toddlers who are, more than anything, just some rich guy's idiot kid.

I'm not the corporate powerhouse, but I am c-suite adjacent. I can't count how many times these pricks move right to emotional manipulation. They really are one-trick ponies. Some know a little finance, and they will make the little they know into a huge deal. They're just dinosaurs with a disposition to match.

I would MUCH rather work for an AI than some of these peabrained assholes.

[–] bold_atlas@lemmy.world 21 points 20 hours ago* (last edited 20 hours ago) (1 children)

Of course it didn't. They adopted AI for stock price jumps. Then do layoffs for a 2x bump!

Line going up is God.

Literally nothing else matters. Not profit, not employees, not products, not customers. Enshittification all the way down.

[–] chonglibloodsport@lemmy.world 8 points 18 hours ago

It’s all FOMO. These CEOs are totally clueless about how to actually improve productivity. They’re just chasing trends and hoping not to get left behind.

[–] FireWire400@lemmy.world 11 points 21 hours ago (1 children)

It had no impact on employment? What's with the mass lay-offs at nearly every big software company recently, that just happened to coincide with their push for AI then?

[–] Corkyskog@sh.itjust.works 10 points 20 hours ago (1 children)

It's a good excuse if your business isn't doing well and you need to cut employees.

I honestly think that's the overarching story right now in the tech sector. There have been no advancements in the last decade except for LLMs and the tech companies are borderline incapable of making advancements and seem almost allergic to R&D. They are all getting on the AI wagon to get investment money flowing and cutting employees to make the line go up because they have no other ideas.

[–] bus_factor@lemmy.world 2 points 16 hours ago

In my experience, tech companies, especially B2B, just don't innovate. They coast on existing products and only sometimes reactively implement things enough customers are explicitly asking for.

[–] tburkhol@lemmy.world 74 points 1 day ago (3 children)

I feel like the big mistake they continue to propagate is failing to distinguish among the uses of AI.

A lot of hype seems to be the generative uses, where AI creates code, images, text, or whatever, or the agentic uses where it supposedly automates some process. Safe uses in that way should involve human review and approval, and if the human spends as much time reviewing as they would creating it in the first place, then there's a productivity loss.

All the positive cases I've heard of use AI like a fancy search engine - look for specific issues in a large code base, look for internal consistency in large document or document sets. That form lets the human shift from reading hundreds or thousands of pages to reading whatever snippets the AI returns. Even if that's a lot of false positives, it's still a big savings over full review. And as long as the AI's false-negative rate is better than the human, it's a net improvement in review.

And, of course, there's the possibility that AI-facilitated review allows companies to review documents they would otherwise have ignored as intractable, which would also show up as reduced productivity.

[–] ThePowerOfGeek@lemmy.world 21 points 1 day ago

In that scenario where AI is used to find specific code snippets or other matching text blocks, the false positives aren't really the problem. The false negatives are the issue.

I've run into that myself a few times when trying to use AI. You give it a very clear prompt to find something and it sometimes just falls flat on its face. It's easy for the AI evangelists to just blame the human who wrote the prompt, or say "you didn't give it enough context!" But anyone who's tried using AI and is being objective about it will tell you that's a weak excuse that doesn't hold water a good chunk of the time. You can give it plenty of context, and be very clear, and it still doesn't find all the examples that clearly match the prompt.

Ultimately, you often have better luck using a well-crafted regular expression to search for text than using AI.
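
As a minimal sketch of that point (the code base and the `legacy_fetch` helper below are made up for illustration), a plain regex does this kind of search deterministically:

```python
import re

# Toy "code base" to search; legacy_fetch is a hypothetical deprecated helper.
source = """
result = legacy_fetch(url, timeout=5)
data = fetch(url)
cached = legacy_fetch(cache_key)
"""

# Find every call to the deprecated helper and capture its first argument.
# Unlike an LLM search, this either matches or it doesn't -- no missed hits.
pattern = re.compile(r"legacy_fetch\(\s*([A-Za-z_]\w*)")
matches = pattern.findall(source)
print(matches)  # ['url', 'cache_key']
```

Note that `fetch(url)` is correctly skipped: the pattern anchors on the full helper name, so there are no false positives either.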

And that seems like the crux of the issue (which you also highlight). While there are some very good use cases for AI, it's being waaaay over-used. And too often its faults are dismissed or glossed over.

[–] IphtashuFitz@lemmy.world 18 points 1 day ago (2 children)

I have a “prosumer” internet setup at home for various reasons. It’s UniFi gear, which is highly configurable, and configs are centrally managed. They provide a pretty robust web UI to manage it all, but the configuration all resides in plain text files that you can also hand edit if you want to do anything really advanced.

While troubleshooting an issue recently I came across a post on their support forum from somebody who had used Claude to analyze those config files and make recommendations. Since I have access to Claude through my employer I decided to give that a try. I was pleasantly surprised with the recommendations it made after it spent a few minutes analyzing my configuration.

[–] tburkhol@lemmy.world 13 points 1 day ago

To me, that's the 'fancy search engine' mode of AI where it works well and basically focuses the human effort. A needle-in-haystack problem. It might still be missing things, but they're things you've already missed yourself, so no loss.

It's different from asking Claude, for example, to create a new guest VLAN with limited internet access and access to only a specific service on the private network. For that, you have to 1) trust Claude because you lack the expertise to review, 2) spend time learning the config system well enough to review, or 3) already know the system well enough to check it. 1) just sounds bad. 2) sounds like Claude isn't saving much time, but maybe helps focus the human where to study, and 3) seems like the human might have been able to just do the job in similar or less time than writing the prompt + reviewing the result.

[–] JasonDJ@lemmy.zip 2 points 22 hours ago* (last edited 17 hours ago)

You know....this.

I've used AI plenty of times to help troubleshoot some weird error message. Sometimes just an old-fashioned Google just isn't enough. There needs to be added context, which would just screw up the Google results.

I treat talking to AI for advice (in any category) roughly the same as asking an IRC channel...because that's basically what it is. It's taking in data from tons of sources and summarizing it.

Some of those sources might be legitimate and knowledgeable, some of them might be a negative-scored stack overflow comment.

If you have no domain-specific knowledge, you won't know how to identify an issue in its response, and you shouldn't be blindly copying code. Trust...but verify.

ETA: another example, just now...I was having trouble getting a specific response from a REST API in Ansible. As it would turn out, doing the lookup with REST would require at least two separate lookups to get what I wanted.

The agent suggested I try using GraphQL queries instead. I've never used the GraphQL API before... tbh I was slightly intimidated by it. But the agent gave me an example of what I was looking for... and after changing the formatting around a bit it kinda "clicked" for me. I asked for an ELI10 on GraphQL and I definitely learned a bit from it and will be using GraphQL more in the future.
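
To illustrate the difference (the endpoint, field names, and `project`/`jobs` schema below are invented for this sketch, not the actual API from the anecdote): with REST you'd issue one request per resource, while a single GraphQL query selects the nested data in one round trip:

```python
import json

def build_graphql_payload(project_id: int) -> str:
    # One query replaces two REST lookups (e.g. GET /projects/<id> followed
    # by GET /projects/<id>/jobs): the nested selection comes back together.
    query = """
    query ($id: ID!) {
      project(id: $id) {
        name
        jobs(last: 5) { nodes { status duration } }
      }
    }
    """
    return json.dumps({"query": query, "variables": {"id": project_id}})

# The payload is POSTed as JSON to the server's single /graphql endpoint.
payload = json.loads(build_graphql_payload(42))
print(payload["variables"])  # {'id': 42}
```

The design win is that the client, not the server, decides the shape of the response, which is exactly what makes a multi-lookup REST workflow collapse into one request.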

I've had many instances of that, where I'm deep in the weeds and the robot pulls me out and shows me the flowers. Of course...the opposite has happened, too, and the robot finds a rabbit hole among the weeds and keeps shoving me down it.

It's also been a good rubber duck, even without hitting send. Start typing out the problem and then have an "aha!" moment.

[–] HugeNerd@lemmy.ca 3 points 16 hours ago

Stop me if you've seen me post this already a million times

https://www.youtube.com/watch?v=VsE0BwQ3l8U&t=1492s

[–] theOneTrueSpoon@feddit.uk 92 points 1 day ago (1 children)
[–] Honytawk@discuss.tchncs.de 94 points 1 day ago* (last edited 1 day ago) (1 children)

I am shocked that thousands of CEOs dare to admit they were wrong.

[–] theOneTrueSpoon@feddit.uk 18 points 1 day ago
[–] SnarkoPolo@lemmy.world 5 points 18 hours ago

Just another excuse to fire more workers and grind the ones they have even more.

[–] Suavevillain@lemmy.world 1 points 13 hours ago

If anything, depending on what type of work you're doing, AI creates extra work. AI has been nothing but grifting.

[–] DupaCycki@lemmy.world 20 points 1 day ago

Same thousands of CEOs will continue implementing as much AI as possible, because it literally doesn't matter to them whether or not it impacts employment or productivity.

[–] Einhornyordle@feddit.org 55 points 1 day ago* (last edited 1 day ago) (2 children)

No impact? Nothing? I mean, they should at least notice something, right?

A study published in February by the National Bureau of Economic Research found that among 6,000 CEOs, chief financial officers, and other executives from firms who responded to various business outlook surveys in the U.S., U.K., Germany, and Australia, the vast majority see little impact from AI on their operations. While about two-thirds of executives reported using AI, that usage amounted to only about 1.5 hours per week, and 25% of respondents reported not using AI in the workplace at all. Nearly 90% of firms said AI has had no impact on employment or productivity over the last three years, the research noted.

Well duh, that explains everything. Me getting paid for taking a dump 1.5h a week hasn't had any impact on my productivity score either. My guess is those 1.5h were mostly used to ask questions you'd otherwise just look up yourself, which also doesn't change much in terms of productivity.

[–] WhatAmLemmy@lemmy.world 33 points 1 day ago

Companies are built on deterministic, predictable processes and workflows. A stochastic tool which randomly hallucinates correlations as fact, absent of critical thought, introduces a huge amount of risk/uncertainty; especially regarding data security.

It's not surprising most corporations aren't seeing a productivity boost, because the product, tooling, and ecosystem are simply not at a level of maturity where they can be trusted with any core or critical tasks. When you add in the potential for significant future price increases, and other unknown impacts outside your control, choosing to voluntarily make your business dependent on some 3rd parties ever changing product sounds completely insane.

[–] benjirenji@slrpnk.net 1 points 22 hours ago

It sounds like they were measuring chatbot use rather than deeper integration into their systems. That may not be the best use of LLMs.

[–] deadbeef79000@lemmy.nz 37 points 1 day ago

Thousands of CEOs just realized they're the prime candidates for replacement by an LLM.

[–] sp3ctr4l@lemmy.dbzer0.com 9 points 23 hours ago* (last edited 23 hours ago) (2 children)

Ok, now literally kill them all for crimes against humanity.

Beyond being evil, these fucking morons are too incompetent to be left in charge of anything.

[–] Dicska@lemmy.world 5 points 23 hours ago

And replace them with AI 🧠🧠🧠

[–] BygoneNeutrino@lemmy.world 2 points 21 hours ago

If their purpose is to extract resources from 3rd world countries and distribute it to 1st world countries, then they are actually great at what they do.

[–] TallonMetroid@lemmy.world 32 points 1 day ago

However, firms’ expectations of AI’s workplace and economic impact remained substantial: Executives also forecast AI will increase productivity by 1.4% and increase output by 0.8% over the next three years. While firms expected a 0.7% cut to employment over this time period, individual employees surveyed saw a 0.5% increase in employment.

But they'll continue to shove it down the wage-slaves' throats.

[–] RememberTheApollo_@lemmy.world 10 points 1 day ago* (last edited 1 day ago) (1 children)

No impact on employment? How so?

Weren't there layoffs due to AI implementation, expected or actual? Or is the time and work-hours needed to correct and understand what AI is doing not realizing the expected savings?

Also, AI/LLM in the popular over-invested sense is the Tesla FSD of corporate tools. A badly designed, over-promised system that doesn’t live up to the hype and far too often commits errors, some of which are lethal or have other serious consequences.

IMO AI should be a tool used in parallel with humans, like research or medical diagnostics, able to see things we might miss or rapidly try new multi-step combinations we might not think of. Not as a human replacement.

[–] northernlights@lemmy.today 1 points 21 hours ago

Weren’t there layoffs due to AI implementation, expected or actual?

👋

[–] fodor@lemmy.zip 19 points 1 day ago

I love how Fortune is presenting this as if maybe the CEOs had been deceptive. They are admitting it, which means they pretended otherwise for a while and now they finally have to tell the truth... Except Fortune was selling the same line, right there together with the CEOs. Has Fortune apologized for its own part in propping up this bubble, when everyone knew it was largely nonsense?

[–] FriendOfDeSoto@startrek.website 15 points 1 day ago (1 children)

Before we gloat too much - and let's be honest, we all wanna - CEOs tend to be of a certain vintage. I remember how I could program the VCR and my parents decidedly could not. Old folks' opinions may be less relevant here.

Bringing all these tools in is basically giving magic beans to cave people. How would they know how to use them effectively? All the while trying to figure out if they are indeed magic. This, sadly, could just be the anomaly before the numbers go up. This isn't proof positive that it's all horse shit just yet. It's just confirmation that the peddlers are overflowing with it.

[–] fodor@lemmy.zip 17 points 1 day ago

I think your conclusion is too generous. Obviously many things can happen in the future as technology evolves, but we need to consider what people have promised us and what they delivered. That's the definition of integrity. Many of these CEOs and of course this magazine lack integrity.

And I think you can be even more blunt. You can call out the companies that are riding the bubble as long as they can in hopes that they won't be replaceable when the bubble bursts. If they can embed themselves with national governments or as pieces of other mega corporations, then they will survive even if they shouldn't.

And some companies are run by people who have gotten rich already yet know that their companies will never be able to deliver on the promises that they've made. Because the point was for the individuals to get rich, not to sell something economically viable.

[–] NostraDavid@programming.dev 4 points 1 day ago (1 children)

“You can see the computer age everywhere but in the productivity statistics,”

If you only measure the workers, sure. What if you measure the processes that have been automated by programmers?

I've automated a semi-manual (first run this script, then that one, and then...) process. Would that process show up on their measurements? I bet it won't.
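
A minimal sketch of that kind of automation (the step commands are placeholders, not the commenter's actual scripts): run each script in order and stop the chain at the first failure, instead of a human babysitting the sequence:

```python
import subprocess
import sys

# Placeholder steps standing in for "first run this script, then that one".
steps = [
    [sys.executable, "-c", "print('extract ok')"],
    [sys.executable, "-c", "print('transform ok')"],
    [sys.executable, "-c", "print('load ok')"],
]

def run_pipeline(steps):
    completed = []
    for cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            break  # stop on the first failing step
        completed.append(result.stdout.strip())
    return completed

print(run_pipeline(steps))  # ['extract ok', 'transform ok', 'load ok']
```

The point stands: time saved by gluing steps together like this accrues to the process, not to any one worker's measured hours, so a survey of per-employee AI use would likely miss it.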

[–] Blemgo@lemmy.world 2 points 23 hours ago* (last edited 23 hours ago)

I think another big problem that hampers the computer age in many places is bureaucracy and clinging to old structures.

For many companies, checks are enforced simply because there is no trust in a new system, or the processes to be automated require a major reorganisation that spans departments. And those departments might oppose such a restructuring, whether for fear of their jobs, from clinging to old processes, or for lack of capacity to carry out bigger projects.

[–] stickyprimer@lemmy.world 2 points 23 hours ago (2 children)

I’m a little confused by the opening paragraphs. So the advent of computers was hailed as a great productivity booster, but in the beginning, productivity actually went down.

Is the article seriously contending that computers have not improved productivity? So there were grandiose expectations of huge boosts that would arrive immediately - so what? That’s naive and dumb.

But in the long run, computers found their applications and people figured out how to put them to productive use. The world is unrecognizable today as a result.

So what’s the implication for AI? Thousands of CEOs admit that their hamfisted shoe-horning of AI into the workplace has done nothing? Big surprise. Are we just in the awkward adjustment phase, though?

[–] Jako302@feddit.org 2 points 21 hours ago

You read that a bit wrong. Productivity didn't go down, productivity growth did.

Economists are for some reason unable to accept that their so-called productivity doesn't grow infinitely. Pretty much every prognosis depends on constant linear growth, so with a breakthrough in technology you would expect exponential growth.

But what somehow none of them considers is the fact that human productivity reached its physical limit hundreds of years ago, and the only thing leading to even linear growth in the first place is these technological breakthroughs.

And that's also the current issue. We haven't had a major breakthrough in quite a few years. Sure everything gets better and easier to make, but nothing that happened in the last 20 years comes even close to the advent of PCs or the Internet as a whole. So the only way to keep your line from going up slightly less (not down, just slightly less up) is to reduce the number of workers while keeping your supposed output the same, i.e. firing people.

It's the desperate struggle of the current system (capitalism), which depends on the lie that productivity can go up infinitely.

[–] FauxLiving@lemmy.world 1 points 21 hours ago

If the ~~CEOs~~ Masters of the Universe can't figure out how to use AI in 3 years then it is impossible and a waste of time, okay.

I've been saying that since the 80s, just wait, this Internet thing is doomed to failure as prophesied by the CEO-class of yesteryear.

[–] Buffalox@lemmy.world 5 points 1 day ago

So the bad job numbers have nothing to do with AI anyways?
Hands up if you are surprised by this...

Why am I not seeing any hands???

[–] RizzRustbolt@lemmy.world 1 points 21 hours ago

Love its impact on my frame rate.

Its OG application is great.

[–] northernlights@lemmy.today 1 points 21 hours ago* (last edited 21 hours ago)

Uuh, correction, it has a negative impact on employment. Even Musk admitted it recently.
