this post was submitted on 14 Apr 2025
220 points (92.3% liked)

Technology

For now, the artificial intelligence tool named Neutron Enterprise is just meant to help workers at the plant navigate extensive technical reports and regulations — millions of pages of intricate documents from the Nuclear Regulatory Commission that go back decades — while they operate and maintain the facility. But Neutron Enterprise’s very existence opens the door to further use of AI at Diablo Canyon or other facilities — a possibility that has some lawmakers and AI experts calling for more guardrails.

top 50 comments
[–] hansolo@lemm.ee 91 points 1 week ago (4 children)

It's just a custom LLM for records management and regulatory compliance. Literally just for paperwork, one of the few things that LLMs are actually good at.

Does anyone read more than the headline? OP even said this in the summary.

[–] cyrano@lemmy.dbzer0.com 24 points 1 week ago (1 children)

I agree with you, but you can see the slippery slope with the LLM returning incorrect/hallucinated data, the same way it happens in the public space. It might seem trivial for documentation, until you realize the documentation could be critical for some processes.

[–] hansolo@lemm.ee 8 points 1 week ago (1 children)

If you've never used a custom LLM or wrapper for regular ol' ChatGPT, a lot of what it can hallucinate gets stripped out and the entire corpus of data it's trained on is your data. Even then, the risk is pretty low here. Do you honestly think that a human has never made an error on paperwork?
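For the curious, here's roughly the shape of such a wrapper, as a minimal sketch assuming a retrieval-augmented setup; the names and the toy word-overlap ranking are illustrative, not anything from the article:

```python
# Minimal sketch of a retrieval-grounded wrapper: the model only ever sees
# excerpts pulled from YOUR corpus, and is told to answer from those alone.
from dataclasses import dataclass


@dataclass
class Chunk:
    doc_id: str   # e.g. an internal report identifier
    text: str     # a passage from that document


def retrieve(query: str, corpus: list[Chunk], top_k: int = 3) -> list[Chunk]:
    """Toy relevance ranking by word overlap. Real systems use vector
    embeddings, but the grounding principle is the same."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda c: len(q & set(c.text.lower().split())),
                    reverse=True)
    return ranked[:top_k]


def build_prompt(query: str, hits: list[Chunk]) -> str:
    # Telling the model to answer ONLY from the excerpts, and to say so when
    # they don't contain the answer, is what strips out most hallucination.
    context = "\n".join(f"[{c.doc_id}] {c.text}" for c in hits)
    return ("Answer using only the excerpts below, citing each [doc_id] "
            "you rely on. If the excerpts don't contain the answer, say so.\n\n"
            f"Excerpts:\n{context}\n\nQuestion: {query}")
```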

[–] cyrano@lemmy.dbzer0.com 7 points 1 week ago

I do, and even contained ones do return hallucinations or incorrect data. So it depends on the application you use it for. For a quick summary or data search, why not? But for some operational process, that might be problematic.

[–] null_dot@lemmy.dbzer0.com 7 points 1 week ago

It depends what purpose that paperwork is intended for.

If the regulatory paperwork it's managing is designed to influence behaviour, perhaps having an LLM do the work will make it less effective in that regard.

Learning and understanding is hard work. An LLM can't do that for you.

Sure it can summarise instructions for you to show you what's more pertinent in a given instance, but is that the same as someone who knows what to do because they've been wading around in the logs and regs for the last decade?

It seems like, whether you're using an LLM to write a business report, a legal submission, or an SOP for running a nuclear reactor, it can be a great tool, but it requires high-level knowledge on the part of the user to review the output.

As always, there's a risk that a user just won't identify a problem in the information produced.

I don't think this means LLMs should not be used in high risk roles, it just demonstrates the importance of robust policies surrounding their use.

[–] iAvicenna@lemmy.world 4 points 1 week ago (1 children)

NOOOOOO ITS DOING NUCLEAR PHYSICS!!!!!!!!111

[–] hansolo@lemm.ee 7 points 1 week ago (1 children)

It's eating the rods, it's eating the ions!

[–] nieminen@lemmy.world 2 points 1 week ago (1 children)
[–] iAvicenna@lemmy.world 3 points 1 week ago (1 children)

I unfortunately don't get it. Can someone explain?

[–] nieminen@lemmy.world 2 points 6 days ago (1 children)
[–] iAvicenna@lemmy.world 2 points 6 days ago

Oh shit, I had already forgotten about this amid so many other scandals. The guy who said this is running the whole of the US like a fucking medieval kingdom, another reality slap in the face. At the time I thought, "surely no one in their right mind would vote for this scammer".

[–] dumbass@leminal.space 62 points 1 week ago (4 children)

Huh, it really is Russian roulette with how we're all gonna die. Could be WW3, could be another pandemic, or could be a bunch of AIs hallucinating and causing multiple nuclear meltdowns.

[–] Deceptichum@quokk.au 4 points 1 week ago

Don’t forget the inevitable climate change.

[–] prex@aussie.zone 2 points 1 week ago

I can only hope my bingo card somehow explodes & kills me.

[–] scarabic@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

It’s literally just a document search for their internal employees to use.

Those employees are fallible humans trying to navigate tens of thousands of byzantine technical and regulatory documents all published on various dinosaur platforms.

AI hallucination is a very popular thing to get outraged about right now but don’t forget about good old fashioned bureaucratic error.

My employer implemented AI search/summarization of our docs/wiki/intranet/JIRA systems over a year ago and it has been very effective in my experience. It always links to the source docs, but it permits natural language queries and can do some reasoning about the contents of the documents to pull together information across a sea of text.
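Conceptually, what comes back looks something like this (an illustrative sketch, not our actual stack; the summary text, document name, and URL are made up):

```python
# Illustrative only: the assistant returns an answer plus the documents it
# was built from, so a human verifies the sources instead of trusting the
# summary blindly.
from dataclasses import dataclass


@dataclass
class Answer:
    summary: str
    sources: list[str]  # links back to the docs/wiki/JIRA pages used


def render(answer: Answer) -> str:
    links = "\n".join(f"  - {url}" for url in answer.sources)
    return f"{answer.summary}\n\nSources:\n{links}"


print(render(Answer(
    summary="Deploys are gated by the checklist in RUNBOOK-7, section 3.",
    sources=["https://intranet.example/wiki/RUNBOOK-7"],
)))
```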

Nothing that is mission critical enough to lead to a reactor meltdown should ever be blindly trusted to these tools.

But nothing like that should ever be trusted to the whims of one fallible human, either. This is why systems have protocols, checks and balances, quality controls, and failsafes.

Giving employees a more powerful document search doesn’t somehow sweep all that aside.

But hey, don’t let a rational, down-to-earth argument stand in the way of freaking out about a sci-fi dystopia.

[–] besselj@lemmy.ca 48 points 1 week ago (1 children)

The LLM told me that control rods were not necessary, so it must be true

[–] twice_hatch@midwest.social 10 points 1 week ago

The chatbot said 3.6 Roentgen is just fine and the core cannot have exploded, maybe we heard a truck driving by

[–] cyrano@lemmy.dbzer0.com 42 points 1 week ago
[–] MuskyMelon@lemmy.world 30 points 1 week ago (2 children)

Finally we get the sequel to "Chernobyl" ... set in America...

[–] Slotos@feddit.nl 9 points 1 week ago

Live action at that

[–] Evil_Shrubbery@lemm.ee 6 points 1 week ago

They made the prequel already - wiki/Three_Mile_Island_accident.

[–] BombOmOm@lemmy.world 22 points 1 week ago

Can we not have the lying bots teaching people how to run a nuclear plant?

[–] Sterile_Technique@lemmy.world 17 points 1 week ago

Diablo Canyon

The nuclear power plant run by AI slop is located in a region called "Diablo Canyon".

Right. We sure this isn't an Onion article? ...actually no, it couldn't be, The Onion's writers aren't that lazy.

Fuckin whatever, I'm done for the night. Gonna head over to Mr. Sandman's squishy rectangle. ...bet you'll never guess what I'm gonna do there!!

[–] DogPeePoo@lemm.ee 12 points 1 week ago* (last edited 1 week ago)

What could possibly go wrong?

[–] jaybone@lemmy.zip 12 points 1 week ago (2 children)
[–] pyre@lemmy.world 3 points 1 week ago

using AI in a nuclear plant at Diablo Canyon... it's so on the nose you'd say it's lazy writing if it were part of the backstory of some scifi novel.

[–] hansolo@lemm.ee -2 points 1 week ago (1 children)

Well, considering it's exclusively for paperwork and compliance, the worst that can happen is someone relying on it too much, filing an incorrect, I dunno, license renewal with the DOE, and being asked to do it again.

Ah. The horror.

[–] pivot_root@lemmy.world 12 points 1 week ago (1 children)

When it comes to compliance and regulations, anything with the literal blast radius of a nuclear reactor should not be trusted to an LLM unless double- or triple-checked by another party familiar with said regulations. Regulations were written in blood, and an LLM hallucinating a safety procedure or operating protocol is a disaster waiting to happen.

I have less qualms about using it for menial paperwork, but if the LLM adds an extra round-trip to a form, it's not just wasting the submitter's time, but other people's as well.

[–] hansolo@lemm.ee -1 points 1 week ago (1 children)

All the errors you know about in the nuclear power industry are human-caused.

Is this an industry with a 100% successful operation rate? Not at all.

But have you ever heard of a piece of paperwork with an error submitted to regulatory officials and lawyers outside the plant causing a critical issue inside the plant? I sure haven't. Please feel free to let me know if you are aware of such an incident.

I would encourage you to learn more about how LLM and SLM architectures work. This article is superlative clickbait over a nothingburger, IMO. And if it's running locally, the system at least appears to be airgapped, which is nice.

I would bet money that this will be entirely managed by the most junior compliance person who is not 120 years old, with more senior folks cross-checking it with more suspicion than they would a new hire.

[–] gedhrel@lemmy.world 8 points 1 week ago

I'm not sure if that opening sentence is fatuous or not. What errors in any industrial enterprise are not human in origin?

[–] pyre@lemmy.world 8 points 1 week ago* (last edited 1 week ago) (2 children)

to people who say it's just paperwork or whatever and it doesn't matter: this is how it begins. they'll save a couple cents here and there, and then they'll want to expand this.

[–] Takumidesh@lemmy.world 3 points 1 week ago

Also, it's not like the paperwork isn't important.

[–] scarabic@lemmy.world 0 points 1 week ago (3 children)

That’s a textbook slippery slope logical fallacy.

[–] Objection@lemmy.ml 2 points 1 week ago

Slippery slope arguments aren't inherently fallacious.

[–] pyre@lemmy.world 2 points 1 week ago* (last edited 1 week ago) (1 children)

it's not actually. there's barely an intermediate step between what's happening now and what I'm suggesting it will lead to.

this is not "if we allow gay marriage people will start marrying goats". it's "if this company is allowed to cut corners here they'll be cutting corners in other places". that's not a slope; it's literally the next step.

slippery slope fallacy doesn't mean you're not allowed to connect A to B.

[–] scarabic@lemmy.world 0 points 1 week ago (1 children)

You may think it’s as plausible as you like. Obviously you do or you wouldn’t have said it. It’s still by definition absolutely a slippery slope logical fallacy. A little will always lead to more, therefore a little is a lot. This is textbook. It has nothing to do with companies, computers, or goats.

[–] pyre@lemmy.world 0 points 1 week ago

this is textbook fallacy fallacy

[–] TheOakTree@lemm.ee 1 points 1 week ago (1 children)

True, but if you change the argument from "this will happen" to "this will happen more frequently", then it's still a very reasonable observation.

[–] scarabic@lemmy.world 1 points 1 week ago

All predictions in this vein are invalid.

If you want to say “even this little bit is unsettling and we should be on guard for more,” fine.

That’s different from “if you think this is only a small amount you are wrong because a small amount will become a large amount.”

[–] Tea@programming.dev 5 points 1 week ago
[–] Goretantath@lemm.ee 4 points 1 week ago

Fucking christ..

[–] NarrativeBear@lemmy.world 3 points 1 week ago

SkyNet is fully operational, operating at 60 teraflops.

[–] werefreeatlast@lemmy.world 1 points 1 week ago

Dave, I don't know what to tell you, but you can't come in, alright?