this post was submitted on 12 Apr 2026
445 points (94.2% liked)

Technology

top 50 comments
[–] GreenBeanMachine@lemmy.world 3 points 1 hour ago

I guess even smart people can make stupid decisions. Probably financially motivated decisions too.

[–] webkitten@piefed.social 5 points 1 hour ago

Linux kernel being written by Microsoft's AI.

[–] SethTaylor@lemmy.world 7 points 4 hours ago* (last edited 3 hours ago) (3 children)

Bad actors submitting garbage code aren't going to read the documentation anyway, so the kernel should focus on holding human developers accountable rather than trying to police the software they run on their local machines.

"Guns don't kill people. People kill people"

Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.

The author should elaborate on how exactly AI is like "a specific brand of keyboard". Last I checked a keyboard only enters what I type, without hallucinating 50 extra pages. And if AI, a tool that generates content, is like "a specific brand of keyboard", does that mean my brain is also a "specific brand of keyboard"?

I get their point. If you want to create good code by having AI create bad code and then spending twice the time to fix it, feel free to do that. But I'm in favor of a complete ban.

[–] Simulation6@sopuli.xyz 6 points 2 hours ago

The keyboard thing is sort of a parable: it's as difficult to determine whether code was generated in part by AI as it is to determine which keyboard was used to write it.

[–] Shayeta@feddit.org 4 points 3 hours ago* (last edited 2 hours ago)

AI is a useful tool for coding as long as it's used properly. The problem isn't the tool; the problem is the companies that scraped the entire internet, trained LLMs on it, and then put those models behind paywalls with no option to download the weights for self-hosting. Brazen, unaccountable profiteering off the goodwill of many open source projects without giving anything back.

If LLMs were community-trained on available open-source code, with weights freely available for anyone to host, there wouldn't be nearly as much animosity toward the tech itself. The enemy isn't the tool, but the ones who built the tool at everyone's expense and are hogging all the benefits.

[–] ede1998@feddit.org 2 points 3 hours ago

Last I checked a keyboard only enters what I type

I've had (broken) keyboard "hallucinate" extra keystrokes before, because of stuck keys. Or ignore keypresses. But yeah, that means the keyboard is broken.

[–] sonofearth@lemmy.world 21 points 6 hours ago (1 children)

I'm a c/fuck_ai person, but at this point I've made peace with the fact that we can't avoid it. I still don't want it doing artsy stuff (image gen, video gen), or being used blindly for critical work, because humans are the ones who should be doing that, or at least providing constant oversight. I think the team's logic is correct here: there is no way to know whether code came from an LLM or a human unless something in it screams LLM, or the contributor explicitly mentions it. Mandating the latter seems like a reasonable move for now.

[–] DaleGribble88@programming.dev 5 points 4 hours ago (1 children)

I consider myself more pro-AI than not, but I'm certainly not a zealot and mostly agree with the take that it shouldn't be used in artistic pursuits. However, I love using AI to help me create art. It can give great critiques, often good advice on how to improve, and is great for rapid experimentation and prototyping. I actually used it this weekend to see what a D&D mini might look like with different color schemes before painting it. I could have done the same with GIMP, but it would have taken much longer for worse results, in what was ultimately just a brainstorming session. How do you feel about my AI usage from your perspective? I suppose from an energy-conservation perspective all of it was bad, but I'm more interested in a less trivial take.

[–] sonofearth@lemmy.world 3 points 3 hours ago

Yes, the energy consumption is bad. My main gripe with LLM-generated art is that it will not be original: it draws on training data from uncredited artworks. Art is usually made by humans to express or convey something in a creative way, and LLMs fail at that. What LLMs can actually help with is making learning art more accessible to everyone. Art schools and private art classes can be expensive; this lowers the barrier to entry.

As for your use of generated art: it might be really beautiful, but it will be very difficult to maintain that style, and even more difficult to convince anyone that it is your style. The artist doesn't get much recognition with LLM-generated art. Using it for critique also seems misguided, because LLMs will always try to give an objective view rather than a subjective one. Your art won't trigger an emotion in the model; it might just say the piece is bad, or "do this to make it more understandable", and that's where you lose as an artist.

My mom likes to paint as a hobby. What she does is search for stuff on Pinterest (which is now mostly AI-generated), use it as inspiration for a piece in her own style, and maybe give it some spin. She keeps all of it for herself.

[–] NewNewAugustEast@lemmy.zip 35 points 10 hours ago* (last edited 10 hours ago) (1 children)

Copilot? You mean the AI with terms of service that are in bold and explicit: "for entertainment purposes only"?

Which is why it's in the title and not the article? EntertainBait?

[–] Zacryon@feddit.org 15 points 9 hours ago (1 children)

I suppose GitHub Copilot is meant, which is a different thing.

[–] Senal@programming.dev 6 points 8 hours ago (1 children)

Different how? Isn't GitHub owned by Microsoft?

[–] lepinkainen@lemmy.world 21 points 8 hours ago (2 children)

There are like 70 copilots

[–] Senal@programming.dev 3 points 1 hour ago

OK, so there are 70-81 Copilots, and GitHub Copilot is one of them.

Why is GitHub Copilot a different thing in the context of the reply that was being responded to?

[–] ThinkyMcThinkface@lemmy.zip 12 points 7 hours ago (1 children)
[–] Diurnambule@jlai.lu 6 points 6 hours ago (1 children)

What the hell. How can they expect people to understand? They plan to sell a hundred things under the same name, marketing it as one big AI, when it's actually a hundred different, unrelated products?

[–] Squizzy@lemmy.world 1 points 2 hours ago

Most of those are bundled; no one is buying Copilot for OneNote, they just get it when they get the rest of that suite.

[–] hperrin@lemmy.ca 21 points 9 hours ago

There are so many reasons not to include any AI generated code.

https://sciactive.com/human-contribution-policy/#Reasoning

[–] CanIFishHere@lemmy.ca 39 points 11 hours ago (3 children)

AI is here; it's another tool, to be used the correct way. Very reasonable approach from Torvalds.

[–] Newsteinleo@infosec.pub 20 points 10 hours ago (4 children)

I don't have a problem with LLMs as much as the way people use them. My boss has offloaded all of his thinking to LLMs to the point he can't fix a sentence in a slide deck without using an LLM.

It's the people that try to use LLMs for things outside their domain of expertise that really cause the problems.

[–] InternetCitizen2@lemmy.world 6 points 7 hours ago

This is a big point. People need to understand that LLMs are more like a fancy graphing calculator: very good, able to handle many things, but it's on you to understand why the calculation is meaningful. At a certain point no one wants to see your long division or your factorials; we want the results, and for students and professionals to focus on the concept.

[–] NotMyOldRedditName@lemmy.world 3 points 7 hours ago* (last edited 7 hours ago) (1 children)

It's the people that try to use LLMs for things outside their domain of expertise that really cause the problems.

That seems too general. I'm a mobile developer, and sometimes I need a simple script outside my knowledge area. I recently needed to scrape a website, not for anything serious, but to save time. Claude wrote the script and it works. It's probably trash code, but it works and it helped. You wouldn't want me using Claude for important work outside my specific area of focus either, though; I'm sure I'd cause problems.
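The commenter's script isn't shown, but a throwaway scraper in that spirit might look something like this. Everything here is illustrative: the page content is made up, and a real script would fetch the HTML over the network instead of using an inline string. It sticks to the standard library's `html.parser`.

```python
# A disposable link scraper, sketched with only the Python standard library.
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


# In a real script this string would come from urllib.request.urlopen(...).
page = '<ul><li><a href="/a">A</a></li><li><a href="/b">B</a></li></ul>'

collector = LinkCollector()
collector.feed(page)
print(collector.links)  # -> ['/a', '/b']
```

"Probably trash code, but it works" describes this kind of script well: no error handling, no politeness delays, fine for a one-off.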

[–] Newsteinleo@infosec.pub 1 points 1 hour ago

I'm talking about accountants who now think they can create software, or engineers who think they can now write legal briefs for court.

[–] null@lemmy.zip 6 points 10 hours ago

Clickbait got me. No mention of "Yes copilot" which I assumed was a joke anyway.

[–] oyzmo@lemmy.world 2 points 8 hours ago

👆🏻true

[–] peacefulpixel@lemmy.world 26 points 11 hours ago

"yes to copilot no to AI slop" lol lmfao

[–] gandalf_der_12te@discuss.tchncs.de 26 points 13 hours ago* (last edited 13 hours ago) (1 children)

I agree. If AI becomes outlawed, it will simply be used without other people knowing about it.

This approach, at least, means that people will label AI-generated code as such.

[–] emmy67@lemmy.world 18 points 13 hours ago

Maybe. There's still strong disapproval around it. I can imagine many will still hide it.

[–] null@lemmy.org 37 points 15 hours ago (4 children)

Ah, the solution that recognizes there's no way to eliminate AI from the supply chain after it's already been introduced.

[–] Blue_Morpho@lemmy.world 215 points 19 hours ago (9 children)

The title of the article is so extraordinarily wrong that it makes it clickbait.

There is no "yes to copilot"

It is only a formalization of what Linus said before: all AI use is fine, but a human is ultimately responsible.

" AI agents cannot use the legally binding "Signed-off-by" tag, requiring instead a new "Assisted-by" tag for transparency"

The only mention of copilot was this:

"developers using Copilot or ChatGPT can't genuinely guarantee the provenance of what they are submitting"

This remains a problem that the new guidelines don't resolve: even when AI is used as a tool and a human reviews the output, the code the LLM produced could still have come from non-GPL sources.
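For context, kernel patches carry their attestations as trailer lines at the end of the commit message, so the distinction the article describes would look roughly like this. This is a sketch: the subject line, developer name, and tool name are made up, and the exact Assisted-by wording is whatever the kernel's own documentation ends up specifying.

```
iio: fix off-by-one in example buffer wraparound

<patch description>

Assisted-by: <name of the AI tool used>
Signed-off-by: Jane Developer <jane@example.org>
```

The point of the split is that Signed-off-by certifies the legal right to submit the code (the Developer's Certificate of Origin), which only a human can do, while Assisted-by merely discloses that a tool helped produce it.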

[–] Fmstrat@lemmy.world 1 points 1 hour ago

Because even using AI as a tool and having a human review it still means the code the LLM output could have come from non GPL sources.

I get why they're letting this slide, though: you don't know the provenance of that Stack Overflow snippet, either.

[–] lechekaflan@lemmy.world 2 points 3 hours ago

The title of the article is so extraordinarily wrong that it makes it clickbait.

It's a pain in the ass with some of those fucking tech/video/showbiz news outlets, and then the rules in some forums where you cannot make "editorialized" post titles, even though it's so tempting to correct the awful title.

[–] theherk@lemmy.world 131 points 20 hours ago (21 children)

Seems like a reasonable approach. Make people be accountable for the code they submit, no matter the tools used.

[–] catlover@sh.itjust.works 51 points 17 hours ago (9 children)

I'd still be highly sceptical of pull requests with code created by LLMs. What I've noticed personally is that the author of such a PR doesn't even read the code, and I have to go through all the slop.
