I guess even smart people can make stupid decisions. Probably financially motivated decisions too.
Linux kernel being written by Microsoft's AI.
Bad actors submitting garbage code aren't going to read the documentation anyway, so the kernel should focus on holding human developers accountable rather than trying to police the software they run on their local machines.
"Guns don't kill people. People kill people"
Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.
The author should elaborate on how exactly AI is like "a specific brand of keyboard". Last I checked a keyboard only enters what I type, without hallucinating 50 extra pages. And if AI, a tool that generates content, is like "a specific brand of keyboard", does that mean my brain is also a "specific brand of keyboard"?
I get their point. If you want to create good code by having AI create bad code and then spending twice the time to fix it, feel free to do that. But I'm in favor of a complete ban.
The keyboard thing is sort of a parable: it is as difficult to determine whether code was generated in part by AI as it is to determine what keyboard was used to write it.
AI is a useful tool for coding as long as it's being used properly. The problem isn't the tool, the problem is the companies who scraped the entire internet, trained LLM models, and then put them behind paywalls with no options to download the weights so that they could be self-hosted. Brazen, unaccountable profiteering off of the goodwill of many open source projects without giving anything back.
If LLMs were community-trained on available, open-source code with weights freely available for anyone to host there wouldn't be nearly as much animosity against the tech itself. The enemy isn't the tool, but the ones who built the tool at the expense of everyone and are hogging all the benefits.
Last I checked a keyboard only enters what I type
I've had (broken) keyboard "hallucinate" extra keystrokes before, because of stuck keys. Or ignore keypresses. But yeah, that means the keyboard is broken.
I am the c/fuck_ai person, but at this point I've made peace with the fact that we can't avoid it. I still don't want it doing artsy stuff (image gen, video gen), or being used blindly in critical work, because humans should be the ones doing that or providing constant oversight. I think the team's logic is correct here: there is no way to know whether code came from an LLM or a human unless something in it screams LLM, or the contributor explicitly mentions it. Mandating the latter seems like a reasonable move for now.
I consider myself more pro-AI than not, but I'm certainly not a zealot, and I mostly agree with the take that it shouldn't be used in artistic pursuits. However, I love using AI to help me create art. It can give great critiques, often good advice on how to improve, and is great for rapid experimentation and prototyping. I actually used it this weekend to see what a D&D mini might look like with different color schemes before painting it. I could have done the same with GIMP, but it would have taken much longer for worse results, for what was ultimately just a brainstorming session. How do you feel about my AI usage from your perspective? I suppose from an energy conservation perspective all of it was bad, but I'm more interested in a less trivial take.
Yes, the energy consumption is bad. My main gripe about LLM-generated art is that it will not be original. It will use training data from uncredited artworks to generate it. Art is usually made by humans to express or convey something in a creative way; LLMs fail at that. What LLMs can actually be helpful at is making learning art more accessible to everyone. Art schools and private art classes can be expensive, so this lowers the barrier to entry.
As for your use of generated art: it might be really beautiful, but it will be very difficult to maintain that style, and even more difficult to convince anyone that it is your style. The artist doesn't get much recognition with LLM-generated art. Using it for critique also seems misguided, because LLMs will always try to give an objective view rather than a subjective one. Your art won't trigger an emotion in it, so it might just say it is bad, or "do this to make it more understandable"; that's where you lose as an artist.
My mom likes to paint as a hobby. What she does is search for stuff on Pinterest (which is mostly AI-generated these days). She uses it as inspiration to do it in her own style, maybe giving it some spin. She keeps all of it for herself.
Copilot? You mean the AI with terms of service that are in bold and explicit: "for entertainment purposes only"?
Which is why it's in the title and not the article? EntertainBait?
I suppose GitHub Copilot is meant, which is a different thing.
Different how? Isn't GitHub owned by Microsoft?
There are like 70 copilots
Ok, so there are 70-81 copilots, github is one of them.
Why is GitHub Copilot a different thing in the context of the reply being responded to?
The hell. How can they expect people to understand? They plan to sell 100 things under the same name and market it as one big AI, when it's hundreds of different, unrelated things?
Most of those are bundled; no one is buying Copilot for OneNote, they just get it when they get the rest of that suite.
There are so many reasons not to include any AI generated code.
AI is here, another tool to use...the correct way. Very reasonable approach from Torvalds.
I don't have a problem with LLMs as much as the way people use them. My boss has offloaded all of his thinking to LLMs to the point he can't fix a sentence in a slide deck without using an LLM.
It's the people that try to use LLMs for things outside their domain of expertise that really cause the problems.
This is a big point. People need to understand that LLMs are more like a fancy graphing calculator: they are very good and handle many things, but it's on you to understand why the calculation is meaningful. At a certain point no one wants to see your long division or factorials. We want the results, and for students and professionals to focus on the concept.
It's the people that try to use LLMs for things outside their domain of expertise that really cause the problems.
That seems too general. I'm a mobile developer, and sometimes I need a simple script outside my knowledge area. I needed to scrape a website recently, not for anything serious, just to save me time. Claude wrote it and it works. It's probably trash code, but it works and it helped. But you wouldn't want me using Claude for important work outside my specific area of focus either, or I'm sure I'd cause problems.
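For context, the kind of throwaway scraper described above is often just a few lines of standard-library Python. This is a minimal sketch, not the commenter's actual script: it extracts all link targets from a page's HTML using `html.parser`. The sample HTML is hypothetical; a real site would need its own selectors (and a check of its robots.txt and terms of use).

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect every href attribute seen on <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html: str) -> list[str]:
    """Return all link targets found in an HTML string."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

if __name__ == "__main__":
    sample = '<p><a href="/one">one</a> <a href="/two">two</a></p>'
    print(extract_links(sample))  # ['/one', '/two']
```

Exactly the kind of "probably trash, but it works" utility code where an LLM shines: low stakes, easy to verify by running it.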
I'm talking about accountants who now think they can create software, or engineers who think they can now write legal briefs for court.
Clickbait got me. No mention of "Yes copilot" which I assumed was a joke anyway.
👆🏻true
"yes to copilot no to AI slop" lol lmfao
I agree. If AI becomes outlawed, it will simply be used without other people knowing about it.
This approach, at least, means that people will label AI-generated code as such.
Maybe. There's still strong disapproval around it. I can imagine many will still hide it.
Ah, the solution that recognizes there's no way to eliminate AI from the supply chain after it's already been introduced.
The title of the article is extraordinarily wrong, which makes it clickbait.
There is no "yes to copilot"
It is only a formalization of what Linus said before: all AI use is fine, but a human is ultimately responsible.
" AI agents cannot use the legally binding "Signed-off-by" tag, requiring instead a new "Assisted-by" tag for transparency"
The only mention of copilot was this:
"developers using Copilot or ChatGPT can't genuinely guarantee the provenance of what they are submitting"
This remains a problem that the new guidelines don't resolve. Because even using AI as a tool and having a human review it still means the code the LLM output could have come from non GPL sources.
Because even using AI as a tool and having a human review it still means the code the LLM output could have come from non GPL sources.
I get why they are passing this by though, since you don't know the provenance of that Stack Overflow snippet, either.
The title of the article is extraordinarily wrong, which makes it clickbait.
It's a pain in the ass with some of those fucking tech/video/showbiz news outlets, and then the rules in some fora where you cannot make "editorialized" post titles, even though it's so tempting to correct the awful titling.
Seems like a reasonable approach. Make people be accountable for the code they submit, no matter the tools used.
I'd still be highly sceptical of pull requests with code created by LLMs. What I've noticed is that the author of such a PR often doesn't even read the code, and I have to go through all the slop.