This is a good use case for AI, in my opinion.
Yep, using it for menial zero/low-impact grunt work that nobody wants to do.
Using AI to find errors that can then be independently verified sounds reasonable.
The danger would be in assuming that it will find all errors, or that an AI once-over would be "good enough". That, after all, is what the wealthiest AI proponents are most interested in: a fully AI-driven process with as few costly humans as possible.
The lesser dangers would be (1) the human using the tool losing or weakening their own ability to find bugs without external help, and (2) the AI flagging something that isn't a bug, and the human "fixing" it without understanding that it wasn't wrong in the first place.
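To make that second danger concrete, here's a hypothetical sketch (my own example, not from the article): code an AI reviewer might flag as needlessly complicated, where the "obvious" simplification would quietly introduce a vulnerability.

```python
import hmac

# A reviewer (human or AI) might suggest replacing this with the
# "simpler" `return a == b`. But `==` short-circuits on the first
# mismatched byte, leaking timing information about secret values;
# the constant-time comparison is deliberate, not a mistake.
def tokens_match(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)
```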
AI reviews don’t replace maintainer code review, nor do they relieve maintainers from their due diligence.
I can't help but be a bit skeptical when reading something like this. To me it's akin to being required to do calculations by hand while a calculator sits right beside you. For now, the technology might not be considered sufficiently trustworthy, but what if the clanker starts spitting out conclusions that match a maintainer's, like 99% of the time? Wouldn't (partial) automation of the process become extremely tempting, especially once the stack of pull requests starts piling up (because of vibecoding)?
Such a policy would be near-impossible to enforce anyway. In fact, we’d rather have them transparently disclose the use of AI than hide it and submit the code against our terms. According to our policy, any significant use of AI in a pull request must be disclosed and labelled.
And how exactly do you enforce that? It seems like you're just shifting the problem.
Certain more esoteric concerns about AI code being somehow inherently inferior to “real code” are not based in reality.
I mean, there are hallucination concerns, and there are licensing conflicts. Sure, people can also copy code from other projects with incompatible licenses, but someone without programming experience is less likely to do so than when vibecoding with a tool trained directly on such material.
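As a hypothetical sketch of the hallucination problem (my example, not the article's): an LLM can confidently emit a call to an API that simply doesn't exist, and someone without programming experience has no way to spot it before the code actually runs.

```python
import json

def load_config(path: str) -> dict:
    with open(path) as f:
        config = json.load(f)
    # An LLM might confidently insert the line below. There is no
    # `json.validate` in Python's standard library, so it would raise
    # AttributeError the first time it runs:
    # json.validate(config)
    return config
```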
Malicious and deceptive LLMs are absolutely conceivable, but that would bring us back to the saboteur.
If Microsoft itself were the saboteur, you'd be fucked. They know the maintainers, because GitHub is Microsoft property, and so is the proprietary AI model implemented directly in the toolchain. A malicious version of Copilot could, hypothetically, be supplied to maintainers, specifically targeting this exploit. Microsoft is NOT your friend; it works closely with government organizations, which are increasingly interested in compromising consumer privacy.
For now, I do believe this to be a sane approach to AI usage, and I believe developers should have the freedom to choose their preferred environment. But the active use of such tools should come with a (healthy) dose of critique, especially with regard to privacy-oriented software, a field where AI has generally been rather invasive.