this post was submitted on 01 Apr 2024
199 points (97.2% liked)

Technology


Several big businesses have published source code that incorporates a software package previously hallucinated by generative AI.

Not only that, but someone, having spotted this recurring hallucination, turned that made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI's bad advice, we've learned. Had the package been laced with actual malware, rather than being a benign test, the results could have been disastrous.
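One practical takeaway: before adding a dependency an AI suggests, at least check that the name is published at all. Here is a minimal sketch in Python against PyPI's public JSON API (the helper name is ours, not from the article; and as the article itself shows, existence alone proves little once someone squats the hallucinated name):

```python
# Minimal sketch: check whether an AI-suggested package name is even
# published on PyPI before installing it. Standard library only;
# PyPI's JSON API returns 404 for unknown project names.
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published PyPI project."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # not on PyPI -- possibly hallucinated
            return False
        raise

if __name__ == "__main__":
    # Caveat from the article: "huggingface-cli" now *does* exist on
    # PyPI, precisely because a researcher registered the hallucinated
    # name -- so this check is a floor, not a vetting process.
    print(exists_on_pypi("huggingface-cli"))
```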

top 12 comments
[–] penquin@lemm.ee 53 points 7 months ago (3 children)

I asked several AIs to write unit and integration tests for my code, and they literally failed every single time. Some produced straight-up garbage; others came up with shit I don't even have in my code. AI is really good if you know what you're doing and can spot what's right and what's wrong. Blindly taking its code is just useless, and dangerous too.

[–] residentmarchant@lemmy.world 18 points 7 months ago (1 children)

I find that if I write one or two tests on my own, then tell Copilot to complete the rest of them, it's like 90% correct.

Still not great but at least it saves me typing a bunch of otherwise boilerplate unit tests.
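That workflow looks something like the sketch below: hand-write a couple of tests to establish the shape, then let Copilot pattern-match the rest. Everything here is illustrative; `slugify` is a stand-in function, not anything from the thread:

```python
# Illustrative pytest sketch of the "seed Copilot with two tests" workflow.
import re

def slugify(text: str) -> str:
    """Hypothetical function under test (a stand-in for your own code)."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Hand-written seed tests: these establish naming and assertion style.
def test_slugify_lowercases():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"

# With the two examples above in the buffer, Copilot will usually suggest
# further cases in the same shape (empty input, repeated separators, etc.).
# Per the commenters' point, each suggestion still needs human review.
```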

[–] penquin@lemm.ee 8 points 7 months ago

I actually haven't tried it this way. I just asked it to write the tests for whatever class I was on and it started spitting some stuff at me. I'll try your way and see.

[–] datavoid@lemmy.ml 13 points 7 months ago

"Remember kids, you don't need to learn to program!"

[–] anlumo@lemmy.world 3 points 7 months ago

It’s a matter of learning how to prompt it properly. It’s not a human and thus needs a different kind of instructions.

[–] conciselyverbose@sh.itjust.works 27 points 7 months ago (1 children)

Imagine that.

"Writing" code you don't understand is dangerous.

[–] anteaters@feddit.de 2 points 7 months ago

AI is the funniest shit - even better than NFTs.

[–] DingoBilly@lemmy.world 22 points 7 months ago (1 children)

And someone recently told me the xz exploit doesn't matter because no developer is stupid enough to install beta releases on prod systems, lol.

Laziness and/or low skills leads to a lot of IT failures.

[–] lambda_notation@lemmy.ml 1 points 7 months ago

Another way of looking at it: if a release exists, it will be deployed to prod.

[–] 0xvalentin@lemmy.sdf.org 9 points 7 months ago

I am lazy too, and have admittedly copy-pasted code from an LLM, but adding a new dependency without looking at a package registry seems crazy to me.
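Looking at the registry first can be as little as one API call. A rough sketch against PyPI's JSON API follows; the fields shown are real, but what counts as suspicious (brand-new project, no release history, empty summary) is a judgment call, not anything the article prescribes:

```python
# Rough sketch: pull a candidate dependency's PyPI metadata and eyeball
# it (summary, links, release history) before running `pip install`.
# Standard library only; field names match PyPI's JSON API.
import json
import urllib.request

def pypi_snapshot(name: str) -> dict:
    """Fetch a few vetting-relevant fields for a PyPI project."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    info = data["info"]
    return {
        "name": info["name"],
        "summary": info["summary"],
        "homepage": info["home_page"] or info["project_url"],
        "release_count": len(data["releases"]),
    }

if __name__ == "__main__":
    # An established project shows many releases and real links;
    # a freshly squatted hallucination usually does not.
    print(pypi_snapshot("requests"))
```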

[–] autotldr@lemmings.world 4 points 7 months ago

This is the best summary I could come up with:


Several big businesses have published source code that incorporates a software package previously hallucinated by generative AI.

Not only that, but someone, having spotted this recurring hallucination, turned that made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI's bad advice, we've learned.

Security researcher Bar Lanyado created huggingface-cli in December after seeing it repeatedly hallucinated by generative AI; by February this year, Alibaba was referring to it in GraphTranslator's README instructions rather than the real Hugging Face CLI tool.

Last year, through security firm Vulcan Cyber, Lanyado published research detailing how one might pose a coding question to an AI model like ChatGPT and receive an answer that recommends the use of a software library, package, or framework that doesn't exist.

The willingness of AI models to confidently cite non-existent court cases is now well known and has caused no small amount of embarrassment among attorneys unaware of this tendency.

As Lanyado noted previously, a miscreant might use an AI-invented name for a malicious package uploaded to some repository in the hope others might download the malware.


The original article contains 1,143 words, the summary contains 190 words. Saved 83%. I'm a bot and I'm open source!

[–] aCatNamedVirtute@lemmy.world 2 points 7 months ago

Sorry, what?