this post was submitted on 16 May 2024

Technology


New development policy: code generated by a large language model or similar technology (e.g. ChatGPT, GitHub Copilot) is presumed to be tainted (i.e. of unclear copyright, not fitting NetBSD's licensing goals) and cannot be committed to NetBSD.

https://www.NetBSD.org/developers/commit-guidelines.html

[–] best_username_ever@sh.itjust.works 12 points 6 months ago (2 children)

It’s actually simple to detect: if the code sucks or looks like it was written by a bad programmer, but the docstrings are perfect, it’s AI. I’ve seen this more than once and it never fails.
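
To illustrate the pattern (a made-up C sketch, not code from any real submission): the doc comment is flawless, the logic under it is subtly wrong.

```c
#include <stddef.h>

/**
 * Computes the arithmetic mean of the first n elements of values.
 *
 * @param values  Pointer to an array of doubles; must not be NULL.
 * @param n       Number of elements to average; must be greater than zero.
 * @return        The arithmetic mean of the first n elements.
 */
double mean(const double *values, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i <= n; i++)  /* off-by-one: reads one element past the end */
        sum += values[i];
    return sum / n;
}
```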

[–] Zos_Kia@lemmynsfw.com 4 points 6 months ago (3 children)

I'm confused: do people really use Copilot to write the whole thing and ship it without re-reading?

I literally did an interview that went like this:

  1. The applicant used Copilot to generate a nontrivial amount of the code.
  2. Copilot generated the wrong code for a key part of the algorithm; the applicant didn't notice.
  3. We pointed it out, and they fixed it.
  4. They then had to refactor the code a bit and ended up making the exact same mistake again.
  5. We pointed out the error again...

And that's in an interview, where you should be extra careful to make a good impression...

[–] neclimdul@lemmy.world 4 points 6 months ago

Not specific to AI, but someone flat-out told me they didn't even run the code to see if it worked. They didn't understand why I would do that, or expect it, before accepting code. This was someone submitting code to a widely deployed open-source project.

So I would expect the answer is yes, or will very soon be yes.

[–] best_username_ever@sh.itjust.works 1 points 6 months ago* (last edited 6 months ago) (1 children)

Around me, most beginners who use these tools don't have the skills to understand or even test what they get. They don't want to learn, I guess; ChatGPT is easier.

I recently suspected a new guy was using ChatGPT because everything seemed perfect (grammar, code formatting, classes built with design patterns, etc.), but the code was very wrong. So I did some pair programming with him and asked if we could debug his simple application. He didn't know where the debug button was.

[–] Zos_Kia@lemmynsfw.com 2 points 6 months ago (1 children)

Guilty as charged: ten years into the job and I never learned to use a debugger, lol.

Seriously though, that's amazing to me; I've never met one of those... I guess 95% of them will churn out of the industry in less than five years...

[–] Tja@programming.dev 1 points 6 months ago* (last edited 6 months ago)

Debug button? There is a button that inserts 'printf("%s:%d boop!\n", __FUNCTION__, __LINE__);'?
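
For the record, a minimal sketch of what such a button could insert, wrapped in a hypothetical BOOP() macro (the name is made up for the joke):

```c
#include <stdio.h>

/* Hypothetical "boop" macro: prints the current function and line.
 * __func__ is the standard C99 spelling of GCC's __FUNCTION__. */
#define BOOP() printf("%s:%d boop!\n", __func__, __LINE__)

int main(void)
{
    BOOP(); /* prints something like "main:9 boop!" */
    return 0;
}
```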

[–] TimeSquirrel@kbin.social 1 points 6 months ago* (last edited 6 months ago) (1 children)

So your results are biased, because you're not going to see the decent programmers who are just using it to take mundane tasks off their back (like generating boilerplate functions) while staying in control of the logic. You're only ever going to catch the noobs trying to cheat without fully understanding what it is they're doing.

> You're only ever going to catch the noobs.

That’s the fucking point. Juniors must learn, not copy-paste random stuff. I don’t care what seniors do.