this post was submitted on 16 May 2026
53 points (98.2% liked)

top 8 comments
[–] taiyang@lemmy.world 8 points 41 minutes ago* (last edited 38 minutes ago)

Sure as hell ain't my students; it's been a steady decline since ChatGPT came out, and I think I may have failed more students than ever over unfinished projects. You can't GPT the semester-long project; there's a paper trail and data to collect, so it becomes super clear who is AI-brained now...

Edit: PS, grade inflation has been a thing for a few decades now, btw; the As aren't the problem so much as the mush brain.

[–] OwOarchist@pawb.social 10 points 1 hour ago

The good students are still getting A grades naturally. And the bad students are getting A grades with ChatGPT. A grades for everybody! (Until we get to the closed-book, in-person test at the end of the year...)

[–] LodeMike@lemmy.today 10 points 1 hour ago (3 children)

Education needs to change. Including punishment for using LLMs.

[–] stardreamer@lemmy.blahaj.zone 5 points 31 minutes ago

We allow LLMs on all of our homework, as long as you can solve the problems in the indicated way and arrive at a reasonable answer.

In case you're not sure what the "indicated way" is, every homework problem has a matching practice question with a detailed step-by-step solution; change the numbers/equations a bit and you get the points.

What we've noticed is that homework averages have climbed year after year, and especially this year. At the same time, students are bringing in details that we explicitly didn't go over in lecture and putting them on the homework (e.g. delayed branching in Computer Architecture, a quirk of MIPS that even assembly programmers usually don't have to deal with). None of these details are mentioned in lecture or the practice homework, or, in a few cases, they're mentioned only with the explicit wording "do not worry about this now".
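
(For anyone who hasn't seen the quirk: here's a minimal, hypothetical MIPS fragment, not taken from any course material, showing what the branch delay slot looks like. Register names and values are purely illustrative.)

```
# Hypothetical fragment, for illustration only: classic MIPS exposes a
# branch delay slot, i.e. the instruction right after a branch always
# executes, whether or not the branch is taken.
.set noreorder                   # stop the assembler from reordering/filling delay slots
        beq   $t0, $zero, skip   # branch if $t0 == 0
        addi  $s0, $s0, 1        # delay slot: runs even when the branch IS taken
skip:
        add   $s1, $s1, $s0      # normal execution continues here
```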

We can only assume people are pasting the homework into an LLM and copying the result straight down. The latest exam had a question where students were asked to analyze a specific chunk of assembly code and deduce certain properties of it. Approximately 20-30% of the students didn't know the FORMAT to answer it, despite it literally being item 1 on last week's homework.

And when I say format, I don't mean "you must write these exact words or you lose points". It's literally just pointing out that "line A and line B have property X because of attribute Y". Including A, B, X, and Y as shown in the practice homework is enough. But apparently people are too lazy to read a 10-bullet-point answer...

[–] Ghostalmedia@lemmy.world 3 points 30 minutes ago

As someone who works in ed tech these days, I’m kind of down for them as a study tool. For example, synthesizing notes and turning them into flashcards, practice tests, etc. I find that stuff to be suuuper handy if I’m trying to learn something.

But for cheating, yah, fuck that noise. A lot of classes are moving back to pencil and paper because of this, and I totally support that.

[–] OwOarchist@pawb.social 11 points 1 hour ago (1 children)

Before you can punish for using LLMs, you need to be able to reliably detect the use of LLMs, including guarding against false positives.

Current AI checkers are woefully inadequate and prone to errors.

[–] grue@lemmy.world 2 points 4 minutes ago

> Before you can punish for using LLMs, you need to be able to reliably detect the use of LLMs, including guarding against false positives.

You can tell they're using an LLM if they have a computer out during the pen-and-paper test.

[–] panda_abyss@lemmy.ca 10 points 1 hour ago

Surely this won’t cause any problems at all