this post was submitted on 24 May 2024
291 points (97.7% liked)

Technology

archive.is

Should we trust an LM to write legal definitions — in this case, of a deepfake? It seems the state rep. is unable to proofread the model output, as he was "really struggling with the technical aspects of how to define what a deepfake was."

[–] webghost0101@sopuli.xyz 50 points 6 months ago* (last edited 6 months ago) (2 children)

I understand the irony. But let's not pretend they blindly used the output, or even generated a full page. It was a specific section providing a technical definition of "what is a deepfake":

“I was really struggling with the technical aspects of how to define what a deepfake was. So I thought to myself, ‘Well, why not ask the subject matter expert (I do not agree with that wording, lol), ChatGPT?’” Kolodin said.

The legislator from Maricopa County said he “uploaded the draft of the bill that I was working on and said, you know, please, please put a subparagraph in with that definition, and it spit out a subparagraph of that definition.”

“There’s also a robust process in the Legislature,” Kolodin continued. “If ChatGPT had effed up some of the language or did something that would have been harmful, I would have spotted it, one of the 10 stakeholder groups that worked on or looked at this bill, the ACLU would have spotted, the broadcasters association would have spotted it, it would have got brought out in committee testimony.”

But Kolodin said that portion of the bill fared better than other parts that were written by humans. “In fact, the portion of the bill that ChatGPT wrote was probably one of the least amended portions,” he said.

I do not agree with his claim that any mistake made by AI could equally have been made by humans. In my experience, the reasoning, and the errors in reasoning, are quite different. But the way ChatGPT was used here is absolutely fair.

[–] circuscritic@lemmy.ca 11 points 6 months ago* (last edited 6 months ago)

No kidding. When I read that, my first thought was, "He's clearly at least above the median intelligence of his fellow Arizona GOP reps, if not in the top 10% of their entire conference."

Anyone who read the article AND has experience with the Arizona GOP probably thought the same thing.

The Arizona GOP collects some of the dumbest people alive.

[–] Liz@midwest.social 10 points 6 months ago (1 children)

I get the feeling this will generally be the peak of generative AI: used for assistance when needed, and with lots of oversight. The problem is that not all people bother to check the AI's work.

[–] slurpinderpin@lemmy.world 8 points 6 months ago

That’s the point, literally. These tools don’t make some idiot all of a sudden a genius. They’re for already competent experts to expedite their work. They are the oversight.