If the point is to prove that the model contains an encoded version of the original article, and you make the model spit out the entire thing by just giving it the first paragraph or two, I don't see anything wrong with such a proof.
Your previous comment suggested that the entire article (or most of it) was included in the prompt/context, and that the part generated purely by the model was generic enough that it could plausibly have been produced without an encoded/compressed version of the entire article stored somewhere.
Which does not appear to be the case.
I haven't really picked a side, mostly because there just isn't enough evidence. The NYT hasn't provided any of the prompts they used to substantiate their claim, and the OpenAI blog post hints at what happened, but OpenAI is obviously biased.
If the model spits out an entire article when given just a single paragraph, then the NYT has a case. If, as OpenAI suggests, the prompts contained lengthy excerpts and the model merely continued in the same style and format, then I don't think they do.
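For what it's worth, the "single paragraph" test described above is easy to sketch. Here's a rough illustration in Python of what it might look like, assuming the `openai` client library; the article file, the model name, and the paragraph split are placeholder assumptions, not whatever the NYT actually did:

```python
# Rough sketch of the memorization test: give the model only an article's
# opening paragraph and measure how closely its continuation matches the
# real text. All names below are placeholders.
import difflib
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article = open("article.txt").read()          # hypothetical full article text
opening, _, rest = article.partition("\n\n")  # first paragraph vs. the remainder

resp = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": opening}],
    max_tokens=1024,
    temperature=0,  # make the continuation as deterministic as possible
)
continuation = resp.choices[0].message.content or ""

# Similarity between what the model produced and the corresponding slice of
# the real article body; a ratio near 1.0 means near-verbatim reproduction.
ratio = difflib.SequenceMatcher(None, continuation, rest[: len(continuation)]).ratio()
print(f"overlap with the original: {ratio:.2%}")
```

If the ratio comes out near 100% from a one-paragraph prompt, that looks like regurgitation; if it only gets there when the prompt already contains most of the article, that's OpenAI's version of events.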