Excuse me, what? You think Huggingface is hosting hundreds of checkpoints, each of which is a multiple of the size of its training data, which would be on the order of terabytes or petabytes of disk space? I don't know if I agree with the compression argument myself, but for other reasons; your retort is objectively false.
Just taking GPT-3 as an example: yes, its raw training set was 45 terabytes, but that set was filtered and processed down to about 570 GB, and GPT-3 was only actually trained on that 570 GB. The model itself is about 700 GB. Much of an LLM's generalized intelligence comes from abstracting what it learned to other contexts.
*Edit: Did some more looking, and that model size estimate assumes 32-bit floats. The weights are actually 16-bit, so the model size is about 350 GB... technically some compression after all!
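For anyone who wants the back-of-the-envelope math, here's a quick sketch. It assumes GPT-3's published 175 billion parameter count (that figure isn't stated above, so treat it as an assumption); the sizes are just parameters times bytes per parameter:

```python
# Rough model-size math, assuming GPT-3's ~175 billion parameters
# (parameter count is an assumption, not stated in the comment above).
params = 175e9

bytes_fp32 = params * 4  # 32-bit floats: 4 bytes per parameter
bytes_fp16 = params * 2  # 16-bit floats: 2 bytes per parameter

training_set_gb = 570    # filtered/processed training set size, per the comment

print(f"fp32 weights: ~{bytes_fp32 / 1e9:.0f} GB")  # ~700 GB
print(f"fp16 weights: ~{bytes_fp16 / 1e9:.0f} GB")  # ~350 GB
print(f"fp16 weights vs. training set: {bytes_fp16 / 1e9 / training_set_gb:.2f}x")  # ~0.61x
```

So at 16-bit precision the weights come out to roughly 0.6x the filtered training set, which is where the "technically some compression" quip comes from.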