this post was submitted on 29 May 2024
1673 points (99.6% liked)

Technology

uis@lemm.ee 7 points 5 months ago (13 children)
vithigar@lemmy.ca 26 points 5 months ago* (last edited 5 months ago) (5 children)

That wouldn't distribute the load of storing it, though. Anyone on the torrent would need to set aside 100 PB of storage for it, which is clearly never going to happen.

You'd want a federated (or otherwise distributed) storage scheme where thousands of people could each contribute a smaller portion of storage, while also being accessible to any federated client. 100,000 clients each contributing 1TB of storage would be enough to get you one copy of the full data set with no redundancy. Ideally you'd have more than that so that a single node going down doesn't mean permanent data loss.
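The capacity math in that comment is easy to sanity-check. A minimal sketch (the replication-factor helper is my own illustration, not anything from the thread):

```python
PETABYTE = 10**15
TERABYTE = 10**12

DATASET = 100 * PETABYTE   # the 100 PB archive discussed in the thread
PER_CLIENT = 1 * TERABYTE  # storage each participant contributes

def clients_needed(replicas: int) -> int:
    """Clients required to hold `replicas` full copies of the dataset."""
    return replicas * DATASET // PER_CLIENT

print(clients_needed(1))  # 100000 — one copy, no redundancy
print(clients_needed(3))  # 300000 — pieces survive any single node going down
```

With 3x replication, each piece lives on three independent nodes, so losing one node costs no data, only redundancy.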

uis@lemm.ee 6 points 5 months ago* (last edited 5 months ago) (2 children)

That wouldn't distribute the load of storing it, though. Anyone on the torrent would need to set aside 100 PB of storage for it, which is clearly never going to happen.

Torrents are designed for incomplete storage of data. You can store and verify a few chunks without any problem.

You'd want a federated (or otherwise distributed) storage scheme where thousands of people could each contribute a smaller portion of storage, while also being accessible to any federated client.

Torrents. You may not have the entirety of the data, but you can request what you need from the swarm. The only limitation is that you need to know which chunk contains the data you need.
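This works because torrents hash each piece independently, so a peer holding a single piece can still verify it. A toy sketch of that idea (piece size and payload are made up for illustration; real BitTorrent v2 hashes 16 KiB blocks into a Merkle tree per file, which this simplifies away):

```python
import hashlib

PIECE_SIZE = 16 * 1024  # illustrative piece size

def piece_hashes(data: bytes, piece_size: int = PIECE_SIZE) -> list[bytes]:
    """Split data into fixed-size pieces and SHA-256 each one independently."""
    return [hashlib.sha256(data[i:i + piece_size]).digest()
            for i in range(0, len(data), piece_size)]

def verify_piece(piece: bytes, expected: bytes) -> bool:
    """A peer storing only this one piece can still prove it is intact."""
    return hashlib.sha256(piece).digest() == expected

data = bytes(range(256)) * 1024      # toy 256 KiB payload
hashes = piece_hashes(data)          # published in the torrent metadata
one_piece = data[:PIECE_SIZE]        # a peer stores/serves just this chunk
print(verify_piece(one_piece, hashes[0]))  # True
```

Because verification never requires the full file, partial seeders are first-class citizens of the swarm.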

Ideally you'd have more than that so that a single node going down doesn't mean permanent data loss.

True.

vithigar@lemmy.ca 4 points 5 months ago (1 children)

True. Until you responded I had actually completely forgotten that you can selectively download torrents. It would be nice not to have to manage that manually at the user level, though.

Some kind of bespoke torrent client that managed it under the hood could probably work without having to invent your own peer-to-peer protocol for it. I wonder how long it would take to compute the torrent hash values for 100 PB of data? :D

uis@lemm.ee 1 points 5 months ago* (last edited 5 months ago)

~300 MB/s of SHA-256 (the hash used in BitTorrent v2) on one core of a 13-year-old i5. Newer cores manage about half a gigabyte per second each. At that rate, 100 PB works out to roughly 2×10⁸ seconds: about 6.3 years on one core, or a bit over two years on 3 cores.*

* assuming no additional performance penalty from increased power consumption and memory bandwidth usage

My guess is that storage bandwidth would be the biggest bottleneck.
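A quick back-of-the-envelope check of that estimate, assuming the ~0.5 GB/s per-core SHA-256 rate quoted above:

```python
PETABYTE = 10**15
total_bytes = 100 * PETABYTE     # the 100 PB dataset
per_core_rate = 0.5e9            # ~0.5 GB/s SHA-256 per modern core (assumption)

seconds_one_core = total_bytes / per_core_rate   # 2e8 seconds
days_one_core = seconds_one_core / 86_400

print(f"{days_one_core:.0f} days on one core")    # 2315 days, ~6.3 years
print(f"{days_one_core / 3:.0f} days on 3 cores") # 772 days, ~2.1 years
```

Either way, single-machine hashing is years of work, so in practice the hashing (like the storage) would have to be spread across the swarm.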

Found a relatively old article (in Russian; search it for "openssl" and look at the graph that mentions SHA-512, which is also part of the SHA-2 family) that puts the i7-2500's all-core throughput at slightly over 1 GB/s.
