this post was submitted on 23 Jan 2024
138 points (93.7% liked)

Technology

[–] elrik@lemmy.world 6 points 10 months ago* (last edited 10 months ago) (1 children)

It's a really interesting question, and I imagine scaling a distributed solution like that on commodity hardware over relatively high-latency network connections would be problematic in several ways.

There are several orders of magnitude between the population of people who would participate in providing the service and those who would consume the service.

Those populations aren't local to each other. In other words, a search would likely have to span the entire global network, especially given the size of the indexed data.

To put some rough numbers together for perspective, for search nearing Google's scale (a back-of-envelope sketch of the arithmetic follows the list):

  • A single copy of a 100PB index would require 10,000 network participants each contributing 10TB of reliable and fast storage.

  • 100K searches/sec, if evenly distributed and each resolvable by a single node, works out to at least 10 req/sec/node. Realistically it's much higher than that, depending on how many copies of the index exist, how requests are routed, and how many nodes participate in a single query (probably on the order of hundreds). Of that 10TB of storage per node, a substantial fraction would need to be kept in memory to sustain the hundreds of req/sec a node would likely see on average.

  • The index needs to be updated. Suppose the index is 1/10th the size of the crawled data and the oldest data is 30 days old (which is pretty stale for popular sites). That's at least 33PB of data to crawl per day, or roughly 3,000Gbps of sustained ingestion. Spread across those 10,000 nodes, that's about 0.3Gbps of continuous ingest per node, so realistically each node needs a dedicated ~1Gbps connection just to index fresh data.
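
A quick sketch of that arithmetic, using only the assumptions stated above (index size, per-node storage, query rate, freshness window); treat it as an illustration, not a design:

```python
# Back-of-envelope math for the numbers above.
# Every constant is an assumption from the comment, not measured data.

INDEX_SIZE_PB = 100            # assumed size of a single copy of the index
STORAGE_PER_NODE_TB = 10       # assumed reliable, fast storage per participant
SEARCHES_PER_SEC = 100_000     # assumed global query rate
INDEX_TO_CRAWL_RATIO = 0.1     # index assumed to be 1/10th the crawled data
MAX_STALENESS_DAYS = 30        # oldest tolerated data in the index

# Participants needed to hold one full copy of the index
nodes = INDEX_SIZE_PB * 1_000 / STORAGE_PER_NODE_TB               # PB -> TB
print(f"nodes per index copy:       {nodes:,.0f}")                # 10,000

# Lower bound on per-node query load (even distribution, single-node answers)
print(f"req/sec/node (lower bound): {SEARCHES_PER_SEC / nodes:.0f}")  # 10

# Crawl volume required to keep nothing in the index older than 30 days
crawl_pb_per_day = INDEX_SIZE_PB / INDEX_TO_CRAWL_RATIO / MAX_STALENESS_DAYS
crawl_gbps = crawl_pb_per_day * 1e6 * 8 / 86_400                  # PB/day -> Gbit/s
print(f"crawl volume:               {crawl_pb_per_day:.0f} PB/day "
      f"≈ {crawl_gbps:,.0f} Gbps sustained")
print(f"ingest per node:            {crawl_gbps / nodes:.2f} Gbps")   # ~0.3
```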

These are all rough numbers, but this is not something the vast majority of people have the hardware or the connection to support.

You'd also need many copies of this setup around the world for redundancy and lower latency. You'd want to protect the network against DDoS attacks, abuse, and malicious participants. And you'd need some form of organizational oversight to handle removal of certain data.

Probably the best way to support such a distributed system in an open manner would be to have universities and other public organizations run the hardware and support the network (at a non-trivial expense).

[–] UNWILLING_PARTICIPANT@sh.itjust.works 4 points 10 months ago (2 children)

So this is starting to sound more like something that needs to be explicitly paid for in some way (as opposed to just crowdsourcing personal hardware), at least if we want to maintain the same level of service.

[–] Gradually_Adjusting@lemmy.world 2 points 10 months ago

It seems like there are others in the thread with good options.

[–] elrik@lemmy.world 1 points 10 months ago

Yes, at least currently. There may be better options as multi-gigabit internet access becomes more commonplace and commodity hardware gets faster.

The other options mentioned in this thread are basically toys in comparison (they either proxy results from existing search engines or operate at a scale of less than a few terabytes).