[–] h3ndrik@feddit.de 14 points 6 months ago* (last edited 6 months ago) (19 children)

That's a nice idea but has some pretty obvious technical drawbacks that aren't discussed in the blog article:

The complexity of most networks grows with the number of connections between the entities, and in a fully meshed setup that number grows roughly quadratically: n entities means up to n(n-1)/2 links. It gets immensely more computationally expensive that way, and you're bound to use a lot of additional network traffic and total CPU power.

And some (a lot of) people like using social media on their phones instead of a computer. You're bound to drain their batteries real fast by moving application logic there.

Other than that I like the general idea. The Fediverse should be more dynamic. Caching and discovery have some big issues in their current form; that should be tackled, and we need technical solutions for it. The current architecture is far from perfect.

Furthermore, if we're talking about making the network smarter at the edge... why move the logic into the browser, which isn't really at the edge? Wouldn't that be an argument for edge routers, like in edge computing? With C2S you have a server on one side and a client on the other, with the edge somewhere in between. If you just flip it, you end up in a different situation, but there's still nothing at the edge where you could introduce some smarts...

[–] rglullis@communick.news 2 points 6 months ago (18 children)

And some (a lot of) people like using social media on their phones instead of a computer. You’re bound to drain their batteries real fast by moving application logic there.

Messaging applications (that need to be online all the time) don't have this issue. Mobile email clients are even more conservative in resource usage. Why would an AP client be any different?

You are not going to be transcoding video or executing complex machine learning analysis on the device. I can reasonably argue that a local-first ActivityPub application would be no different in resource usage than something like a modern XMPP or Matrix client.

[–] h3ndrik@feddit.de 4 points 6 months ago* (last edited 6 months ago) (1 children)

Because with all of those (messaging, email, XMPP, Matrix and ActivityPub) most of the magic happens on the server. Take email, for example. The server takes care of being online 24/7. It provides something like 5 GB of storage for your inbox that you can access from everywhere. It filters messages and does database work so you can have full-text search. Same with messaging: your server coordinates with like 200 other servers so messages from users anywhere get forwarded to you. It keeps everything in sync and caches images so they're available immediately.

That allows the clients/apps to be very simple. A client just needs to maintain one connection to your server, ask every now and then if there's anything new, and query new data/content. Everything else is already taken care of by the server.
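As a rough sketch of how thin that makes the client (the endpoint and response shape here are made up for illustration, not any real server API):

```typescript
// One home server, one endpoint, polled every now and then.
async function checkForNews(since: string): Promise<unknown[]> {
  const url = `https://home.example/api/updates?since=${encodeURIComponent(since)}`;
  const res = await fetch(url, { headers: { Accept: "application/json" } });
  // The server already did the filtering, syncing and caching for us.
  return res.ok ? res.json() : [];
}
```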

OP's suggestion is to change that and move logic into the client/app. But it's not super easy. If the client now needs to maintain those 200 connections at all times instead of just one to see if anyone replied, your phone might drain 200 times as much battery. And requiring the phone to be reachable also comes with a severe penalty. Phones have elaborate mechanisms to save power and sleep most of the time. Any additional network activity requires the processor and the modem to stay active for longer periods of time, and apart from the screen that's one of the major things that draws power.

[–] rglullis@communick.news 3 points 6 months ago (1 children)

What I am proposing is not getting rid of the server, just reducing the amount of functionality that depends on it. You won't be connecting with 200 different servers; you will still have one single node responsible for getting notifications to you.

Regarding storage: I can say from experience that you can have a local-first architecture for structured data that does not blow up the client. At a previous job, we built a messenger app where all client data was stored in PouchDB and synced via a "master" CouchDB. All client views were built from the local data. Of course, media storage went to the cloud, which means that the data itself was only highly compressible text. You can go a looooong way with just 1 GB of storage, which is well within the limits of web storage.
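For illustration, a minimal sketch of that local-first pattern with PouchDB/CouchDB; the database name, URL and document shape are placeholders, not details of the actual app:

```typescript
import PouchDB from "pouchdb"; // browser build works too; types via @types/pouchdb

interface Message {
  _id: string;      // e.g. timestamp-prefixed so ids sort chronologically
  roomId: string;
  author: string;
  body: string;     // highly compressible text; media lives elsewhere
}

const local = new PouchDB<Message>("messages");

// Continuous, resumable two-way sync with the "master" CouchDB.
local
  .sync("https://couch.example.com/messages", { live: true, retry: true })
  .on("error", (err) => console.error("sync error", err));

// Views are built entirely from the local replica, so the UI works offline
// and never waits on the network to render.
async function roomView(roomId: string): Promise<Message[]> {
  const result = await local.allDocs({ include_docs: true });
  return result.rows.map((r) => r.doc!).filter((d) => d.roomId === roomId);
}
```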

[–] h3ndrik@feddit.de 1 points 6 months ago* (last edited 6 months ago) (1 children)

Hmmh. But how would that then change Mastodon not displaying previous (uncached) posts? Or queries running through the server, with its limited perspective?

And I fail to grasp how hashtags and the Lemmy voting system are related to a client/server architecture... You could just implement a custom voting metric on the server. Sure, you could also implement that five times in all the different apps, but you'd end up with the same functionality regardless of where you do the maths.

And if people are subscribed to like 50 different communities or watch the 'All' feed, there is a constant flow of ActivityPub messages all day long. Either you keep the phone running all day to handle that, or you do away with any notification functionality. And replicating the database to the device either forces you to drain the battery all day, or you only sync when the user opens the app. But if opening Lemmy means waiting a minute for the database to sync before new posts appear, that isn't a great user experience either.

I'd say we need nomadic identity and more customizability for things like hashtags, filters and voting. And dynamic caching, because as of now Fediverse servers regularly get overwhelmed if a high-profile person with lots of followers posts an image. But most of that needs to be handled by servers. Or we do a full-on P2P approach like with Nostr or other decentralized services. Or edge computing.

I don't quite get where, between federated and decentralized (as in P2P), your approach would sit. And whether it'd inherit the drawbacks of both worlds or combine their individual advantages.

And ActivityPub isn't exactly an efficient protocol, and neither are the server implementations. I think we could do way better with a more optimized, still federated protocol. Same with Matrix: it provides me with similar functionality to what my old XMPP server had, just with >10x the resource usage. And both are federated.

[–] rglullis@communick.news 2 points 6 months ago* (last edited 6 months ago) (1 children)

But how would that then change Mastodon not displaying previous (uncached) posts?

You default to push (messages that come through the server), and you fall back to pull (the client accessing a remote server) when your client wants to fetch data it has never seen.
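A minimal sketch of that fallback, assuming a local store fed by pushed activities (the in-memory map here just stands in for something like IndexedDB or PouchDB):

```typescript
const localStore = new Map<string, unknown>(); // stand-in for the client's local database

async function getObject(id: string): Promise<unknown> {
  const cached = localStore.get(id);
  if (cached !== undefined) return cached;            // was pushed to us via the home server

  // Pull: dereference the object straight from its origin server.
  const res = await fetch(id, {
    headers: { Accept: "application/activity+json" }, // ActivityPub media type
  });
  if (!res.ok) throw new Error(`pull failed: ${res.status}`);
  const obj = await res.json();
  localStore.set(id, obj);                            // cache so the next read is local
  return obj;
}
```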

And I fail to grasp how hashtags and the Lemmy voting system are related to a client/server architecture

hashtags, sorting and ranking methods, moderation policies, and pretty much everything aside from the events themselves are just ways to visualize/filter/aggregate the data according to the user's preferences. But it turns out that this is only "complex" when your data set is too large (which is bound to happen when your server has to serve lots of users). If you filter the data set at the client, its size becomes manageable.
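For example (purely illustrative; the "hot" formula below is just one possible user-chosen metric, not Lemmy's actual one), once the data set is the client's own local timeline, ranking and tag trends are small in-memory operations:

```typescript
interface Post {
  id: string;
  published: number; // epoch millis
  tags: string[];
  upvotes: number;
  downvotes: number;
}

// A ranking metric the user picks; the server never needs to know about it.
const hot = (p: Post) =>
  (p.upvotes - p.downvotes) /
  Math.pow((Date.now() - p.published) / 3_600_000 + 2, 1.5);

const rank = (posts: Post[]): Post[] => [...posts].sort((a, b) => hot(b) - hot(a));

// Trending hashtags are just a count over the same local data.
function trendingTags(posts: Post[]): [string, number][] {
  const counts = new Map<string, number>();
  for (const p of posts)
    for (const t of p.tags) counts.set(t, (counts.get(t) ?? 0) + 1);
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}
```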

we do a full-on P2P approach like with Nostr

Nostr is not p2p, and p2p is not what I am talking about. Having logic at the client does not mean "p2p".

XMPP server (has less resource usage and is) federated.

Yes, because the XMPP server is only concerned with passing messages around!

[–] h3ndrik@feddit.de 1 points 6 months ago* (last edited 6 months ago) (1 children)

Ah, you're right. Nostr uses relays. Now I know what the name stands for. That sounds a bit like your proposal taken to the extreme: the "servers" get downgraded to relatively simple relays that just forward stuff, and the magic happens completely(?) on the clients.

I'm still not sure about the application logic. Sure, I also like the logic close to me (the user). But the current trend has been towards the opposite for quite some time. Sometimes the explanation is simple: if you do most things on the server, you retain control over what's happening. That's great for selling ads and controlling the platforms in general. On the other hand, it also has some benefits for power efficiency on the devices. I'm not talking about computing stuff, but rather about something like Google Cloud Messaging, whose purpose is to reduce the number of open connections and the power draw by combining everything into a single connection for push messages. In order to decide when to wake a device, it needs access to the result of the filtering and message prioritization, which then has to be done server-side.

I'm also not sure about the filtering of hashtags. I mean, if you subscribe to a hashtag, or want to count occurrences to calculate a trend... something needs to work through all the messages and filter/count them. Doesn't that mean you'd need all Mastodon's messages of the day on your device? I'm sure that's technically possible. Phones are fast little computers, and 4G/5G sometimes has good speed. But I'm not sure what kind of additional traffic you'd estimate. 50 megabytes a day is 1.5 GB of your monthly cellular data plan. A bit less, because sometimes people are at home and use Wi-Fi... But then they also don't just use one platform, they have Matrix, Lemmy and Mastodon installed. And you can't just skip messages; you'd need to handle them all to calculate the correct number of upvotes and hashtag uses, even if the user doesn't open the app for a week.
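As a back-of-envelope check of that figure (the per-activity size is an assumption, not a measured number):

```typescript
const bytesPerActivity = 1_000;        // assumed ~1 KB of compressed ActivityPub JSON
const dailyBudgetMB = 50;              // the 50 MB/day figure from above
const activitiesPerDay = (dailyBudgetMB * 1_000_000) / bytesPerActivity; // 50,000 activities
const monthlyGB = (dailyBudgetMB * 30) / 1_000;                          // 1.5 GB per month
console.log({ activitiesPerDay, monthlyGB });
```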

I don't quite "feel it". But I also wouldn't rule out the possibility of something like a hybrid approach. Or some clever trickery to get around that for some of the things a social network is concerned with...

Or something I'd attribute more to edge computing: the client makes all the decisions and tells the edge (router) exactly which algorithm to use for the ranking, how to do the filtering and when it wants to be woken up... That device does the heavy lifting, caches stuff and forwards it in chunks as instructed by the client.
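A purely hypothetical sketch of what the client might hand to such an edge node; nothing like this exists today and every field name is made up:

```typescript
interface EdgePolicy {
  ranking: "new" | "hot" | "top-day";   // which algorithm the edge should run for us
  filters: { blockedInstances: string[]; mutedTags: string[] };
  wake: { mentions: boolean; directMessages: boolean; quietHours: [number, number] };
  batchKb: number;                       // forward cached results in chunks of this size
}

const myPolicy: EdgePolicy = {
  ranking: "hot",
  filters: { blockedInstances: ["spam.example"], mutedTags: ["politics"] },
  wake: { mentions: true, directMessages: true, quietHours: [23, 7] },
  batchKb: 256,
};

// The client would ship this once and then mostly sleep;
// the edge node applies it to the incoming firehose.
```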

[–] rglullis@communick.news 2 points 6 months ago* (last edited 6 months ago)

Doesn’t that mean you’d need all Mastodon’s messages of the day on your device?

You wouldn't need that. Think in terms of XMPP: a server could create the equivalent of a MUC room for each tag, and the client could "follow" a tag by joining the room. The server would then push all the messages it receives for that tag to the room. This scales quite well and still puts the client in control of the logic.
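A sketch of that analogy in XMPP terms (the domain and nickname are placeholders; the join itself is just a standard XEP-0045 MUC presence):

```typescript
// "Following" the #linux tag would look like joining a MUC room named after it.
function joinTagRoom(tag: string, nick: string): string {
  const room = `${tag}@tags.fediverse.example`;   // hypothetical tag-rooms service
  return [
    `<presence to="${room}/${nick}">`,
    `  <x xmlns="http://jabber.org/protocol/muc"/>`,
    `</presence>`,
  ].join("\n");
}

console.log(joinTagRoom("linux", "alice"));
```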

A similar architecture could be used for groups.
