this post was submitted on 16 Oct 2024
64 points (97.1% liked)

Fediverse

As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

[–] simple@lemm.ee 25 points 1 month ago (7 children)

Not even the biggest tech companies have an answer, sadly... There are bots everywhere, and social media is failing to stop them. The only reason there aren't more bots in the Fediverse is that we're not a big enough target to be worth the effort (though we do get occasional bot spam).

I guess the plan is to wait until there's an actual way to detect bots and deal with them.

[–] rglullis@communick.news 12 points 1 month ago (1 children)

Not even the biggest tech companies have an answer sadly…

They do have an answer: add friction. Add paywalls, require proof of identity, start using client-signed certificates that need to be validated by a trusted party, etc.
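A minimal sketch of what "validated by a trusted party" could look like (stdlib-only, so HMAC stands in for the asymmetric X.509 signatures real client certificates use; the issuer key and user names are made up for illustration):

```python
import hashlib
import hmac

# Toy "trusted party" validation: an issuer signs an identity token with
# a key only it holds, and the service checks the tag before accepting
# activity. Real client certificates use asymmetric signatures instead.
ISSUER_KEY = b"demo-issuer-secret"  # hypothetical key, held by the issuer

def issue_token(user_id: str) -> str:
    """The trusted party signs a user id it has verified out-of-band."""
    tag = hmac.new(ISSUER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{tag}"

def validate_token(token: str) -> bool:
    """The service recomputes the tag and rejects anything forged."""
    user_id, _, tag = token.partition(":")
    expected = hmac.new(ISSUER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

print(validate_token(issue_token("alice@lemm.ee")))      # True
print(validate_token("mallory@bot.net:deadbeef"))        # False: forged
```

The friction is exactly the point: a bot farm now needs a verified token per account, and the issuer becomes a choke point it has to get past.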

Their problem is that these answers affect their bottom line.

I think (hope?) we'll actually get to the point where bots become so ubiquitous that the whole internet turns into some kind of Dark Forest, and people are forced to learn how to deal with technology properly.

[–] simple@lemm.ee 8 points 1 month ago (1 children)

Their problem is that these answers affect their bottom line.

It's more complicated than that. Adding friction and paywalls would quickly kill their userbase. Requiring proof of identity or tracking users is a privacy disaster, and I'm sure many people (especially here) would outright refuse to hand IDs to companies.

These measures are more a compromise than a real solution. Even then, they're probably not foolproof, and bots will still get through.

[–] rglullis@communick.news 4 points 1 month ago (2 children)

Requiring proof of identity or tracking users is a privacy disaster, and I'm sure many people (especially here) would outright refuse to hand IDs to companies.

The blockchain/web3/cypherpunk crowd has already developed solutions for that. ZK-proofs let you confirm someone's identity without revealing it to the public, and make it impossible to correlate one proof with another.
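A toy sketch of the core idea, as a Schnorr-style proof of knowledge with a Fiat-Shamir hash challenge (the parameters are tiny demo values, not a real implementation):

```python
import hashlib
import secrets

# Toy zero-knowledge proof of knowledge: the prover convinces a verifier
# they know a secret x with y = g^x mod p, without revealing x.
# These parameters are deliberately tiny and NOT secure.
p = 179  # small safe prime: p = 2q + 1
q = 89   # prime order of the subgroup
g = 4    # generator of the order-q subgroup (2^2 mod p)

def prove(x: int, y: int) -> tuple[int, int]:
    """Produce a non-interactive proof (t, s) of knowledge of x."""
    r = secrets.randbelow(q - 1) + 1  # one-time nonce
    t = pow(g, r, p)                  # commitment
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % q
    s = (r + c * x) % q               # response binds nonce, challenge, secret
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    """Check g^s == t * y^c (mod p) -- the verifier never sees x."""
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret_x = 42                    # the identity holder's secret
public_y = pow(g, secret_x, p)   # the public identity commitment
t, s = prove(secret_x, public_y)
print(verify(public_y, t, s))    # True: proof checks out, x stays hidden
```

Because a fresh nonce goes into every proof, two proofs from the same secret look unrelated, which is what prevents correlation.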

Add other things like reputation systems built on a web of trust, and we can go a long way toward getting rid of bots, or at least making them as harmless as email spam is nowadays.
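A minimal sketch of that kind of web-of-trust scoring, where trust decays with each hop from the evaluating user (the graph, the names, and the decay factor are all illustrative assumptions):

```python
# Toy web-of-trust reputation: direct vouches count more than distant
# ones, and accounts nobody vouches for score zero.
def trust_score(vouches, me, target, decay=0.5, max_hops=3):
    """Return the best decayed trust reaching `target` from `me`."""
    best = {me: 1.0}
    frontier = {me}
    for _ in range(max_hops):
        nxt = set()
        for u in frontier:
            for v in vouches.get(u, ()):
                score = best[u] * decay
                if score > best.get(v, 0.0):
                    best[v] = score
                    nxt.add(v)
        frontier = nxt
    return best.get(target, 0.0)

vouches = {
    "alice": {"bob"},   # alice vouches for bob...
    "bob": {"carol"},   # ...who vouches for carol
}
print(trust_score(vouches, "alice", "carol"))    # 0.25 (two hops away)
print(trust_score(vouches, "alice", "mallory"))  # 0.0 (no path: likely a bot)
```

A bot farm can vouch for itself all it likes, but it scores zero until someone inside your trust graph vouches for it.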

[–] FaceDeer@fedia.io 4 points 1 month ago (1 children)

It's unfortunate that there's such a powerful knee-jerk prejudice against blockchain technology these days that perfectly good solutions are sitting right there in front of us but can't be used because they have an association with the dreaded scarlet letters "NFT."

[–] atrielienz@lemmy.world 3 points 1 month ago

I don't like or trust NFTs and, honestly, I don't think anybody else should for the most part. I feel the same about a lot of new crypto. But I don't necessarily distrust blockchain because of that. I think it has its own set of problems: where the record is kept is important, and therefore a target. We already have problems with leaks of PII, and any blockchain database that stores the data used to ID people will be a target too.

[–] ericjmorey@discuss.online 3 points 1 month ago (1 children)

ZK-proofs

This is a solution in the same way that PGP keys are a solution. There's a big gulf between the theory and a usable implementation.

[–] rglullis@communick.news 1 points 1 month ago* (last edited 1 month ago)

Right, but the problem with them is "bad usability", which amounts to "friction".

Like I said in the original comment, I kind of believe that things will get so bad that we'll eventually have to accept that the internet can only be used with these tools, and that "the market" will start focusing on building tools to lower these barriers to entry, instead of drawing its profits from Surveillance Capitalism.
