this post was submitted on 22 May 2024
Technology
you are viewing a single comment's thread
[–] maegul@lemmy.ml 248 points 6 months ago (22 children)

The moment word got out that Reddit (and now Stack Overflow) was tightening its APIs so it could sell our conversations to AI companies was when the game was given away. And I'm sure there were moments or clues before that.

This was when the "you're the product if it's free" arrangement metastasised into "you're a data farming serf for a feudal digital overlord whether you pay or not".

Google search transitioning from Good search engine for the internet -> Bad search engine serving SEO crap and ads -> Just use our AI and forget about the internet is more of the same. That their search engine is dominated by SEO and ads is part of it ... the internet, i.e. other people's content, isn't valuable any more, not with any sovereignty or dignity, least of all the kind envisioned in the ideals of the internet.

The goal now is to be the new internet, where you can bet your ass that there will not be any Tim Berners-Lee open sourcing this. Instead, the internet that we all made is now a feudal landscape on which we all technically "live" and in which we all technically produce content, but which is now all owned, governed and consumed by big tech for their own profits.


I recall back around the start of YouTube, which IIRC was the first hype moment for the internet after the dotcom crash, there was talk about what structures would emerge on the internet ... whether new structures would be created or whether older economic structures would impose themselves and colonise the space. I wasn't thinking too hard at the time, but it seemed intuitive to me that older structures would at least try very hard to impose themselves.

But I never thought anything like this would happen. That the cloud, search/google, mega platforms and AI would swallow the whole thing up.

[–] Hoxton@lemmy.world 21 points 6 months ago (4 children)

Well said! I’m still wondering what happens when the inevitable ouroboros of AI content referencing AI content referencing AI content makes the whole internet a self-perpetuating mess of unreadable content, and renders basically useless whatever value these companies once gained from it.

Would that eventually result in fresh, actual human-created content only coming from social media? I guess clauses about using your likeness will be popping up in TikTok's terms of service at some point (if they aren’t already).

[–] maegul@lemmy.ml 10 points 6 months ago* (last edited 6 months ago) (2 children)

I dunno, my feeling is that even if the hype dies down we’re not going back. Like a real transition has happened just like when Facebook took off.

Humans will still be in the loop through their prompts and various other bits and pieces and platforms (Reddit is still huge) … while we may just adjust to the new standard in the same way that many reported an inability to do deep reading after becoming regular internet users.

[–] gh0stcassette@lemmy.blahaj.zone 8 points 6 months ago (2 children)

I think it'll end up like Facebook (the social media platform, not the company). Eventually you'll hit model collapse for new models trained off uncurated internet data once a critical portion of all online posts are made by AI, and it'll become much more expensive to create quality, up-to-date datasets for new models. Older/less tech literate people will stay on the big, AI-dominated platforms getting their brains melted by increasingly compelling, individually-tailored AI propaganda, and everyone else will move to newer, less enshittified platforms until the cycle repeats.
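The model-collapse dynamic described above can be sketched with a toy simulation (all numbers here are illustrative, and the "model" is deliberately simplified to a Gaussian fit): each generation is trained only on the previous generation's output, with no fresh human data, and the estimation error compounds until the distribution degenerates.

```python
import numpy as np

rng = np.random.default_rng(0)

def next_generation(data):
    # "Train" a toy model: fit a Gaussian to the current data,
    # then produce the next training set entirely from the
    # model's own samples (no fresh human data mixed in).
    mu, sigma = data.mean(), data.std()
    return rng.normal(mu, sigma, size=len(data))

data = rng.normal(0.0, 1.0, size=30)   # generation 0: "human-made" data
stds = [data.std()]
for _ in range(1000):                  # each pass = a model trained on model output
    data = next_generation(data)
    stds.append(data.std())

# The fitted distribution's spread decays toward zero: diversity is lost,
# even though each individual generation looked like a faithful fit.
print(f"gen 0 spread: {stds[0]:.3f}  gen 1000 spread: {stds[-1]:.3g}")
```

Real language models are obviously not Gaussians, but the mechanism is the same: every generation underestimates the tails of its training distribution slightly, and sampling from your own fit turns that small bias into a one-way ratchet.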

Maybe we'll see an increase in discord/matrix style chatroom type social media, since it's easier to curate those and be relatively confident everyone in a particular server is human. I also think most current fediverse platforms are marginally more resistant to AI bots, because individual servers can have an application process that verifies your humanity, and then defederate from instances that don't do that.

Basically, anything that can segment the Unceasing Firehose of traffic on the big social media platforms into smaller chunks that can be more effectively moderated, ideally by volunteers, because a large tech company would probably just automate moderation and then you're back at square one.

[–] Hoxton@lemmy.world 3 points 6 months ago

Honestly, that sounds like the most realistic outcome. If the history of the internet is anything to go by, the bubble will reach critical mass and not so much pop, as slowly deflate when something else begins to grow and take its place of hype.

[–] maegul@lemmy.ml 1 points 6 months ago (1 children)

Great take.

Older/less tech literate people will stay on the big, AI-dominated platforms getting their brains melted by increasingly compelling, individually-tailored AI propaganda

Ooof ... great way of putting it ... "brain melting AI propaganda" ... I can almost see a sci-fi short film premised on this image ... with the main scene being when a normal-ish person tries to have a conversation with a brain-melted person and we slowly see from their behaviour and language just how melted they've become.

Maybe we’ll see an increase in discord/matrix style chatroom type social media, since it’s easier to curate those and be relatively confident everyone in a particular server is human.

Yep. This is a pretty vital project in the social media space right now that, IMO, isn't getting enough attention, in part I suspect because a lot of the current movements in alternative social media are driven by millennials and X-gen nostalgic for the internet of 2014 without wanting to make something new. And so the idea of an AI-protected space doesn't really register in their minds. The problems they're solving are platform dominance, moderation and lock-in.

Worthwhile, but in all seriousness about 10 years too late and after the damage has been done (surely our society would be different if social media hadn't gone down the path it did from 2010 onward). Now what's likely at stake is the enshittification or en-slop-ification (slop = unwanted AI-generated garbage) of internet content and the obscuring of quality human-made content, especially that from niche interests. Algorithms started this, which alt-social are combating, which is great.

But good community building platforms with strong privacy or "enclosing" and AI/bot protection mechanisms are needed now. Unfortunately, all of these clones of big-social platforms (lemmy included) are not optimised for community building and fostering. In fact, I'm not sure I see community hosting as a quality in any social media platform at the moment apart from discord, which says a lot I think. Lemmy's private and local-only communities (on the roadmap apparently) are a start, but still only a modification of the reddit model.

[–] afraid_of_zombies@lemmy.world 2 points 6 months ago (1 children)

person tries to have a conversation with a brain-melted person and we slowly see from their behaviour and language just how melted they’ve become.

I see you have met my Fox News watching parents.

[–] maegul@lemmy.ml 0 points 6 months ago

LOL (I haven't actually met someone like that, in part because I'm not a USian and generally not exposed to that type ATM ... but I am morbidly curious TBH).

[–] Hoxton@lemmy.world 2 points 6 months ago

You’re absolutely right about not going back. Web 3.0 I guess. I want to be optimistic that a distinction between all the garbage and actual useful or real information will be visible to people, but like you said, general tech and media literacy isn’t encouraging, hey?

Slightly related, but I’ve actually noticed a government awareness campaign where I live about identifying digital scams. Be nice if that could be extended to incorrect or misleading AI content too.

[–] assassin_aragorn@lemmy.world 6 points 6 months ago (2 children)

It should end up self-regulating once AI is trained on AI material. That's the downfall of companies not bothering to clearly identify AI-produced material. It'll spiral into a hilarious mess.

[–] Hoxton@lemmy.world 4 points 6 months ago

I’m legit looking forward to when Google returns completely garbled and unreadable search results, because someone is running an automated Ads campaign that sources another automated campaign and so on, with the only reason it rises to the top being that they put in the highest bid.

I doubt Google will do shit about it, but at least the memes will be good!

[–] afraid_of_zombies@lemmy.world 1 points 6 months ago

Hasn't it already happened? All culture is derivative, yes, all of it. And look at how much of it is awful, yet we navigate fine. I keep hearing stats like every second YouTube gets 4 hours more content, and yet I use YouTube daily, despite being very, very confident that only a fraction of a percent of what it has is of any value to me.

Same for books, magazines, news, podcasts, radio programs, music, art, comics, recipes, articles....

We already live in the post-information-explosion world, where the same stuff gets churned over and over again. All I see AI doing is speeding this up. Now instead of a million YouTube vids I won't watch getting added next week, it will be ten million.

[–] afraid_of_zombies@lemmy.world 3 points 6 months ago

TikTok was banned, so it ain't coming from there. Can't get universal healthcare, but we can make sure to protect kids from the latest dance craze.

[–] GnomeKat@lemmy.blahaj.zone 2 points 6 months ago

That's a technical issue that can likely be solved. I doubt some feedback loop of training data will be the downfall of AI. The way to stop it is to refuse to use it (let's be real, the regulators aren't gonna do shit).
