this post was submitted on 11 Jun 2025
382 points (99.5% liked)

Technology


Hey everyone, this is Olga, the product manager for the summary feature again. Thank you all for engaging so deeply with this discussion and sharing your thoughts so far.

Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March. As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia and sustain our movement into the future. As a consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community.

Looking back, we realize the next step with this message should have been to provide more of that context and to make space for folks to engage further. With that in mind, we’d like to take a step back so we have more time to talk things through properly. We’re still in the very early stages of thinking about a feature like this, so this is actually a really good time to discuss it here.

A few important things to start with:

  1. Bringing generative AI into the Wikipedia reading experience involves a serious set of decisions with important implications, and we intend to treat it as such.
  2. We do not have any plans to bring a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea and for any future idea around AI-summarized or adapted content.
  3. With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

We’ve also started putting together some context around the main points brought up in the conversation so far, and will follow up with that in separate messages so we can discuss further.

all 41 comments
[–] spankmonkey@lemmy.world 63 points 4 days ago (2 children)

Articles already have a summary at the top due to the page format, why was AI shoved into the process?

[–] cannedtuna@lemmy.world 32 points 4 days ago
[–] Mac@mander.xyz 12 points 4 days ago* (last edited 4 days ago) (1 children)

Grok please ELI5 this comment so i can understand it

[–] prex@aussie.zone 25 points 4 days ago* (last edited 4 days ago) (1 children)

I know your comment was /s but I can't not repost this:

[–] Mac@mander.xyz 5 points 4 days ago

Hahaha, I too have that saved. I love it so much.

[–] prole@lemmy.blahaj.zone 48 points 4 days ago

I can't wait until this "put LLMs in everything" phase is over.

[–] Ulrich@feddit.org 41 points 4 days ago (2 children)

So they:

  • Didn't ask editors/users
  • Noticed loud and overwhelmingly negative feedback
  • "Paused" the program

They still don't get it. There's very little practical use for LLMs in general, and certainly not in scholastic spaces. The content is all user-generated anyway, so what's even the point? It's not saving them any money.

It also seems like a giant waste of resources for a company that constantly runs giant banners asking for money and claims to basically be on the verge of closing up every time you visit their site.

[–] SilverShark@lemmy.world 5 points 4 days ago

I also think that generating blob summaries just feeds into the brain-rot content we see everywhere on the web that's destroying people's attention spans. Wikipedia is kind of good for reading something that is long enough, not just some quick, simplistic, brain-rot-inducing blob.

[–] ooo@sh.itjust.works 6 points 4 days ago (1 children)

If her list were straight talk:

  1. We're gonna make up shit
  2. But don't worry, we'll manually label it; what could go wrong?
  3. Dang, no one was fooled; let's figure out a different way to pollute everything with alternative facts

[–] benjhm@sopuli.xyz 17 points 4 days ago (1 children)

Since much (so-called) "AI" basic training data depends on Wikipedia, wouldn't this create a feedback loop that could quickly degenerate?

[–] Petter1@lemm.ee 7 points 4 days ago (1 children)
[–] plyth@feddit.org 1 points 3 days ago

Only if the summary is included in the training data.

[–] SufferingSteve@feddit.nu 6 points 3 days ago (1 children)

Lol, the source data for all AI is starting to use AI to summarize.

Have you ever tried to zip a zipfile?

But then on the other hand, as compilers become better, they become more efficient at compiling their own source code...

[–] lennivelkant@discuss.tchncs.de 0 points 3 days ago (1 children)

Yeah but the compilers compile improved versions. Like, if you manually curated the summaries to be even better, then fed it to AI to produce a new summary you also curate... you'll end up with a carefully hand-trained LLM.

[–] SufferingSteve@feddit.nu 2 points 3 days ago (1 children)

So if the AI-generated summaries are better than man-made summaries, this would not be an issue, would it?

If AI constantly refined its own output, sure, unless it hits a wall eventually or starts spewing bullshit because of some quirk of training. But I doubt it could learn to summarise better without external input, just like a compiler won't produce a more optimised version of itself without human development work.
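
That "wall" is easy to picture with a toy model (not a real LLM, just an illustration): if each summarization pass is lossy, the amount of retained information only ever shrinks, and no later pass can recover what an earlier one discarded.

```python
# Toy illustration of lossy self-refinement: each "summary" pass keeps
# only every other sentence. Information decreases monotonically, and
# nothing a later pass does can restore what was already dropped.
def summarize(sentences):
    return sentences[::2]  # crude stand-in for a lossy summarizer

text = [f"fact {i}" for i in range(32)]
generations = [text]
for _ in range(4):
    generations.append(summarize(generations[-1]))

sizes = [len(g) for g in generations]
print(sizes)  # strictly shrinking: [32, 16, 8, 4, 2]
```

Without external input (human curation, new source text), iterating the process can only converge toward less content, not better content.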

[–] phoenixz@lemmy.ca 17 points 4 days ago

I passionately hate the corpo speech she's using. This fake list of "things she's done wrong but now she'll do them right, pinky promise!!" whilst completely ignoring the actual reason for the pushback they've received (which boils down to "fuck your AI, keep it out") is typical management behavior after they were caught trying to screw over the workers in some way.

We're going to screw you over one way or the other, we just should have communicated it better!

Basically this.

[–] SpicyLizards@reddthat.com 9 points 4 days ago

I don't see how AI could benefit Wikipedia. The power consumption alone isn't worth it. Wikipedia is one of the rare AI-free zones, which is part of why it is good.

[–] KnitWit@lemmy.world 8 points 4 days ago* (last edited 4 days ago)

I canceled my recurring donation over this about a week ago, explaining that this was the reason. One of their people sent me a lengthy response that I appreciated. Still, I'm going to wait a year before I reinstate it; hopefully they'll have fully moved on from this idea by then. The response sounded a lot like this though, kinda wishy-washy.

[–] sentient_loom@sh.itjust.works 6 points 4 days ago* (last edited 4 days ago)

Is there a way for us to complain to Wikipedia about this? I contribute money every year, and I will 100% stop if they're shoving more LLM slop down my throat.

Edit: You can contribute to the discussion in the link, and you can email them at addresses found here: https://wikimediafoundation.org/about/contact/

[–] DigDoug@lemmy.world 3 points 4 days ago (1 children)

If they thought this would be well-received they wouldn't have sprung it on people. The fact that they're only "pausing the launch of the experiment" means they're going to do it again once the backlash has subsided.

RIP Wikipedia, it was a fun 24 years.

[–] pastermil@sh.itjust.works 2 points 4 days ago (2 children)

Not everything is black and white, you know. Just because they made this blunder doesn't mean they're down for good. The fact that they're willing to listen to feedback, whatever their reason, still shows some good sign.

Also keep in mind the organization that runs it has a lot of people, each with their own agenda; some have bad ones, but are extremely useful.

I mean yeah, sure, do 'leave' Wikipedia if you want. I'm curious where you'd go.

[–] DigDoug@lemmy.world 1 points 3 days ago

Me saying "RIP" was an attempt at hyperbole. That being said, shoehorning AI into something whose big selling point is that it's user-made is a gigantic misstep. Maybe they'll listen to everybody, but given that they tried it at all, I can't see them properly backing down, especially when it was worded as "pausing" the experiment.

[–] Richat@lemmy.ml -2 points 4 days ago* (last edited 4 days ago) (1 children)

the fact they're willing to listen to feedback, whatever their reason was, is a good sign

Oh you have so much to learn about companies fucking their users over if you think this is the end of them trying to shove AI into Wikipedia

[–] pastermil@sh.itjust.works 2 points 4 days ago

Then teach me daddy~

[–] OmegaLemmy@discuss.online 1 points 4 days ago* (last edited 4 days ago)

I don't think Wikipedia is for the benefit of users anymore. But what even are the alternatives? Leftypedia? Definitely not Britannica.

[–] Kusimulkku@lemm.ee 0 points 4 days ago

It does sound like it could be handy

[–] Fizz@lemmy.nz 0 points 4 days ago

Noo Wikipedia why would you do this

[–] count_dongulus@lemmy.world -3 points 4 days ago* (last edited 4 days ago) (3 children)

Summarization is one of the things LLMs are pretty good at. Same for the other idea Wikipedia floated: auto-generating the "simple article" variants that are normally maintained by hand to dumb down content.

But if they're pushing these tools, they need to be pushed as handy tools for editors to consider leveraging, not forced behavior for end users.

[–] davidgro@lemmy.world 11 points 4 days ago (1 children)

Summaries that look good are something LLMs can do, but not summaries that actually have a higher ratio of important/unimportant than the source, nor ones that keep things accurate. That last one is super mandatory on something like an encyclopedia.

[–] prole@lemmy.blahaj.zone 0 points 4 days ago

The only application I've kind of liked so far has been the one on Amazon that summarizes the content of the reviews. Seems relatively accurate in general.

[–] sentient_loom@sh.itjust.works 7 points 4 days ago

If we need summaries, let's let a human being write the summaries. We are already experts at writing. We love doing it.

[–] propitiouspanda@lemmy.cafe 4 points 4 days ago

not forced behavior for end users.

This is what I'm constantly criticizing. It's fine to have more options, but they should be options and not mandatory.

No, having to scroll past an AI summary for every fucking article is not an 'option.' Having the option to hide it forever (or even better, opt-in), now that's a real option.

I'd really love to see the opt-in/opt-out data for AI. I guarantee businesses aren't including the option or recording data because they know it will show people don't want it, and they have to follow the data!