this post was submitted on 17 Oct 2025
1101 points (98.6% liked)

Technology

[–] GeneralEmergency@lemmy.world 3 points 8 hours ago

Surely it can't be because of the decline in quality caused by despotic admins defending their own personal fiefdoms.

[–] coffee_nutcase207@lemmy.world 15 points 14 hours ago

That is too bad. Wikipedia is important.

[–] r0ertel@lemmy.world 10 points 17 hours ago (2 children)

This will be unpopular, but hear me out. Maybe the decline in visitors is only a decline in the folks who are simply looking for a specific word or name they forgot. Like, that one guy who believed in the survival of the fittest. Um. Let me try to remember. I think he had an epic beard. Ah! Darwin! I just needed a reminder; I didn't want to read the entire article on him because I did that years ago.

Look at your own behavior on Lemmy. How often do you click/tap through to the complete article? What if it's just a headline? What if the whole article is pasted into the body of the post? Clickbait headlines are almost universally hated, but they're a desperate attempt to drive traffic to the site. Sometimes all you need is the article synopsis. Soccer team A beats team B in overtime. Great, that's all I need to know... unless I have a fantasy team.

[–] i_stole_ur_taco@lemmy.ca 3 points 9 hours ago (1 children)

Half my visits to Wikipedia are because I need to copy and paste a Unicode character and that’s always the highest search result with a page I can easily copy and paste the exact character from.

[–] Scrollone@feddit.it 3 points 9 hours ago

Em dash? Wikipedia.

Nice-looking quotes? Wikipedia.

Accented uppercase letters? Wikipedia.

(Yeah, I know. The last one can only be understood by Italian speakers; or speakers of other languages with stupid keyboard layouts)
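(As an aside: if all you need is the character itself, most standard libraries can look it up by its official Unicode name instead of a Wikipedia round trip. A minimal Python sketch; the names below are standard Unicode character names:)

```python
import unicodedata

# Look up characters by their official Unicode names
for name in ["EM DASH",
             "LEFT DOUBLE QUOTATION MARK",
             "RIGHT DOUBLE QUOTATION MARK",
             "LATIN CAPITAL LETTER E WITH GRAVE"]:
    ch = unicodedata.lookup(name)
    print(f"{ch}  U+{ord(ch):04X}  {name}")
```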

[–] Kissaki@feddit.org 1 points 9 hours ago* (last edited 9 hours ago) (1 children)

If you don't check their name - Darwin - on Wikipedia, where do you check it? A random AI? When you're on Facebook, their AI? When you're on Reddit, their AI? How trustworthy are they? What does that mean for general user behavior in the short and long term?

When you're satisfied with a soccer match score from a headline, fair enough. Which headline do you refer to, though? Who provides it? Who ensures it is correct?

Wikipedia is an established and good source for many things.

The point is that people get their information elsewhere now. Somewhere it may be incomplete, wrong, or maliciously misrepresented. Somewhere discovering related information is even further away: instead of being the next paragraph, a scroll, or an index-nav jump away, there's no hyperlink and no further information.

Personally, I regularly explore and verify sources.

I doubt most of those visits to Wikipedia were as shallow as finding just one name or term. Maybe one piece of information, which may already go deeper than shallow term-finding, and the cross-references and notes may spark interest or relevant concerns.

[–] Petter1@discuss.tchncs.de 1 points 8 hours ago

I think that you did not understand the OC correctly…

What the OC is saying is that the person searching for the lost word is verification enough: as soon as the word is seen, the memory is triggered and the searching person already knows the information.

[–] SpaceCowboy@lemmy.ca 50 points 1 day ago (2 children)

If this AI stuff weren't a bubble and the companies dumping billions into it were capable of any long-term planning, they'd call up Wikipedia and say "how much do you need? We'll write you a cheque."

They're trying to figure out nefarious ways of getting data from people, while Wikipedia literally has people working to create high-quality data, for a relatively small amount of money, that's very valuable to these AI companies.

But nah, they'll just shove AI into everything and blow the equivalent of Wikipedia's annual budget in a week on electricity alone to shove unwanted AI slop into people's faces.

[–] nova_ad_vitum@lemmy.ca 15 points 1 day ago

But nah, they'll just shove AI into everything and blow the equivalent of Wikipedia's annual budget in a week on electricity alone to shove unwanted AI slop into people's faces.

You're off by several orders of magnitude, unfortunately. Tech giants are spending the equivalent of the entire fucking Apollo program on various AI investments every year at this point.

[–] Suffa@lemmy.wtf 19 points 1 day ago (1 children)

Because they already ate through every piece of content on Wikipedia years and years ago. They're at the stage where they've trawled nearly the entire internet and are running out of new content to find.

[–] fishy@lemmy.today 14 points 1 day ago

So now the AI trawls other AI slop, so it's essentially getting inbred. That's why they literally need you to subscribe to their AI slop: so they can get new data directly from you, because we're still nowhere near AGI.

[–] utopiah@lemmy.world 44 points 1 day ago (3 children)

(pasting a Mastodon post I wrote a few days ago about StackOverflow, but IMHO it applies to Wikipedia too)

"AI, as in the current LLM hype, is not just pointless but rather harmful epistemologically speaking.

It's a big word so let me unpack the idea with 1 example :

  • StackOverflow, or SO for short.

So SO is cratering in popularity. Maybe it's related to the LLM craze, maybe not, but in practice fewer and fewer people are using SO.

SO is basically a software developer social network that goes like this:

  • hey I have this problem, I tried this and it didn't work, what can I do?
  • well (sometimes condescendingly), it works like this, this worked for me, and here is why

then people discuss via comments, answers, votes, etc., until, hopefully, the most appropriate (which does not mean "correct") answer rises to the top.

The next person with the same, or similar enough, problem gets to try right away what might work.

SO is very efficient in that sense but sometimes the tone itself can be negative, even toxic.

Sometimes the person asking did not bother to search much, sometimes they clearly have no grasp of the problem, so replies can be terse, if not worse.

Yet the content itself is often correct in the sense that it does solve the problem.

So SO in a way is the pinnacle of "technically right" yet being an ass about it.

Meanwhile, what if you could get roughly the same mapping between a problem and its solution, but in a nice, even sycophantic, manner?

Of course the switch will happen.

That's nice, right?.. right?!

It is. For a bit.

It's actually REALLY nice.

Until the "thing" you "discuss" with has, as its main KPI, keeping you engaged (its owner gets paid per interaction), regardless of how usable (let's not even say true or correct) its answers are.

That's a deep problem because that thing does not learn.

It has no learning capability. It's not just "a bit slow" or "dumb" but rather it does not learn, at all.

It gets updated with a new dataset, fine-tuned, etc., but there is no action that leads to invalidating a hypothesis, generating a novel one, and then setting up a safe environment to test it (that's basically what learning is).

So... you sit there until the LLM gets updated, but... with what? Now that fewer and fewer people bother updating your source (namely SO), how is your "thing" going to learn, sorry, to get updated, without new contributions?

Now if we step back, not at the individual level but at the collective level, we can see how short-termist the whole endeavor is.

Yes, it might help some, even a lot of, people to "vile code", sorry, I mean "vibe code", their way out of a problem, but if:

  • they, the individual,
  • it, the model,
  • we, society,

do not contribute back to the dataset to upgrade from...

well, I guess we are going faster right now, for some, but overall we will inexorably slow down.

So yes, epistemologically, we are slowing down, if not worse.

Anyway, I'm back on SO, trying to actually understand a problem; trying to actually learn from my "bad" situation and, rather than randomly trying the statistically most likely solution, genuinely understand WHY I got there in the first place.

I'll share my answer back on SO, hoping to help others.

Don't just "use" a tool; think, genuinely. It's not just fun, it's also liberating.

Literally.

Don't give away your autonomy for a quick fix, you'll get stuck."

originally on https://mastodon.pirateparty.be/@utopiah/115315866570543792

[–] amzd@lemmy.world 10 points 1 day ago

Most importantly, the pipeline, from finding a question on SO that you also have, to answering that question after doing some more research, is now completely derailed: if you ask an AI a question and it doesn't have a good answer, you have no way to contribute your eventual solution back.

[–] ThirdConsul@lemmy.ml 13 points 1 day ago* (last edited 1 day ago) (1 children)

I honestly think that LLMs will result in no further progress ever being made in computer science.

Most past inventions and improvements were made out of necessity, because of how sucky computers are and how unpleasant it is to work with them (we call the results "abstraction layers"). And it was mostly done on companies' dime.

Now companies will prefer to produce slop (even more), hoping to automate slop production.

[–] I3lackshirts94@lemmy.world 9 points 1 day ago

As an expert in my engineering field, I would agree. LLMs have been a great tool in my job for improving my technical writing, or for getting over the hump of coding something every now and then. That's where I see the future for ChatGPT/AI LLMs: providing a tool that can help people broaden their skills.

There is no future in which they provide the expertise and depth of understanding that would be required to make progress in any field, unless specifically trained and guided. I do not trust them with anything highly advanced or technical, as I feel I start to teach them.

[–] ChaoticEntropy@feddit.uk 36 points 1 day ago* (last edited 1 day ago) (7 children)

AI will inevitably kill all the sources of actual information. Then all we're going to be left with is the fuzzy learned version of information plus a heap of hallucinations.

What a time to be alive.

[–] kazerniel@lemmy.world 28 points 1 day ago (1 children)

“With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”

I understand the donors aspect, but I don't think anyone who is satisfied with AI slop would bother to improve wiki articles anyway.

[–] drspawndisaster@sh.itjust.works 25 points 1 day ago (4 children)

The idea that there's a certain type of person who's immune to a social tide is not very sound, in my opinion. If more people use genAI, it may teach people who could have become editors in later years to use genAI instead.

[–] badbytes@lemmy.world 255 points 2 days ago (23 children)

Wikipedia is becoming one of the few places whose information I trust.

[–] SatansMaggotyCumFart@piefed.world 107 points 2 days ago (39 children)

It’s funny that MAGA and ml tankies both think that Wikipedia is the devil.

[–] OsrsNeedsF2P@lemmy.ml 142 points 2 days ago* (last edited 2 hours ago) (7 children)

There's a lot of problems with Wikipedia, but in my years editing there (I'm extended protected rank), I've come to terms that it's about as good as it can be.

In all but one edit war, the better sourced team came out on top. Source quality discussion is also quite good. There's a problem with positive/negative tone in articles, and sometimes articles get away with bad sourcing before someone can correct it, but this is about as good as any information hub can get.

[–] brbposting@sh.itjust.works 69 points 2 days ago

Thank you for your service 🫡

[–] Treczoks@lemmy.world 25 points 1 day ago

Not me. I value Wikipedia content over AI slop.

[–] RedWheelbarrow@lemmy.world 41 points 1 day ago (16 children)

I guess I'm a bit old school, I still love Wikipedia.

[–] Mrkawfee@feddit.uk 22 points 1 day ago* (last edited 1 day ago) (3 children)

I asked a chatbot for scenarios in which AI wipes out humanity, and the most believable one is where it makes humans so dependent on it, and so infantilized, that we eventually stop reproducing and die out.

[–] llama@lemmy.zip 11 points 1 day ago (2 children)

Yet I still have to go to the Wikipedia page for the episode lists of my favorite TV shows, because every time I ask AI which ones to watch, it starts making up episodes that either don't exist or have the wrong numbers.

[–] Kissaki@feddit.org 1 points 9 hours ago

Sounds like it wants you to ask about it and then wants to write fan fiction for you.

[–] Scrollone@feddit.it 1 points 9 hours ago

Let's all repeat: LLMs don't know any facts. They're just a thesaurus on steroids.
