this post was submitted on 15 Jul 2025
362 points (93.5% liked)
Fediverse
35549 readers
486 users here now
A community to talk about the Fediverse and all its related services using ActivityPub (Mastodon, Lemmy, KBin, etc.).
If you want help moderating your own community, head over to !moderators@lemmy.world!
Rules
- Posts must be on topic.
- Be respectful of others.
- Cite the sources used for graphs and other statistics.
- Follow the general Lemmy.world rules.
Learn more at these websites: Join The Fediverse Wiki, Fediverse.info, Wikipedia Page, The Federation Info (Stats), FediDB (Stats), Sub Rehab (Reddit Migration)
founded 2 years ago
Why do people bring this up every fucking time?
"I used chatgpt"
I asked Gemini, and my browser crashed, so, idk, man I guess it's knowledge too powerful for human minds to contain.
How would you phrase this differently?
"It looks like this feature was added 5 years ago."
If asking for confirmation, just ask for confirmation.
So, your solution is for the user to provide less information and then respond to people to inform them if they used chatgpt if asked?
It just seems like far fewer replies are needed if they say they used ChatGPT up front.
Additionally, if they don’t say it and no one asks, in the future people might look for a source, at least this way there is a warning there might be misinformation.
I know what you're going to say next: they should research the thing themselves independently of ChatGPT. But honestly, they probably don't care/have the time to look up release notes from the past few years.
Why would anyone ask where they got the info if it is accurate?
The point is that it might not be accurate. It's like saying, "a friend told me…"
It lets the reader know that the information being shared was presented as truthful, but wasn’t verified by the commenter themselves.
Apparently the feature was added 5 years ago.
My partner describes her bowel movements to me when she returns from her daily ablutions.
This is the golden age of misinformation and you are bitching about citations?
Because they know it's not accurate and explicitly mention it so you know where this information comes from.
Then why post it at all?
It makes idiots whine
Because they'd still like to know? It's generally expected that you do some research on your own before asking other people, and tell them what you've already tried.
Asking ChatGPT isn’t research.
ChatGPT is a moderately useful tertiary source. Quoting Wikipedia isn't research, but using Wikipedia to find primary sources and reading those is a good faith effort. Likewise, asking ChatGPT in and of itself isn't research, but it can be a valid research aid if you use it to find relevant primary sources.
At least some editor will usually make sure Wikipedia is correct. There's nobody ensuring ChatGPT is correct.
Just using the "information" it regurgitates isn't very useful, which is why I didn't recommend doing that. Whether the information summarized by Wikipedia and ChatGPT is accurate really isn't important, you use those tools to find primary sources.
I'd argue that it's very important, especially since more and more people are using it. Wikipedia is generally correct, and people, myself included, edit incorrect things. ChatGPT is a black box with no user feedback. It's also stupid to waste resources running an inefficient LLM for something a regular search and a few minutes of time, along with about a bite of an apple's worth of energy, could easily handle. And after all that, you're going to need to check all the sources ChatGPT used anyway, so how much time is it really saving you? At least with Wikipedia I know other people have looked at the same things I'm looking at, and a small percentage of those people will actually correct errors.
Many people aren’t using it as a valid research aid like you point out, they’re just pasting directly out of it onto the internet. This is the use case I dislike the most.
From what I can tell, running an LLM isn't really all that energy intensive, it's the training that takes loads of energy. And it's not like regular searches don't use loads of energy to initially index web results.
And this also ignores the gap between having a question, and knowing how to search for the answer. You might not even know where to start. Maybe you can search a vague question, but you're essentially hoping that somewhere in the first few results is a relevant discussion to get you on the right path. GPT, I find, is more efficient for getting from vague questions to more directed queries.
I find this attitude much more troubling than responsible LLM use. You should not be trusting tertiary sources, no matter how good their track record, you should be checking the sources used by Wikipedia too. You should always be checking your sources.
That's beyond the scope of my argument, and not really much worse than pasting directly from any tertiary source.
AI seems to think it’s always right but in reality it is seldom correct.
People also say they googled, unfortunately
Not the same thing.
Google allows for the possibility that the user was able to think critically about the sources a search returned.
ChatGPT is a drunk uncle confidently stating something he heard third-hand from Janet in accounting, and you're expected to take him at his word.
Google results are like:
Is peertube compatible with the fediverse?
ADVERT
Introduction: A lot of people wonder if peertube works with other peertube instances....
ADVERT
What is peertube? Peertube was set up in 1989 by John Peer...
Pop-up: do you like our publication? Give us your email address.
ADVERT
Why you might want to set up peertube: peertube is a decentralised way....
ADVERT
Please support us! From £30 a month you can help us to write more.
What is the fediverse? The fediverse is a technology...
ADVERT
Articles you may also like:
ADVERT
So can peertube instances talk to each other?
ADVERT
the answer is yes.
ADVERT
In conclusion, peertube is very...
Comments (169)
John Smith wrote at 12:28 on Friday
At this point, ad blocker is pretty much mandatory for me, just like how antivirus software used to be a decade ago (probably more)
but at least your drunk uncle won't boil the oceans in the process too
No one boils the ocean by using ChatGPT.
One transatlantic flight produces the same amount of CO2 as 600,000 ChatGPT requests; if you use Qwen 2.5, you need to make nearly 2 million requests.
To put this in perspective, the transport for Bezos's wedding in Venice alone equals about 54,000,000 ChatGPT requests.
Using a LLM once in a while is negligible.
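To sanity-check the scale of those ratios: the thread only gives relative numbers, so this sketch assumes a commonly cited ballpark of roughly 1,000 kg of CO2 per passenger for one transatlantic flight (an assumption, not a figure from the comments). Dividing by the 600,000-requests figure gives a per-request footprint, and the wedding comparison falls out the same way:

```python
# Back-of-envelope check of the ratios quoted above.
# ASSUMPTION: ~1,000 kg CO2 per passenger for one transatlantic flight
# (a commonly cited ballpark; the thread itself only gives ratios).
FLIGHT_CO2_KG = 1_000
REQUESTS_PER_FLIGHT = 600_000   # ChatGPT requests per flight, per the comment
WEDDING_REQUESTS = 54_000_000   # Bezos-wedding transport, per the comment

grams_per_request = FLIGHT_CO2_KG * 1_000 / REQUESTS_PER_FLIGHT
flights_equivalent = WEDDING_REQUESTS / REQUESTS_PER_FLIGHT

print(f"~{grams_per_request:.2f} g CO2 per ChatGPT request")      # ~1.67 g
print(f"wedding transport ~= {flights_equivalent:.0f} flights")   # 90 flights
```

Under that assumed flight figure, a single request lands in the low single-digit grams of CO2, which is consistent with the "negligible for occasional use" claim above.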
How dare you, my drunk uncle is completely capable of boiling the oceans! He was even boasting about it at our last family dinner!