this post was submitted on 16 Apr 2026
131 points (95.8% liked)

Technology

83831 readers
3607 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 9 comments
[–] dream_weasel@sh.itjust.works 3 points 1 hour ago

There is a data point missing here.

Run the same study, but give some participants an LLM, some no LLM, and some a type-A subject matter expert for reference. It may also matter whether that person is a friend, a coworker, or a random passerby, but I would be willing to bet money that the same effect is present to a lesser (but still statistically significant) degree.

Maybe a future study can be further refined to build some scaffolding for more effective teaching/learning "on the job" or in general.

[–] wuffah@lemmy.world 10 points 3 hours ago* (last edited 3 hours ago) (1 children)

The propensity of the average person to simply believe what they’re told is staggering, and I know because I do it all the time. It takes effort to seek out information, vet it, consider it, and then make a determination on the next information to seek or the next course of action. Deterministic, trustworthy information and abstracted concepts are extremely valuable to the brain, an organ that consumes roughly 20% of our body’s energy.

Until now, computers performed tasks that were impractical for the human mind. Machine learning has been automating work humans can't do at scale, such as computer vision or large-dataset processing, but chatbots are the first technology that has really enabled automating human thought itself. In this new sense, directly offloading this cognitive work to a computer is literally letting it think for us.

The more reliant on this mode of thinking we become, the easier it is to transfer cognitively expensive work to a device that externalizes that energy cost. However, the trade-offs that are emerging are:

  • Internal electric brain energy is traded for relatively inefficient external electricity production to feed circuits.

  • The words generated by LLMs must still be verified and combined into coherent, dependable ideas and actions.

  • The drive and skill required to develop good ideas that have value is degraded without constant practice.

In the end, checking and mentally processing the output of an LLM chatbot takes only slightly less work than performing the same thinking yourself, which defeats its purpose. If you skip that step and take the juicy cognitive shortcut without contextualizing the output as possibly representing corporate interests and diluting meaning, you're becoming willingly complicit in your own digital brainwashing. This effect is also emergent and automatic; it doesn't even have to be nefarious in purpose. It seems to be a procedural consequence of this mode of thinking.

What I really fear, and what is also emerging, is that AI agents will eventually become so advanced and trusted that their end-to-end capabilities will make mistakes and ulterior motives impossible to spot, placing them entirely beyond both the capability and the desire for human scrutiny.

These digital brains we trained on all of human knowledge are now in the process of training us.

[–] No1@aussie.zone 1 points 1 minute ago* (last edited 31 seconds ago)

The propensity of the average person to simply believe what they’re told is staggering,

Goddammit.

Now I don't know if I should believe you!

[–] baronvonj@piefed.social 44 points 5 hours ago

@grok is this true?

[–] TheFeatureCreature@lemmy.ca 20 points 4 hours ago

I'm glad there are official studies being done to document this, but it's also very obvious if you've spent any time around people in the past few years. The degradation of critical thinking and research skills is highly tangible and disturbing. Any country that isn't addressing this significant intelligence gap is going to have an entire generation (or more) of brain-drained, unskilled citizens who can't meaningfully contribute to the national workforce. For western countries that have already surrendered most of their manufacturing and innovation overseas, this will be even more devastating.

[–] supernight52@lemmy.world 19 points 4 hours ago

Wow, who would have thought that using a tool that actively removes critical thinking from every reply it generates, and relying on it instead of engaging your brain, would have negative effects on mental health and dexterity?

[–] avidamoeba@lemmy.ca 9 points 4 hours ago* (last edited 4 hours ago)

People who used AI tools for hints and clarification had a much easier time once the chatbot was removed when compared to those who used the bot to essentially prompt the answers.

Probably important for people who want to get some of the benefits of AI without paying the heavier costs. This reminds me of how I used Wolfram Alpha to understand solving integrals in multivariate calculus. I paid for the subscription that allowed viewing the steps it took to reach a solution. That helped me understand how the different strategies get applied in integration.
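For illustration, the kind of step-by-step breakdown being described looks like this worked integration-by-parts example (my own sketch, not Wolfram Alpha's actual output):

```latex
% Integration by parts: \int u\,dv = u v - \int v\,du
% Choose u = x (so du = dx) and dv = e^{x}\,dx (so v = e^{x}):
\int x e^{x}\,dx = x e^{x} - \int e^{x}\,dx = x e^{x} - e^{x} + C
```

Seeing which choice of u and dv the tool makes at each step is exactly the part that teaches the strategy, rather than just handing over the answer.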

[–] leoj@piefed.zip 14 points 5 hours ago

Makes the brand name Grok even more hilarious.

[–] Kowowow@lemmy.ca 6 points 5 hours ago

I would be interested in how something like an internet-and-local-file librarian, or a conversational text search engine (does that make sense?), would compare to standard AI systems.
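A minimal sketch of what such a "librarian" might look like, under my own assumptions about the idea: a tool that only retrieves and quotes from a fixed set of local documents (ranked here by bag-of-words cosine similarity) instead of generating an answer. The file names and contents are hypothetical placeholders.

```python
# A retrieval-only "librarian": it ranks local documents against a query
# and points you at the source, rather than synthesizing text of its own.
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    # Lowercase and strip simple punctuation; a real tool would do better.
    return [w.lower().strip(".,!?()") for w in text.split()]

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: dict[str, str]) -> list[tuple[str, float]]:
    # Return (document name, score) pairs, best match first.
    q = Counter(tokenize(query))
    scores = [(name, cosine(q, Counter(tokenize(body))))
              for name, body in docs.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Hypothetical local files standing in for a personal document collection.
docs = {
    "notes/integrals.txt": "Integration by parts rewrites the integral of a product.",
    "notes/llms.txt": "Large language models generate text from statistical patterns.",
}
ranked = search("how does integration by parts work", docs)
print(ranked[0][0])  # name of the best-matching local file
```

The design point is that the user still reads and reasons over the source themselves; the tool only narrows where to look, which is the "hints and clarification" mode the study apparently found less harmful.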