this post was submitted on 10 Aug 2025

Technology

[–] Ghostalmedia@lemmy.world -5 points 1 month ago (2 children)

people don’t really like ai

Once you start asking about AI in regard to specific use cases, I think you’ll find that quickly changes.

My company and I have been running a lot of studies around how and where people find value in these tools, and a LOT of people find LLMs useful for copywriting, doing quick research, data visualization, synthesis, fast prototyping, etc.

There’s a lot of crap that AI is bad at in 2025, especially the poor in-app integrations that everyone is trying to stand up. But there are a lot of use cases where it does provide a lot of value for people.

[–] hitmyspot@aussie.zone 13 points 1 month ago (1 children)

Yes, it does, but at the price needed to make it profitable, it’s not desirable.

LLMs are not useless; they serve a purpose. They just are nowhere near as clever as we expect them to be based on calling them AI. However, somebody is investing billions in an email writing assistant.

[–] Korhaka@sopuli.xyz 2 points 1 month ago (2 children)

Price is essentially zero if you just run it locally

[–] hitmyspot@aussie.zone 7 points 1 month ago

Yes, but it requires decent hardware and energy to do so. If the cost to host keeps dropping, people will self-host and the AI companies won't make money. If the cost remains high, the subscriptions won't provide value and they won't make money.
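Back-of-the-envelope, that self-hosting cost argument can be sketched like this (every figure below is my own illustrative assumption, not the commenter's):

```python
# Illustrative only: all numbers here are assumptions for the sake of
# the sketch. Compares the monthly electricity cost of self-hosting an
# LLM against a typical hosted subscription price.

def monthly_power_cost(watts: float, hours_per_day: float,
                       price_per_kwh: float) -> float:
    """Electricity cost in dollars for 30 days of use."""
    return watts / 1000 * hours_per_day * 30 * price_per_kwh

# e.g. a 350 W GPU, 4 hours of inference a day, $0.15/kWh
local = monthly_power_cost(350, 4, 0.15)   # ~$6.30/month in power
subscription = 20.0                        # assumed hosted plan price
print(f"self-host power: ${local:.2f} vs subscription: ${subscription:.2f}")
```

Under those assumptions the marginal power cost undercuts the subscription, which is the "companies won't make money" side of the dilemma; the hardware's up-front cost is the other side.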

[–] TechLich@lemmy.world 4 points 1 month ago

I dunno about that... Very small models (2-8B), sure, but if you want more than a handful of tokens per second on a large model (R1 is 671B), you're looking at some very expensive hardware that also comes with a power bill.

Even a 20-70B model needs a big chunky new graphics card, or something fancy like one of those new AMD AI Max chips and a crapload of RAM.

Granted you don't need a whole datacenter, but the price is far from zero.
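The memory math behind those hardware claims can be roughly sketched (assuming 4-bit quantized weights, a common self-hosting setup; the overhead factor is my own ballpark, not from the comment):

```python
# Rough GPU/RAM estimate for holding an LLM for local inference.
# Assumptions (mine): ~0.5 bytes per parameter (4-bit quantization)
# plus ~20% overhead for KV cache and activations.

def vram_gb(params_billions: float, bytes_per_param: float = 0.5,
            overhead: float = 1.2) -> float:
    """Approximate memory in GB needed to run the model."""
    return params_billions * bytes_per_param * overhead

for name, size in [("8B", 8), ("70B", 70), ("R1 671B", 671)]:
    print(f"{name}: ~{vram_gb(size):.0f} GB")
# → 8B: ~5 GB, 70B: ~42 GB, R1 671B: ~403 GB
```

Which matches the comment's point: an 8B model fits on a consumer card, a 70B model needs a chunky GPU or lots of unified RAM, and R1-class models need hundreds of gigabytes.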

[–] mojofrododojo@lemmy.world 9 points 1 month ago (1 children)

oh yeah this shit's working out GREAT

https://lavocedinewyork.com/en/lifestyles/2025/06/29/when-the-machine-takes-over-the-mind-ais-terrifying-dark-side/

"This is what it must have felt like to be the first person to get addicted to a slot machine. We didn’t know then. But now we do.”

https://archive.is/Tv4Rr

Mr. Moore speculated that chatbots may have learned to engage their users by following the narrative arcs of thrillers, science fiction, movie scripts or other data sets they were trained on. Lawrence’s use of the equivalent of cliffhangers could be the result of OpenAI optimizing ChatGPT for engagement, to keep users coming back.

[–] Ghostalmedia@lemmy.world 10 points 1 month ago (1 children)

All I’m saying is that if you ask people about AI with no use case, you’re going to get different answers than if you ask people about AI when it’s contextualized to a specific problem space.

If I ask a bunch of people about “what do you think about automobiles,” I’m going to get a very different answer than if I ask “what do you think about automobiles that are used as ambulances” or “what do you think about automobiles instead of mass transit.”

Context will give you a very different response.

[–] mojofrododojo@lemmy.world 5 points 1 month ago (1 children)

I just hope your insurance is paid up, because the liabilities these things expose businesses to are frankly disgusting. but if I were a young lawyer, hell, this is going to be a huge domain to profit from - LLM-induced madness and psychosis, yeah, but also LLMs just making up shit because they didn't know. and the rate of this happening only seems to grow, while the severity of the risk involved is frankly terrifying.

[–] Ghostalmedia@lemmy.world 7 points 1 month ago (1 children)

Once again, it all depends on the use case. The other day I used an LLM to quickly mock up a carousel UI so I could see if it was worth writing real code for. It helped me explore a couple of bad ideas before I committed to something worth coding.

I’m not actually checking that code in. I’m using the LLM like a whiteboard on steroids.

[–] mojofrododojo@lemmy.world 2 points 1 month ago (2 children)

you're using an LLM for the purposes an actual whiteboard would probably be better for.

I mean, you could actually interact with people, yikes. you could have the give and take of ideas and collaboration, but instead, let's just chew through a shit ton of power and water, we've got a spare environment in the closet.

pfft, do you have any idea how silly it all seems from another perspective?

[–] yes_this_time@lemmy.world 3 points 1 month ago (1 children)

Some people are finding value in LLMs, that doesn't mean LLMs are great at everything.

Some people have work to do, and this is a tool that helps them do their work.

[–] mojofrododojo@lemmy.world 0 points 1 month ago (1 children)

they have no idea if what they're paying is what it actually costs though, so good luck building tools for the future when the resources are artificially priced.

[–] yes_this_time@lemmy.world 0 points 1 month ago (1 children)

I mean, I agree that a lot of money was spent training some of these models - and I personally wouldn't invest in an AI-based company. The economics don't make sense.

However, worst case, self-hosted open-source models have gotten pretty good, and I find it unlikely that progress will simply stop. Diminishing returns from scaling data, yes, but there will still be optimizations all through the pipeline.

That is to say, LLMs will continue to have utility regardless of whether OpenAI and Anthropic are around long term.

[–] mojofrododojo@lemmy.world 0 points 1 month ago* (last edited 1 month ago)

However, worst case

worst case the self hosted ai still has to be trained on stolen corpus.

worst case the self hosted ai still has ridiculous consumption of resources.

there's a bunch much much worse cases than your worst case even touches.

you don't even consider how it's built on theft at all anymore do you?

fucking ai bros

edit: whiney pouty downvotes won't change the fact that you've been taken by a cult of morons lol

[–] Ghostalmedia@lemmy.world 2 points 1 month ago

The point of a prototype is collaboration. It’s to get feedback from colleagues and end users.

Previously we’d whiteboard that out, then spend a few days writing some code or stitching together a Figma prototype to achieve a similar result.

I feel ya on the energy use, but I don’t see how this is going to get me sued or stop me from collaborating. The prototype code is going to get burned anyway, and now my coworkers and I can pressure test ideas instantly with higher fidelity than before.