That's called testing, and the companies behind these LLMs should devote a significant share of their resources to testing before launch.
"Product testing is a crucial process in product development where a product's functionality, performance, safety, and user experience are evaluated to identify potential issues and ensure it meets quality standards before release" (Gemini)
We are literally using alpha/beta software to deal with life-altering issues, and these companies are somehow allowed to test their products on the public, without consequences.
Can you think of any other industry where a product reaches mass adoption untested? Imagine airlines adding a new autopilot system that allows single-crew flights - but it's never been tested. Or an electrical appliance sold without being tested for shock hazards?
It's similar with AI - they already tell us they don't know exactly how it all works (the black box), yet they are content to unleash it on the masses and see what happens. The social and personal effects this will have are already being studied, and it's not looking great.
It’s not even labelled as a beta test either.
That they don't know how it works is a lie. The mysticism and anthropomorphization are purposeful marketing. Pretending they don't know how it works also lets them pretend that the fact that these systems constantly lie is something that can be fixed, rather than a fundamental aspect of the technology.
This has sadly been the norm in the tech industry for at least a decade now. The whole ecosystem has become so accustomed to quick injections of investment cash that products and businesses no longer grow organically; instead they hit the scene in one huge development and marketing blitz.
Consider companies like Uber or Airbnb. Their goal was never to make a safe, stable, or even legal product. Their goal was always to be first: command the largest user base possible in the shortest time possible, then worry about all the details later. Both of those products have had disastrous effects on existing businesses and communities while operating in anti-competitive ways and flouting existing laws, but so what? They're popular! Tens of millions of people already use them, and by the time government regulation catches up with what they're doing, it's already too late. What politician would be brave enough to try to ban a company like Uber? What regulator still has enough power to rein in a company the size of Airbnb?
OpenAI is playing the same game. They don't care if their product is safe — hell, they don't even really care if it's useful, or profitable. They just want to be ubiquitous, because once they achieve that, the rest doesn't matter.
"Our AI is the most powerful, useful, knowledgeable and trustworthy system out there, it can be the cornerstone of modern society... unless you use it wrong. In which case it is corrupted trash."
It's like buying a car and deliberately driving it into a wall to make the headline "cars make you disabled". Or buying a hammer, hitting your thumb, and blaming hammers for it.
Guys, it's a TOOL. Every tool is both useful and harmful. It's up to you how you use it.
Hammers have been perfected over millennia. Cars over a century, with regulations and safety testing getting stricter by the year.
Car makers test exactly that, and with good reason, since cars can and do crash!
What are you suggesting, that we buy cars that didn't pass crash tests?
To me, it seems like you're arguing something similar for AI.
Are you saying hammers should be thumb-hitting-proof?
To me, it seems like they are arguing that "testing" whether a hammer can smash your thumb doesn't actually provide any useful information on the safety of a hammer.
To me, it seems they are saying that Estwing makes a better hammer than Fisher-Price, even though the Fisher-Price hammer is far less likely to cause injury if you hit your thumb.
All this article says is that we shouldn't give a toddler a real hammer, and we shouldn't stuff a general-purpose LLM like ChatGPT into a Tickle Me Elmo.
Last time I checked, no car actively encourages you to drive into a wall.
Have you noticed how we aren't getting articles about ChatGPT providing the steps to build a bomb anymore? The point is that these companies are completely capable of doing something about it.
The companies are completely capable of doing something about it, but this is not a competition in who does something. Plus, aiming for a PG-13 world will have consequences far worse than a text generator doing exactly what it is asked.
But ChatGPT told me to!
I think the headline would be "Illegal, Non-Safety Tested Car Disables Driver in Crash"