this post was submitted on 21 Jun 2025
-14 points (30.6% liked)
Technology
That is the point where I stopped reading.
Yes, the author of this article should worry about AI, because AI is indeed quite effective at writing nonsense articles like this one. But AI is nowhere near replacing real specialists. And it isn't a question of quantity; it's a question of principle, rooted in how modern "AIs" work. As long as those principles don't change, AIs won't be able to do any job that requires logic and stable, repeatable results.
Ironically, replacing shitty clickbait journalists is something AI can, and likely will, do in the near future.
It can complete coding tasks. But that’s not the same as replacing a developer. In the same way that cutting wood doesn’t make me a carpenter and soldering a wire doesn’t make me an electrician. I wish the AI crowd understood that.
It can complete coding tasks, but not well AND unsupervised. To get it to do something well, I need to tell it what it did wrong over 4 or 5 iterations.
This is close to my experience for a lot of tasks, but unless I'm working in a tech stack I'm unfamiliar with, I find doing it myself leads not just to better results, but faster ones, too. The problem is that it makes you have to work harder to learn new areas, and management thinks it's faster for everything.
I think it's still faster for a lot of things. If you have several different ideas for how to approach a problem the robot can POC them very quickly to help you decide which to use. And while doing that it'll probably mention something that'll give you ideas for another couple approaches. So you can come up with an optimal solution in about the same time as it'd take to clack out a single POC by hand.
Yeah, I was thinking about production code when I wrote that. Usually I can get something working faster that way, and for tests it can speed things up, too. But the code is so terrible in general.
Edit: production isn't exactly what I was thinking. Just, like, up to some standard above merely working.
Yep. I write code almost entirely with A.I. now, for my OWN projects.
The amount of iteration and editing it requires almost calls for a new specialty dev called "A.I. developer support".
It's honestly kinda awful. I've been trying to use it a bit to help speed up some of my projects at work, and it's a crapshoot how well it helps. Some days I can give it the function I'm writing, with an explanation of its purpose and the error output, and it helps me fix it in 5 minutes. Other days I spend an hour endlessly iterating through asinine replies that get me nowhere (like when I tried to use it to help figure out a not very well documented API, and it kept correcting me and using a different method/endpoint until it gave up and went back to my way, which didn't even work! I ended up just hacking together a workaround that got it done in the most annoying way possible, but it accomplished the task, so WTFE).
A nice "trick": After 4 or so responses where you can't get anywhere, start a new chat without the wrong context. Of course refine your question with whatever you have found out in the previous chat.
80,000 Hours are the same cultists from LessWrong/EA who believe the singularity is coming any time now, and they're also the core of the people trying to build their imagined machine god at OpenAI and Anthropic.
It's all very much expected. Verbose nonsense is their speciality, and they were doing it long before chatbots were a thing.