this post was submitted on 21 Jun 2025
-14 points (30.6% liked)

Technology

[–] Lembot_0003@lemmy.zip 26 points 1 day ago (3 children)

AI can now complete real-world coding tasks

That is the point where I stopped reading.
Yes, the author of this article should worry about AI, because AI is indeed quite effective at writing nonsense articles like this one. But AI is nowhere near replacing real specialists. And it isn't a question of quantity; it's a fundamental question of how modern "AIs" work. As long as those principles don't change, AIs won't be able to do any job that requires logic and stable, repeatable results.

[–] Nima@leminal.space 6 points 1 day ago

ironically, replacing shitty clickbait journalists is something AI can and will likely do in the near future.

[–] SMillerNL@lemmy.world 12 points 1 day ago (2 children)

It can complete coding tasks. But that’s not the same as replacing a developer. In the same way that cutting wood doesn’t make me a carpenter and soldering a wire doesn’t make me an electrician. I wish the AI crowd understood that.

[–] not_woody_shaw@lemmy.world 9 points 1 day ago (1 children)

It can complete coding tasks, but not well AND unsupervised. To get it to do something well, I need to tell it what it did wrong over 4 or 5 iterations.

[–] Repelle@lemmy.world 2 points 1 day ago (1 children)

This is close to my experience for a lot of tasks, but unless I'm working in a tech stack I'm unfamiliar with, I find doing it myself leads to not just better results, but faster ones, too. Problem is, it makes you work harder to learn new areas, and management thinks it's faster for everything...

[–] not_woody_shaw@lemmy.world 1 points 1 day ago (1 children)

I think it's still faster for a lot of things. If you have several different ideas for how to approach a problem the robot can POC them very quickly to help you decide which to use. And while doing that it'll probably mention something that'll give you ideas for another couple approaches. So you can come up with an optimal solution in about the same time as it'd take to clack out a single POC by hand.

[–] Repelle@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

Yeah, I was thinking about production code when I wrote that. Usually I can get something working faster that way, and for tests it can speed things up, too. But the code is so terrible in general

Edit: production isn't exactly what I was thinking. Just, like, up to some standard above merely working

[–] thedruid@lemmy.world 4 points 1 day ago* (last edited 1 day ago) (1 children)

Yep. I write code almost entirely with A.I. now, for my OWN projects.

The amount of iteration and editing it takes almost calls for a new specialty dev role: "A.I. developer support."

[–] Passerby6497@lemmy.world 1 points 1 day ago (1 children)

It's honestly kinda awful. I've been trying to use it a bit to help speed up some of my projects at work, and it's a crapshoot how well it helps. Some days I can give it the function I'm writing, with an explanation of its purpose and the error output, and it helps me fix it in 5 minutes. Other days I spend an hour endlessly iterating through asinine replies that get me nowhere. Like when I tried to use it to figure out a not-very-well-documented API: it kept "correcting" me toward a different method/endpoint, until it gave up and went back to my way, which didn't even work! I ended up hacking together a workaround that got it done in the most annoying way possible, but it accomplished the task, so WTFE.

[–] rikudou@lemmings.world 1 points 22 hours ago

A nice "trick": after 4 or so responses where you can't get anywhere, start a new chat without the wrong context. Of course, refine your question with whatever you found out in the previous chat.
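The trick above can be sketched in code. This is only an illustration of the idea, not any real chat API: `refine_and_restart` and the endpoint details in the example are hypothetical, and the point is just that the new prompt carries the *lessons* from the failed chat instead of its full (polluted) context.

```python
# Sketch of the "fresh chat" trick: instead of piling corrections onto a
# conversation that has gone off the rails, restart with one refined,
# self-contained prompt that folds in what the failed chat taught you.

def refine_and_restart(original_question, lessons_learned):
    """Build a fresh prompt: the original question plus a bullet list of
    facts discovered (or dead ends ruled out) in the previous chat."""
    constraints = "\n".join(f"- {lesson}" for lesson in lessons_learned)
    return (
        f"{original_question}\n\n"
        f"Already ruled out or discovered:\n{constraints}"
    )

# Hypothetical example: a poorly documented pagination API.
prompt = refine_and_restart(
    "How do I paginate results from the /v2/items endpoint?",
    [
        "the `page` query parameter is ignored; use `cursor` instead",
        "responses cap at 100 items regardless of `limit`",
    ],
)
print(prompt)
```

The new chat then gets `prompt` as its first message, with none of the earlier wrong turns in its context window.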

[–] fullsquare@awful.systems 4 points 1 day ago* (last edited 1 day ago)

80,000 Hours are the same cultists from LessWrong/EA who believe the singularity is coming any time now, and they're also the core of the people trying to build their imagined machine god at OpenAI and Anthropic

it's all very much expected. verbose nonsense is their speciality, and they were churning it out long before chatbots were a thing

[–] rikudou@lemmings.world 2 points 23 hours ago

I'm not even gonna read it, but the 3rd pyramid is hilarious. Go on executives, just do it! See how it goes.

[–] Opinionhaver@feddit.uk 10 points 1 day ago (1 children)

Working with your hands is a good way. I feel like online discussions often forget that people like this even exist.

[–] Deestan@lemmy.world 5 points 1 day ago* (last edited 1 day ago)

I feel that this article is built on optimism rather than empiricism or rational extrapolation, with trains of thought driven way into oversimplified territory.

Basically like the LessWrong, self-proclaimed "longtermist", and Zizian crowds.

Illustrative example: Categorizing nannies under "human touch strongly preferred - perhaps as a luxury". This assumes automation is not only possible to a degree way beyond what we see signs of, but that the service itself isn't inherently human.

[–] schmorpel@slrpnk.net 3 points 1 day ago

Huh, I wonder what wrote this stupid article on this not at all fishy fucking website. /s