this post was submitted on 18 Mar 2026
823 points (92.6% liked)

Technology

Altman’s remarks in his tweet drew an overwhelmingly negative reaction.

“You’re welcome,” one user responded. “Nice to know that our reward is our jobs being taken away.”

Others called him a “f***ing psychopath” and “scum.”

“Nothing says ‘you’re being replaced’ quite like a heartfelt thank you from the guy doing the replacing,” one user wrote.

[–] MangoCats@feddit.it 7 points 22 hours ago (1 children)

Sam is still early, and obnoxious, but I've been monitoring AI progress since the 1980s. Roughly one year ago, AI coding agents turned a corner: from being not much more useful than a Google search (which is, itself, very useful) to getting things right more often than they hallucinate. That was an important watershed, because from that point they could make forward progress, fixing more mistakes than they introduced.

In the 12 months since, there has been steady and rapid forward progress. If you haven't asked an AI to code something for you in the last 3 months, you're out of touch with where it's at today.

Even free Gemini rips out really good bash scripts faster than you can look up the first weird thing you want it to do.

[–] AnarchistArtificer@slrpnk.net 1 points 19 hours ago (2 children)

I personally don't use AI, but I concede that for some people it can be useful, if they use the AI as a tool for their own thinking rather than subordinating themselves to the chatbot. Mostly, this means ensuring that they're able to check whether the AI is right or not.

When I dabbled in using coding AI, there were a few basic tasks it was useful for. There were a few hallucinations, but because the task was basic and well within my proficiency to check, I was able to set it right; even with these corrections, it still saved me time overall. However, when I tried to use it on tasks beyond my own technical expertise, things got messy really quickly. Things weren't working, so I felt sure there must be some hallucinated errors, but I couldn't tell what they were because the task was at or beyond the limit of my own technical competency. A couple of times I managed to eventually figure out how to fix the error, but it was exhausting compared to ordinary problem solving, and I felt dissatisfied by the lack of learning involved.

Ordinarily, struggling through a complex code problem leaves me with a greater understanding of my domain, but I didn't this time. I guess I did get a little better at prompting the AI, but I felt like I learned far less than if I had solved the problem myself. Battling through to build a thorough understanding of my problem and my tools takes a long time upfront, but the next time I do this task or a similar one, I'll be quicker, and these time improvements will build and build as my proficiency continues to grow. That's why I stopped dabbling with AI coding assistants/agents — because even though using them for this complex task still saved me time compared to usual, in the long term, the time savings from using an AI is negligible compared to the time savings from increasing my own proficiency.

Now, I hear what you're saying about how much more effective AI coding agents are becoming, and how the hallucination rate is lower than it was. I haven't had much first-hand experience for quite a few months now, but I have no doubt that I would be incredibly impressed at the progress in such a relatively short time. The time savings from using AI are likely larger today than when I tested it, and in a year it'll be even better. However, in my view, that still won't compete with the long-term time savings of a human gaining proficiency. You might disagree with me on that.

But the thing is, that human proficiency isn't just a means to save time on their regular task, but a valuable end in and of itself. That proficiency is how we protect ourselves when things go wrong in unexpected ways. Even if the AI models we're using now could perfectly capture and reproduce the sum of our collected knowledge, I don't believe they can come close to rivalling humans in the realm of creating new knowledge, or adapting to completely novel circumstances. Perhaps some day, that might be possible for AI, but that's not going to be possible with any of the AI architectures that we have today. In the meantime, creative and proficient humans will continue to find ways to exploit the flaws in AI systems, possibly for nefarious ends. A society that relies heavily on AI will need more technical expertise, not less.

"Even free Gemini rips out really good bash scripts faster than you can look up the first weird thing you want it to do."

The crux of my argument is: how does someone who isn't proficient in bash tell whether the bash script the AI has generated is a good one or a bad one? Even if the hallucination rate continues to drop, it will always be non-zero. Sure, humans are also far from perfect, but that's why so many of our systems put multiple sets of eyes on critical code: junior developers are mentored by more experienced devs, who help ensure they don't break things through inexperience (at least in an ideal world; in practice, many senior devs are so overworked and stretched thin that they can't give the guidance they should, which is again a case for more proficient humans). Replacing proficient humans with AI will build a culture of unquestioningly following the AI, and even if its error rate is a fraction of the human error rate, it will never be zero, and therefore there will be disasters.

And when it all goes to shit, who will fix it if we have allowed human proficiency to wither away and die?

[–] MangoCats@feddit.it 1 points 7 hours ago

“how does someone who isn’t proficient in bash tell whether the bash script that AI has generated is a good one or a bad one?”

What I find most bash scripts lacking is consideration of error cases, edge cases, faulty inputs, etc. It's pretty trivial to write a script that copies some files from here to there, but what if the source files are missing? What if the destination has write-permission errors? What if the destination already has files with the same names?

My latest Gemini script-writing conversation started with "do this in a bash script" and it gave me a nice short script that did that. Then it asked about the edge cases, one by one, and if/how I wanted to handle them. Four out of five of its observations were relevant to the task, and I told it to proceed with code to handle those (error out / show help / prompt for additional input / ...), which it added with informative comments about what it intended to do. The other cases didn't make sense for the larger picture (which I hadn't explained to it, so no real fault there...)

Yeah, it's still bash glop, and that "shopt -s nullglob" is one of those things I have to look up when I see it, to be sure it does what I think it does, but if you have any reasonable understanding of bash, this is one of the more readable bash scripts I've encountered. As the professional charged with creating the script, it's your job to be sure it's right, not the AI's, no more than it was your text editor's responsibility to get it right in the past - even with code-completion tools. The AI is a tool that helps put something together for you efficiently, code completion gone wild, but it's no more responsible for that code than a chainsaw is responsible for where a tree falls.
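For anyone else who has to look it up every time: a two-line sketch of what nullglob actually changes. Without it, a glob that matches nothing is passed through as a literal string; with it, the glob expands to nothing:

```shell
# Default (nullglob unset): unmatched glob stays as the literal pattern.
shopt -u nullglob
files=(/no/such/dir/*.txt)
echo "${#files[@]}"    # 1 - the array holds the literal '/no/such/dir/*.txt'

# With nullglob set: unmatched glob expands to an empty list.
shopt -s nullglob
files=(/no/such/dir/*.txt)
echo "${#files[@]}"    # 0 - the array is empty
```

That's why scripts that loop over `"$dir"/*` want it set: otherwise an empty directory hands the loop one bogus filename.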

And when it all goes to shit, who will fix it if we have allowed human proficiency to wither away and die?

8 billion of us are so far down that rabbit hole in so many areas that we'd better make sure it doesn't all go to shit, because if/when it does, we'll be lucky to have 800,000 humans surviving even 50 years after the SHTF.

[–] MangoCats@feddit.it 1 points 7 hours ago

rather than subordinating themselves to the chatbot.

I find that a great many people prefer to subordinate themselves to "their boss," whoever or whatever that may be... it's just so much easier than fighting for what you believe "is right" when you are obviously powerless to fix it.

when I tried to use it on tasks that were beyond my own technical expertise, things got messy really quickly.

And that's the difficult thing to measure: is this task just annoyingly packed with detail and volume, something you could work through if you spent the time and effort? (If so, AI could be a very useful tool.) Or is the task really beyond your understanding? In that case, you're trusting the AI to fill in your blanks, which is irresponsible and, today, likely to fail. In the future there will be a big grey area where the AI is usually "good enough" - but how can you tell? In computer coding, there's a certain amount to be gained by having "independent" AI agents review the code and eventually reach consensus. In other areas, you can leverage AI to do what I have done in the past: teach yourself what you need to know in order to do what you're trying to do. The question there is: how do you know when you have learned enough to actually "know what you are doing" well enough to do it successfully? There are far too many people in the world who are overconfident in their insufficient understanding of what they are messing with, and AI is like a gasoline spray fountain on their smoldering embers.

I couldn’t tell what they were because the task was at or beyond the limit of my own technical competency.

I feel like writing a "guide to AI development" is a bit futile at the moment, because by the time you've written it and somebody reads it, the field will have evolved enough to invalidate much of what you wrote. However, one thing that has remained constant over the past 6 months, in my opinion, is the need for visibility. Don't just ask AI to design you a bridge with construction drawings. Ask it to show its work: include the structural analysis - equations, graphs of the solutions, references to standards, copies of the relevant parts of the standards - enough visibility and detail to spot its mistakes and oversights. In code, this means requirements, implementation plans, test plans, test execution results, and traceability from the code to the requirements and tests.

A couple of times, I managed to eventually figure out how to fix the error, but it was so exhausting

I find that when I fix errors for an AI (or a junior programmer), it will often proceed to make the same mistake again, even going so far as to overwrite my working solution with its faulty code. If, instead, you work with it - Socratic method style - to find the issue, document what went wrong, and solve it for itself, it tends to repeat that particular kind of problem less in the future. Until you start a new project and don't bring over the "memory files" from the old one...

struggling through a complex code problem leaves me with a greater understanding of my domain, but I didn’t this time.

I find it's a bit of a mix in that respect. I "learned Rust" by having AI code in Rust for me. I certainly know more about Rust than when I started, and I've certainly built bigger, more complex, and more successful projects with AI and Rust than if I had just started plugging away at Rust the way I did BASIC in the 1980s. Have I "learned Rust" better, or not as well, by using AI than if I had gone at it without? Is that even a relevant question? Rust is here, AI is here; it's probably better, or at least more efficient, to learn to code Rust with AI tools than to first learn Rust without AI and then learn all the pitfalls of using AI to code Rust later. I'm sure that if I invested 2000 hours learning Rust without AI I would know more about coding in Rust than I do after investing 200 hours learning it with AI, but is that a comparison even worth making?

I did get a little better at prompting the AI

That's a thing that's hard for me to judge. My ability to make programs with AI has improved dramatically over the past 6 months; how much of that is the AI models improving? Clearly they are improving, but then, how much is me learning to work more effectively with AI? I feel the experience of working with the inferior models has been valuable, because the methods I developed to work with them also get better results from the newer models. If I had waited 12 months to jump in after the models had improved dramatically, I might not be as good at getting results from the superior models: they can at least make something functional from poor prompts, whereas the inferior models wouldn't give you anything of value unless you brought some skill in specification, scoping, and refinement.

the time savings from using an AI is negligible compared to the time savings from increasing my own proficiency.

Increasing your own proficiency is an investment well worth making, but after 40 years of coding experience, I find that AI is saving me significant time and effort beyond anything I'm likely to "learn better" before I die. Mostly what AI is good at, for me, is the voluminous detail work: documentation, unit test coverage, reviews for consistency. In development (of anything) there's a tension between "single source of truth" / "don't repeat yourself" on one side and copious examples, unit tests, and redundant information on the other, which ensure things don't get off track when you're not looking at them. AI doesn't do it automatically, but you can direct it to constantly review the redundant information for consistency and then fix the unwanted deviations to get back in line with your intent.