this post was submitted on 23 Dec 2025
842 points (97.6% liked)
Technology
I've been coding for a while. I made an honest, eager attempt at building a real, functioning thing with all code written by AI: a Breakout clone using SDL2, with music.
The game should look good, play well, have cool effects, and be balanced. It should have an attract screen, scoring, a win state, and a lose state.
I also required the code to be maintainable, meaning I should be able to look at every single line and understand it well enough to defend its existence.
I did make it work. And honestly Claude did better than expected. The game ran well and was fun.
But: The process was shit.
I spent 2 days and several hundred dollars babysitting the AI, to get something I could have built in 1 day, including learning SDL2.
Everything that turned out well did so because I brought years of skill to the table and could see when Claude was coding itself into a corner, and could tell it to break the code up into modules, collate globals, remove duplication, pull out abstractions, etc. I had to detect all of that and instruct it on how to fix it. Until I did, it kept adding and re-adding bugs, because it had produced so much shittily structured code that it was confusing itself.
TL;DR: An LLM can write maintainable code if given full, constant attention by a skilled coder, at 40% of that coder's speed.
It would be really interesting to watch a video of this process. Though I'm certain it would be pretty difficult to pull off the editing.
One of the first videos I watched about LLMs was of a journalist who didn't know anything about programming using ChatGPT to build a JavaScript game in the browser. He'd just copy-paste code, then paste the errors back in and ask for help debugging. It even had to walk him through setting up VS Code and a git repo.
He said it took him about 4 hours to get a playable platformer.
I think that's an example of a unique capability of AI: it can let a non-programmer kinda program, a non-Chinese-speaker kinda speak Chinese, a non-artist kinda produce art.
I don't doubt that it'll get better, but even now it's very useful in some cases (nowhere near enough to justify the trillions of dollars being spent though).
Yeah, I'm not sure the way we allocate resources is justified either, in general. I guess ultimately the problem with AI is that it gives capital access to skills it would otherwise have to interact with laborers to get.
I think that people are too enthralled with the current situation that's centered around LLMs, the massive capital bubble and the secondary effects from the expansion of datacenter space (power, water, etc).
You're right that they do allow for the disruption of labor markets in fields that were not expecting computers to be able to do their jobs (to be fair to them, humanity has spent hundreds of millions of dollars on various language-processing software and been unable to engineer anything that does the job effectively).
I think that usually when people say 'AI' they mean ChatGPT or LLMs in general. The reason that LLMs are big is because neural networks require a huge amount of data to train and the largest data repository that we have (the Internet) is text, images and video... so it makes sense that the first impressive models were trained on text and images/video.
The field of robotics hasn't had access to a large public dataset to train large models on, so we don't see large robotics models yet, but they're coming. You can already see it: compare robotic motion 4 years ago, using a human-engineered feedback control loop, where the motions are accurate but jerky and mechanical, with the same company's robot today, which uses a neural network trained on human kinematic data. That motion looks so natural that it breaks through the uncanny valley for me.
This is just one company generating data using human models (which is very expensive) but this is the kind of thing that will be ubiquitous and cheap given enough time.
This isn't to mention AlphaFold, which learned to predict protein folding better than anything human-engineered. Then, using a diffusion model (the same kind used to make pictures of shrimp Jesus), another group was able to generate RNA that would manufacture novel proteins fitting a specific receptor. Proteins matter because essentially every medication we use has to interact with a protein-based receptor, and the ability to create, visualize, and test custom proteins, together with the ability to write arbitrary mRNA (see the mRNA COVID vaccine), is huge for computational protein design (the field behind the current AIDS vaccine candidates).
LLMs and the capitalist bubble surrounding them are certainly an important topic, but framing it as being 'against AI' creates the impression that AI technology has nothing positive to offer. That reduces the number of people who study the topic or major in it in college, so in 10 years we'll have fewer machine-learning specialists than countries that aren't drowning in this 'AI bad' meme.