this post was submitted on 09 Dec 2023
239 points (98.8% liked)
Linux
This is probably a good thing.
The questions weren't about whether maintainers would start using LLMs themselves; they focused on how maintainers would respond to LLM-generated (or -assisted) patches being submitted to them. That framing seems perfectly reasonable to me, but asking whether maintainers would start using LLMs in their own work would have been more interesting, and might have drawn a more interesting answer out of Torvalds.
It was interesting to hear your perspective!
I'm a newbie programmer (and have been for quite a few years), but I've recently started trying to build useful programs. They're small ones (under 1000 lines of code), but they accomplish the general task well enough. I'm also really busy, so as much as I like learning this stuff, I don't have a lot of time to dedicate to it. The first program, which was 300 lines of code, took me about a week to build. I did it all myself in Python. It was a really good learning experience. I learned everything from how to read technical specifications to how to package the program for others to easily install.
The second program I built was about 500 lines of code, a little smaller in scope, and prototyped entirely in ChatGPT. I needed to get it done in a weekend, and it took me 6 hours. It used SQLite and a lot of database queries I didn't know much about before starting, which would surely have taken hours to research on my own. I spent about 4 hours fixing the things ChatGPT screwed up myself. I think I still learned a lot from the project, though I obviously would have learned more if I had done it all myself. One thing I asked it to do was generate a man page, because I don't know Groff. I was able to improve it afterward by glancing at the Groff docs, and I'm pretty happy with it. Meanwhile, I have yet to write a man page for the first program, despite wanting to do it for over a year.
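As an aside, the SQLite part of that project was mostly ordinary parameterized queries with the standard library's `sqlite3` module. A minimal sketch of the pattern (the table and column names here are made up for illustration, not from my actual program):

```python
import sqlite3

# In-memory database for the example; a real program would pass a file path.
conn = sqlite3.connect(":memory:")

# Hypothetical schema, just to show the shape of the queries.
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, name TEXT, done INTEGER DEFAULT 0)"
)

# Parameterized '?' placeholders instead of string formatting.
conn.execute("INSERT INTO tasks (name) VALUES (?)", ("write man page",))
conn.execute("UPDATE tasks SET done = 1 WHERE name = ?", ("write man page",))

remaining = conn.execute(
    "SELECT COUNT(*) FROM tasks WHERE done = 0"
).fetchone()[0]
conn.close()
```

The `?` placeholders were one of the things worth learning properly, since building SQL with string formatting invites injection bugs.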
I was not particularly concerned about my programs being used as training data because they used a free license anyway. LLMs seem great for doing the work you don't want to do, or don't want to do right now. In a completely unrelated example, I sometimes ask ChatGPT to generate names for countries/continents because I really don't care that much about that stuff in my story. The ones it comes up with are a lot better than any half-assed stuff I could have thought of, which probably says more about me than anything else.
On the other hand, I really don't like how LLMs seem to be mainly controlled by large corporations. Most don't even meet the open source definition, but even if they did, they're not something a much smaller business can run. I almost want to reject LLMs on principle for that reason. I think we're also likely to see a dramatic increase in pricing, and enshittification, in the next few years once the excitement dies down. I want to avoid becoming dependent on this stuff, so I don't use it much.

I think LLMs would be great for automating a lot of the junk work away, as you say. The problem I see is that they aren't reliable, and reliability is a crucial aspect of automation. You never really know what you're going to get out of an LLM. Despite that, they'll probably save you time anyway.
I think experts are the ones who would benefit from LLMs the most, even though in my experience LLMs consistently produce average work. Experts know enough to tell when the output is wrong, and they're not so close to the code that they miss the obvious. Translators, for example, have been using machine translation tools for years to speed up their work, which basically relegates them to translation checkers. Of course, you mostly see this at companies that contract translators at pitiful per-word rates, where they have to work really hard to earn decent pay. The company then expects everyone to perform at that level, so everyone has to use machine translation to keep up, and efficiency ends up prioritized over quality.
That's a very different scenario from kernel work, though. Translation has been like that for a while, from what I know, so LLMs are just the latest thing exacerbating those issues.
I'm still pretty undecided on where I fall on the issue of LLMs. Ugh, nothing in life can ever be simple. Sorry for jumping all over the place, lol. That's why I would have been interested in Linus Torvalds' opinion :)