this post was submitted on 09 Dec 2023
239 points (98.8% liked)

Linux

At Open Source Summit Japan, Linux and Git creator Linus Torvalds talked about Rust in Linux, Linux maintainer fatigue, and AI's future role in Linux and open-source development.

thatsnothowyoudoit@lemmy.ca 56 points 11 months ago (last edited 11 months ago)

You’re conferring a level of agency where none exists.

It appears to “understand.” It appears to be “knowledgeable.”

But LLMs do neither of those things.

Take this note from an OpenAI dev:

It’s that these models have leveraged so much data they’ve been able to map out relationships between words (or images) in such a way as to be able to generate what seem like new versions of those things.

I grant you that an LLM has more base-level knowledge than any one human, but again this is thanks to a terrifyingly large dataset and a design that lets it access that data reasonably reliably.

But it is still a prediction model. It just has more context, a better design, and (most importantly) more data, which let it make predictions at a level never seen before.

If you’ve ever had a chance to play with a model at a level where you can control some of its basic parameters, it offers a glimpse into just how much of a prediction machine it is.
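
To make the “prediction machine” point concrete, here’s a minimal, self-contained sketch (not any real model’s code; the vocabulary, scores, and temperature values are invented for illustration) of how one basic parameter, temperature, reshapes the distribution a model samples its next token from:

```python
import numpy as np

# Toy illustration: a language model's final step is picking the next token
# from a probability distribution over its vocabulary. "Temperature" rescales
# the raw scores (logits) before that pick. Vocabulary and logits here are
# made up purely to show the effect.
vocab = ["cat", "dog", "spaceship", "teacup", "nebula"]
logits = np.array([4.0, 3.5, 1.0, 0.5, 0.2])  # pretend model scores

def sample_next_token(logits, temperature, rng):
    scaled = logits / temperature          # low temp sharpens, high temp flattens
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

rng = np.random.default_rng(0)
for temperature in (0.2, 1.0, 2.0):
    picks = [sample_next_token(logits, temperature, rng) for _ in range(10)]
    print(f"T={temperature}: {picks}")

# At T=0.2 the "likely" tokens dominate; at T=2.0 the long tail shows up.
# Same prediction machinery, just with the filter loosened.
```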

My favourite game for a while was to give Midjourney a wildly vague prompt but crank the chaos up to 100 (literally the --chaos flag at its highest value) to see what kind of wild connections exist but are being filtered out during “normal” use.

The same goes for the GPT-3.5 API in the “early days”: you could request multiple versions of the same response and see the sausage being made, to a very small degree.
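
A rough sketch of that, using the current openai Python client (the “early days” behaviour described above used the older pre-1.0 client, and the model name, prompt, n=3, and temperature here are arbitrary choices, not anything from the comment):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask for several candidate completions of the same prompt (n=3) at a
# fairly high temperature, then compare them side by side.
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Describe a city that doesn't exist."}],
    n=3,
    temperature=1.2,
)

for i, choice in enumerate(resp.choices):
    print(f"--- candidate {i} ---")
    print(choice.message.content)
```

Seeing three different “confident” answers to the same question makes it easier to read each one as a sample from a distribution rather than a considered opinion.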

It doesn’t take away from the sense of magic in using these tools. It just helps frame what’s going on under the hood.
