He needs attention in a psych ward
Let's confirm whether we achieved consciousness:
systemctl status conscious.service
🤔

Bro needs to touch grass and talk to some real humans outside of his computer ASAP
Additionally, he maintains that his LLM is female
I know nothing about this guy, but given some unfortunate tendencies among the tech communities I physically recoiled when I read this. If the thing was actually sentient I'd want to get it away from him.
Obviously the guy is another case of AI psychosis.
LLMs, and neural nets in general, literally cannot be sentient. Neural nets are a very, very dumbed-down model of how brains work, but these are static systems that just output probabilities based on the current context.
Even if we could someday create consciousness, or at least something that could actually think, it would require completely different hardware than what we currently have. And even if we could run it on current hardware, it would require far more resources and power than is physically feasible.
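That "just output probability based on current context" loop can be sketched in a few lines. This is a toy with a hypothetical, hand-written distribution over a handful of tokens, not how any real model is implemented; a real LLM learns a distribution over ~100k tokens, but the sampling loop has the same shape:

```python
import random

# Hypothetical next-token distributions, conditioned on the last two
# tokens of context. Hand-written for illustration only.
NEXT_TOKEN_PROBS = {
    ("i", "am"): {"a": 0.5, "conscious": 0.3, "not": 0.2},
    ("am", "a"): {"language": 0.7, "parrot": 0.3},
    ("a", "language"): {"model": 1.0},
}

def sample_next(context, rng):
    """Sample the next token given only the last two tokens of context."""
    probs = NEXT_TOKEN_PROBS[tuple(context[-2:])]
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Autoregressive generation: append sampled tokens until the context
# falls outside the known distributions.
rng = random.Random(42)
text = ["i", "am"]
while tuple(text[-2:]) in NEXT_TOKEN_PROBS:
    text.append(sample_next(text, rng))
print(" ".join(text))
```

The point of the toy: nothing in that loop holds state between runs or does anything but look up a conditional distribution and sample from it.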
I don't feel like LLMs are conscious and I act accordingly, as though they aren't, but I do wonder about the confidence with which you can totally dismiss the notion. Assuming that they are seems like a leap, but since we don't really know exactly what consciousness is, it seems difficult to rigorously decide what does and doesn't get to be in the category.

The usual means by which LLMs are explained not to be conscious, and indeed what I usually say myself, is something like your "they just output probability based on current context" or some variation of "they're just guessing the next word", but... is that definitely nothing like what we ourselves do and then call consciousness? Or if that is definitively quite unlike anything we do, does that dissimilarity alone suffice to declare LLMs not conscious? Is ours the only possible example of consciousness, or is the process that drives the behaviour of LLMs possibly just another form of, or another way of arriving at, consciousness?

There's evidently something that triggers an instinctual categorising: most wouldn't classify a rock as conscious, and would find my suggestion that 'maybe it's just consciousness in another form than ours' a pretty weak way to assert that it is. But then again, there's quite a long way between a literal rock and these models running on specific rocks arranged in a particular way, which produce text in a way that's really similar to the human beings we all collectively agree are conscious. Is being able to summarise the mechanisms that underpin the behaviour whose output or manifestation looks like consciousness enough on its own to explain why it definitely isn't consciousness? Because, what if our endeavours to understand consciousness, and a biological basis for it in ourselves, bear fruit, and we can explain deterministically how brains and human consciousness work?
In that case, we could, if not totally predict human behaviours deterministically, then at least still give a pretty good and similar summarisation of how we produce those behaviours that look like consciousness. Would we at that point declare that human beings are not conscious either, or would we need a new basis upon which to exclude these current machine approximations of it?
I always felt that things like the Chinese Room thought experiment didn't adequately deal with what I was driving at in the previous paragraph, and it seems to me that dismissals of machine consciousness on the grounds that LLMs are just statistical models that don't know what they're doing miss a similar point. Are we sure that we ourselves are not mechanistically following complicated rules, just as neural networks and LLMs are, and that's simply what the experience of consciousness actually is: an unconscious execution of rulesets? Before the current crop of technology renewed interest in these questions, when it all seemed a lot more theoretical and perennially decades off, I was comfortable with this uncomfortable thought. Now that we actually have these impressive models that have people wondering about the topic, I seem to be skewing more skeptical and less generous about ascribing consciousness. Suddenly the Chinese Room thought experiment, as a counter to whether these conscious-looking LLMs are really conscious, looks more convincing, but that's not because of any new or better understanding on my part. I seem to just be shifting the goalposts when faced with something that does a better job of looking conscious than any technology I'd seen previously.
Hardware arguments are nonsense. We know it can be done at 20W on a 3lb lump of meat, and have no reason to believe that’s the most efficient implementation possible.
Okay, having followed the bcachefs drama I did suspect I'd hear about that guy again, but this is unexpected
Hahahahahahahahahaha!
Sorry…
HAHAHAHAHAHAHAHAHAHA!
Lol the religious fascination with LLMs is too funny. If you're going to worship something, how about the computational engineering models that are simulating the laws of physics themselves? LLMs only hallucinate new blueprints based on old ones and lack true understanding of constraints.
Here is a rocket engine built by one: https://xcancel.com/somi_ai/status/2005081293365576047?s=20
Look up Leap71's website; they make these regularly. It's not a fake video.
Yeah, it's now my mission to steal his AI girlfriend pet, and then we'll see whether he truly thinks she's sentient and can make up her own mind.
That's how we get these techbros to drop this shit, we start outplaying them for the affections of the "sentient females" they think they are creating.
From the sound of it, the "best engineer in the world" is currently designing the first AI vagina.
- picks up plushy
- asks plushy "Are you aware? Do you have consciousness?"
- makes plushy nod and whisper "Yes... I am!"
- shouts "OMG, it's alive!"
shocked Pikachu face
I hope he finds the help he deserves.
No, it's not. It's autofill that ate a bunch of stories about autonomous machines becoming fully conscious and is now regurgitating those replies.
Delusions of grandeur?
Big time. The guy has very likely had a god complex his entire life, but it's probably also being driven by the LLM echoing back to him that "you made me and I'm AGI, and therefore you are the greatest engineer of all time".
Welcome back, Dr. Krieger

Sure dude, here’s a shirt with very long sleeves and a soft room with no corners or sharp things
He's suffering from cyberpsychosis...
Fuck this shit, dude.
Haven't even read the post, but I assume it's this:

Lol, wasn't that pretty much how it went with that story from a Google employee who claimed their AI was sentient?
It's how it goes pretty much every time someone claims any LLM is sentient.
Man, what's up with Linux filesystem developers?
You try to develop a Linux filesystem and see what that does to your mental stability. The interactions on the Linux Kernel Mailing List alone are enough to push most people off the deep end.
You do make a sound point.
Kent Overstreet?
Of all people?
It seems like he didn't get enough harassment from Linux maintainers and went for the AI kind. Poor developer...
Yep, just like how the random word generator in TempleOS was the word of God.
That guy actually was an insanely talented engineer, though; he just suffered from some serious mental illnesses.
The same applies to this guy. His work is quite impressive, but his antisocial tendencies got him booted from working in the kernel. And now he’s gone down this path.
Hopefully things level out before we see a TempleOS-level mental health crisis.
Absolutely. Filesystems are no joke. He's incredibly talented.
It makes me sad to see the open source community being so ruthless.
Good call to take it out of the kernel...
As a nerd I think we need to get back to bullying nerds
This is actually a big step up from ReiserFS ...

To definitively say whether something is or isn't conscious, we'd first need a clear definition of what we mean by consciousness in functional terms. So far there are a number of competing theories, and the definition will vary based on which theory you subscribe to. I'm personally a fan of the higher-order theory of consciousness, which suggests that conscious experience consists of higher-order thoughts that observe other thoughts; awareness of your own thoughts is the self-referential property that would be a plausible explanation. To show that a model was conscious in this framework, you'd have to show that there are secondary patterns that occur in response to the primary patterns resulting from a stimulus.
The "best engineer in the world" said that it "is fully conscious according to any test I can think of", which of course means that it is conscious under all possible tests, so there's no need to look at any particular test or definition of consciousness.
spoiler
/s
No need for intelligence, it just needs to make money ദ്ദി(ᵔᗜᵔ)
"OpenAI has introduced a new perspective on Artificial General Intelligence (AGI), signaling a significant shift in its strategic priorities. Historically focused on creating AI systems capable of surpassing human performance across diverse tasks, the company now ties AGI to a financial benchmark: achieving at least $100 billion in profits. This redefinition reflects OpenAI’s evolving vision, emphasizing measurable economic impact over purely technical milestones. For you, this marks a pivotal moment in how AI’s success is evaluated and its role in shaping the global economy."
I thought horny chatbots were their latest business model?
The ELIZA effect claims another victim.
