[–] Excrubulent@slrpnk.net -2 points 3 weeks ago* (last edited 3 weeks ago)

Why do they have to "WANT" that? Ignoring the fact that they literally said they were happy it was changed back, why does that matter to the criticism? If it's true, it's true, and the fact that corporations are the ones in a position to habitually make terrible decisions about FOSS is a big problem. It's valid to point out that it would be good to find a better way.

If anything it sounds like you "WANT" to ignore it.

[–] Excrubulent@slrpnk.net 10 points 3 weeks ago* (last edited 3 weeks ago)

The phrase "synthesised expert knowledge" is the problem here, because apparently you don't understand that this machine has no meaningful ability to synthesise anything. It has zero fidelity.

You're not exposing people to expert knowledge, you're exposing them to expert-sounding words that cannot be made accurate. Sometimes they're right by accident, but that is not the same thing as accuracy.

You mistook what the LLM is doing for synthesis, which is something loads of people will do, and that just lends more undue credibility to its bullshit.

[–] Excrubulent@slrpnk.net 1 points 1 month ago* (last edited 1 month ago)

Also, you'll talk to me after it's a solved problem? Why would I be interested in that? You have no interest in helping solve it now and I see no reason why you'd magically become useful after the fact.

[–] Excrubulent@slrpnk.net 1 points 1 month ago* (last edited 1 month ago)

If you can demonstrate that you even understood the concept of decentralised torrent-like hosting then I'll pay attention to whatever else you had to say.

[–] Excrubulent@slrpnk.net 2 points 1 month ago* (last edited 1 month ago) (3 children)

What are you talking about? I don't think you understood the concept of decentralised torrent-like hosting.

I'm currently talking to a peertube host about server costs, which I may be able to justify in order to host my own videos, plus a little extra to pitch in for others who can't justify the expense. Plenty of professional creators could easily justify it as an exit strategy or backup for youtube.

These conversations are happening, just not with you, presumably because you're being negative about it rather than actually doing something, so why would anyone bother to bring it up with you?

[–] Excrubulent@slrpnk.net 4 points 1 month ago (5 children)

Take out the phone part and allow users to host videos in a decentralised way on their home computers, and it's a genuinely good idea. I have a server running with plenty of storage and reasonable upload speed. I could easily dedicate a terabyte or so, as long as I'm not the sole host.
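A rough sketch of that "never the sole host" rule, with made-up peer names, pledges and replica count; this isn't how PeerTube actually allocates anything, just the shape of the idea:

```python
import random

# Hypothetical peers and their pledged free storage, in GB.
# Names and numbers are invented for illustration.
peers = {"alice": 1000, "bob": 500, "carol": 2000}

def assign_hosts(video_size_gb, replicas=2):
    """Pick `replicas` distinct peers with room for the video,
    so nobody ever ends up as the sole host."""
    candidates = [p for p, free in peers.items() if free >= video_size_gb]
    if len(candidates) < replicas:
        raise RuntimeError("not enough peers with free space")
    chosen = random.sample(candidates, replicas)
    for p in chosen:
        peers[p] -= video_size_gb  # reserve the pledged space
    return chosen

print(assign_hosts(2))  # e.g. ['carol', 'alice']
```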

It would be a hell of a lot cheaper than dedicated hosting. The only issue is the legal exposure when someone unknowingly hosts abuse material, which happens from time to time on every service like this; an individual could be done for distribution without the protection the big centralised services have. You'd just have to hope mods are on top of it.

Actually, something like a debrid service but for peertube might work. You can get huge amounts of storage cheaply because a lot of it is shared: you might ask them to host a huge torrent, but most torrents serve multiple users, so the cost is distributed. Peertube could work the same way if it were more mainstream.
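Back-of-the-envelope version of that cost sharing; the $5/TB/month figure is an assumption, not a real quote:

```python
# Made-up numbers: $5 per TB per month for bulk storage.
storage_cost_per_tb_month = 5.00
video_size_tb = 2 / 1000  # a 2 GB video

cost_of_one_copy = video_size_tb * storage_cost_per_tb_month
for users in (1, 5, 20):
    # A debrid-style pool keeps one copy and splits the bill.
    print(f"{users:>2} users sharing one copy: "
          f"${cost_of_one_copy / users:.4f}/month each")
```

The per-person cost falls linearly with the number of people sharing a copy, which is the whole trick.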

[–] Excrubulent@slrpnk.net 11 points 1 month ago

Almost like it does work on Firefox, but for some reason they don't want you using it. Honestly, it's so damn weird. Why do that? Is there some incentive for them?

[–] Excrubulent@slrpnk.net 3 points 1 month ago (1 children)

My apologies, I see that I have made a mistake. There are in fact 3 w's in the sentence "Howard likes strawberries."

[–] Excrubulent@slrpnk.net 18 points 1 month ago* (last edited 1 month ago)

It's an illusion. People think that because the language model puts words into sequences like we do, there must be something there. But we know for a fact that it is just word associations. It is fundamentally just predicting the most likely next word and generating it.
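If it helps to see the mechanism stripped bare, here's next-word prediction done with nothing but word-pair counts. A real LLM uses a giant neural network trained on billions of documents, not a lookup table, so this is only a sketch of the principle:

```python
from collections import Counter, defaultdict

# Tiny "training" corpus, obviously made up.
text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

# Count which word follows which.
following = defaultdict(Counter)
for a, b in zip(words, words[1:]):
    following[a][b] += 1

def next_word(word):
    """Most likely next word: association, not understanding."""
    return following[word].most_common(1)[0][0]

# Generate forwards, one word at a time, with no way to look back.
out = ["the"]
for _ in range(5):
    out.append(next_word(out[-1]))
print(" ".join(out))  # e.g. "the cat sat on the cat"
```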

If it helps, we have something akin to an LLM inside our brain, and it does the same limited task. Our brains have distinct centres that do all sorts of recognition and generative tasks, including images, sounds and language. We've made neural networks that do these tasks too, but the difference is that we have a unifying structure we call "consciousness" that is able to grasp context, and is able to loop the different centres back into one another to achieve all sorts of varied results.

So we get our internal LLM to sequence words, one word after another, then we loop those words back through the language recognition centre into the context engine, which checks whether the words match the message it intended to create by comparing them against its internal model of the world. If there's a mismatch, it might ask for different words until it sees the message it wanted to see. This can all happen very fast, and we're barely aware of it. Or, if it's feeling lazy today, it might just blurt out the first sentence that sprang to mind; it won't make sense, and we might call that a brain fart.
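Crudely, that loop looks like this. The "context engine" here is just a keyword check, which is a placeholder for illustration, not a claim about how the brain implements it:

```python
import random

def blurt(intent):
    """Stand-in for the word-sequencing centre: spits out a
    candidate sentence without checking it against anything."""
    candidates = [
        "Something about a meeting, maybe?",
        "I think the meeting moved.",
        f"The meeting moved to {intent['when']}.",
    ]
    return random.choice(candidates)

def matches_intent(sentence, intent):
    """Stand-in for the context engine: does the sentence carry
    the message we actually meant to send?"""
    return intent["when"] in sentence

intent = {"when": "Friday"}
sentence = blurt(intent)
while not matches_intent(sentence, intent):  # mismatch: ask for new words
    sentence = blurt(intent)
print(sentence)  # only a sentence that passes the check gets "said"
```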

Back in the 1800s, "automatic writing" took off, which was essentially people tapping into this internal LLM and just letting the words flow out without editing. It was nonsense, but it had this uncanny resemblance to human language, and people thought they were contacting ghosts, because obviously there has to be something there, right? But there isn't; it just sounds like people.

These LLMs only produce text forwards; they have no ability to create a sentence, then examine that sentence and see if it matches some internal model of the world. They have no capacity for context. That's why any question involving A inside B trips them up, because that is fundamentally a question about context. "How many w's are in the sentence 'Howard likes strawberries'?" is a question about context; that's why they screw it up.
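For contrast, anything that can actually examine the sentence gets it right trivially (and the right answer is 2, not 3):

```python
sentence = "Howard likes strawberries"
# Examine the string directly instead of predicting words about it.
print(sentence.lower().count("w"))  # 2: one in "Howard", one in "strawberries"
```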

I don't think you solve that without creating a real intelligence, because a context engine would necessarily be able to expand its own context arbitrarily. I think allowing an LLM to read its own words back and do some sort of check for fidelity might be one way to bootstrap a context engine into existence, because that check would require it to begin to build an internal model of the world. I suspect the processing power and insights required for that are beyond us for now.

[–] Excrubulent@slrpnk.net 3 points 1 month ago (3 children)

I'd be happy to help! There are 3 "w"s in the string "Howard likes strawberries".

[–] Excrubulent@slrpnk.net 5 points 1 month ago

(though some might consider this an anti-feature)

To be fair, not everyone would say that, and the only reason you would call it an "anti-feature" is if you had an accurate understanding of the issues.
