vrighter

joined 1 year ago
[–] vrighter@discuss.tchncs.de 3 points 3 months ago

The security of a blockchain is directly tied to the number of users actively participating. You need to incentivize users to keep participating indefinitely, and you do that by rewarding them with something they value. As the number of users dwindles, so does the network's security. So how can a blockchain work for anything that isn't a cryptocurrency? If it's not a currency, it can't be used to motivate people to participate in the network. After all, you're spending real money on the electricity to mine. If there's nothing to pay you back with, that's just money out of your own pocket. Who the hell would accept such a deal?

[–] vrighter@discuss.tchncs.de 41 points 3 months ago (3 children)

The way I view it is that to eliminate that one con, you have to willingly give up on all the pros. Which is a ridiculous proposition in any scenario.

[–] vrighter@discuss.tchncs.de 2 points 3 months ago (1 children)

Ansible claims to be lots of things it's not. It's supposed to be idempotent, but it isn't: you can execute arbitrary scripts. You supposedly don't need an agent on the machines... but it might just decide to stop supporting your version of Python one day. It's okay-ish for setting up some machines, but it absolutely sucks for maintaining them.

[–] vrighter@discuss.tchncs.de 6 points 3 months ago

The loaded die at the end, the one that chooses among the LLM's candidate answers, happened to land on a good word.

[–] vrighter@discuss.tchncs.de 5 points 3 months ago

True, I avoid games on my phone because the touchscreen is just the worst possible interface for most games.

[–] vrighter@discuss.tchncs.de 11 points 3 months ago

I hadn't played the original. I loved part 1 of the remake. By the time part 2 came out, I realized I barely remembered anything from the first game, and I haven't really got the motivation to even open the box anymore.

[–] vrighter@discuss.tchncs.de 69 points 3 months ago* (last edited 3 months ago) (2 children)

It's only 99.9% accurate because they haven't released it yet. As soon as they do, it will quickly fall to the usual 50%, because this type of thing is exactly what's needed to develop the tech to defeat it.

[–] vrighter@discuss.tchncs.de 21 points 3 months ago

It's called an antenna. That's its job.

[–] vrighter@discuss.tchncs.de 8 points 3 months ago (1 children)

This will not help OP in any way with what they need, which is to recover data that's already lost.

So in other words, thank you, Captain Hindsight!

[–] vrighter@discuss.tchncs.de 1 points 4 months ago* (last edited 4 months ago)

1 - A Markov chain only takes the previous tokens as input.

2 - It uses a function (in the mathematical sense: same input always yields the same output, completely stateless) to generate a set of probabilities for what the next token might be.

3 - The most probable token is picked, or randomness (temperature) is injected at this step so that a different token is occasionally chosen.

An LLM's internals, the part that's trained, are literally the function used in step 2. You could implement this function a number of ways: you could build a huge lookup table and consult it, or you could generate the table somehow. You could train a big neural network that takes the previous tokens as input and outputs token probabilities, then enumerate its outputs for every possible permutation of inputs, and there's your table. That would take too much time and space, so we just run the function on demand instead; the result is exactly the same. The network can be very smart and notice correlations, but ultimately it defines a (virtual) huge static table, and evaluating it is a completely deterministic process. A trained neural network is still a (huge) mathematical function. So the big network they spend all those resources training is basically the function used in step 2.

Step 3 is the cause of hallucinations. It's the only nondeterministic part, and it's not part of the LLM itself in any way. No matter how much smarter the neural network gets, the hallucinations are still introduced in step 3. So no, they won't be solving the LLM hallucination problem anytime soon.
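The three steps can be sketched in a few lines of Python. This is a toy sketch: the logit table and token names are invented for illustration, and a real LLM computes the step-2 probabilities with a trained network rather than a literal table, but the deterministic/nondeterministic split is the same.

```python
import math
import random

# Hypothetical toy "model": a fixed table mapping a context to
# next-token logits. Deterministic: same input, same output.
LOGITS = {
    ("the", "cat"): {"sat": 2.0, "ran": 1.0, "barked": -1.0},
}

def next_token_probs(context):
    """Step 2: a pure function from context to a probability distribution."""
    logits = LOGITS[tuple(context)]
    z = sum(math.exp(v) for v in logits.values())
    return {tok: math.exp(v) / z for tok, v in logits.items()}

def pick_token(probs, temperature=0.0, rng=random):
    """Step 3: greedy (deterministic) at temperature 0, sampled otherwise."""
    if temperature == 0.0:
        return max(probs, key=probs.get)  # always the same token
    # reshape the distribution by temperature, then sample from it
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok

probs = next_token_probs(["the", "cat"])
print(pick_token(probs))                                    # greedy: always "sat"
print(pick_token(probs, temperature=1.0, rng=random.Random(0)))  # sampled
```

Running steps 1 and 2 twice with the same context gives bit-identical probabilities every time; only `pick_token` with a nonzero temperature introduces randomness.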

[–] vrighter@discuss.tchncs.de 3 points 4 months ago* (last edited 4 months ago) (2 children)

And that is exactly how a predictive-text algorithm works:

  • Some tokens go in.

  • They are processed by a deterministic, static statistical model, and a set of probabilities comes out (always the same set for the same input; deterministic, remember?).

  • Pick the word with the highest probability, append it to your initial string, and start over.

  • If you want variety, add some randomness and don't always pick the most probable next token.

Coincidentally, this is exactly how LLMs work. An LLM is a big Markov chain, but with a novel lossy compression algorithm applied to its state transition table. The last point is also the reason why anyone who says they can fix LLM hallucinations is lying.
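To make the "state transition table" framing concrete, here's a toy Markov chain with an explicit table. The tokens and probabilities are made up for illustration; in an LLM, the trained function plays the role of this table, computing each row on demand from the context instead of storing the whole (astronomically large) thing.

```python
import random

# Hypothetical explicit transition table: previous token -> distribution
# over the next token.
TABLE = {
    "once": {"upon": 1.0},
    "upon": {"a": 1.0},
    "a": {"time": 0.9, "midnight": 0.1},
}

def generate(start, steps, rng):
    """Walk the chain: look up the current state, sample the next token."""
    out = [start]
    for _ in range(steps):
        dist = TABLE.get(out[-1])
        if dist is None:  # no known transitions from this state
            break
        tokens, weights = zip(*dist.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return out

print(generate("once", 3, random.Random(0)))
```

The table itself is static and deterministic; the only randomness is in the sampling step of the walk, which mirrors the step-3 point above.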
