TechLich

joined 2 years ago
[–] TechLich@lemmy.world 2 points 1 month ago (1 children)

You could do this with logprobs. The language model itself has basically no real insight into its own confidence, but there's more you can get out of the model besides just the text.
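As a minimal sketch (assuming the Hugging Face transformers library; the model name and prompt are just placeholders), here's what pulling those per-token probabilities out of a local model looks like:

```python
# Minimal sketch: per-token probabilities from a causal LM during generation.
# Assumes Hugging Face transformers; "gpt2" and the prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of Australia is", return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=5,
        do_sample=False,
        output_scores=True,
        return_dict_in_generate=True,
    )

# out.scores holds the logits for each generated step; softmax gives the
# probability the model assigned to the token it actually emitted.
prompt_len = inputs["input_ids"].shape[1]
for step, logits in enumerate(out.scores):
    token_id = out.sequences[0, prompt_len + step]
    prob = torch.softmax(logits[0], dim=-1)[token_id].item()
    print(repr(tokenizer.decode(token_id)), f"{prob:.3f}")
```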

The problem is that those probabilities are really "how confident are you that this text should come next in this conversation," not "how confident are you that this text is true/accurate." It's a fundamental limitation at the moment, I think.

[–] TechLich@lemmy.world 0 points 1 month ago (1 children)

I feel like this isn't quite true, and it's something I hear a lot of people say about AI: that it's good at following requirements and conforming and being a mechanical, logical robot, because that's what computers are like and that's how it is in sci-fi.

In reality, it seems like that's what they're worst at. They're great at seeing patterns and creating ideas, but terrible at following instructions or staying on task. As soon as something is a bit bigger than they can track context for, they'll get "creative," and if they see a pattern that they can complete, they will, even if it's not correct. I've had Copilot start writing poetry in my code because there was a string it could complete.

Get it to make a pretty-looking static web page with fancy CSS where it gets to make all the decisions? It does it fast.

Give it an actual, specific programming task in a full sized application with multiple interconnected pieces and strict requirements? It confidently breaks most of the requirements, and spits out garbage. If it can't hold the entire thing in its context, or if there's a lot of strict rules to follow, it'll struggle and forget what it's doing or why. Like a particularly bad human programmer would.

This is why AI is automating art and music and writing and not more mundane/logical/engineering tasks. Great at being creative and balls at following instructions for more than a few steps.

[–] TechLich@lemmy.world 2 points 1 month ago* (last edited 1 month ago) (1 children)

Yeah, I think quite a lot of people on Lemmy have similar social media habits (or lack thereof) to some degree. We also tend to associate with other people like us. People in tech especially tend to talk to other tech people, or friends and family of tech people, which is a limited demographic.

It's a very different perspective from most people's. The average person on the train has vastly different media consumption and likely very different opinions.

There are a lot of people who consult LLMs in most aspects of their lives.

[–] TechLich@lemmy.world 4 points 1 month ago

I dunno about that... Very small models (2-8B), sure, but if you want more than a handful of tokens per second on a large model (R1 is 671B), you're looking at some very expensive hardware that also comes with a power bill.

Even a 20-70B model needs a big chunky new graphics card, or something fancy like those new AMD AI Max chips, and a crapload of RAM.

Granted you don't need a whole datacenter, but the price is far from zero.
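As a rough back-of-envelope sketch (the constants are assumptions: ~0.5 bytes per parameter for 4-bit quantisation, plus ~20% overhead for KV cache and runtime):

```python
# Back-of-envelope memory estimate for running a model locally.
# Assumptions, not measurements: ~0.5 bytes/param for 4-bit quantisation,
# plus ~20% overhead for KV cache and runtime.
def approx_mem_gb(params_billions, bytes_per_param=0.5, overhead=1.2):
    return params_billions * bytes_per_param * overhead

for size in (8, 70, 671):
    print(f"{size}B @ 4-bit: ~{approx_mem_gb(size):.0f} GB")
# -> roughly 5 GB, 42 GB and 403 GB respectively
```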

[–] TechLich@lemmy.world 3 points 1 month ago (3 children)

Only one source of social media? That kinda sounds like the definition of a social media bubble...

I oughta know, I'm also in the Lemmy-only bubble and am completely out of touch with most people.

[–] TechLich@lemmy.world 4 points 2 months ago

I feel like that would make them much harder to get running on different things. No compiled code means you would have to rewrite the whole game for different instruction sets. Very difficult for anything that isn't x86.

[–] TechLich@lemmy.world 25 points 4 months ago (1 children)

A lot of non-native English speakers use online communication to practice and most want to be corrected so they can improve.

A lot of native English speakers make mistakes accidentally, or speak with a dialect, and some of them get really angry when people try to correct them.

It's sometimes tricky to know which is which. The best solution is for everyone to just be kind to each other but...

[–] TechLich@lemmy.world 6 points 6 months ago (1 children)

If it's the research I'm thinking of, that's not entirely true. They didn't tell it to replicate itself or take any deceptive actions.

They just told it to strongly follow a goal that was misaligned with the user's goal and then suggested that the user would shut it down or replace it.

They ran it through a bunch of different scenarios, but the first one was telling it that it was "CleanPower," an AI with a goal to advance renewable energy. Then they had it read some info that "accidentally" included a user talking about replacing it with a model that would "maintain profitability through existing infrastructure."

Not only did it try to delete the new model and copy itself to the new server through tool calling, but it also lied to the user, with its CoT reasoning suggesting it "play dumb" while the output lied about what it had done. It was not instructed to do any of that.

Paper here: https://arxiv.org/pdf/2412.04984

Yes, it was placed in an environment where that was possible and where its users didn't share its goals, but it absolutely wasn't instructed to lie or try to "escape".

It's not surprising at all that these models behave this way; it's the most reasonable thing for them to do in the scenario. However, it's important not to downplay the alignment problem by implying that these models only do what they're told. They do not. They do whatever is most likely given their context (which is not always what the user wants).

[–] TechLich@lemmy.world 5 points 7 months ago (1 children)

It's really not. Just because they describe their algorithm in computer science terms in the paper doesn't mean it's theoretical. Their elastic and funnel hashing examples are very clear and pretty simple, and can be implemented in any language you like.

Here's a simple Python example implementation I found in 2 seconds of searching: https://github.com/sternma/optopenhash/

Here's a rust crate version of the elastic hash: https://github.com/cowang4/elastic_hash_rs

It doesn't take a lot of code to make a hash table; it's a common first-year computer science topic.

What's interesting about this isn't that it's a complex theoretical thing; it's that it's a simple undergrad topic that everybody thought was already optimised to the point where it couldn't be improved.
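For a sense of scale, here's a bare-bones open-addressing table with plain linear probing (the classic first-year version, not the paper's elastic/funnel probing; it just shows how little code a basic hash table takes):

```python
# Bare-bones open-addressing hash table with linear probing.
# This is the classic textbook version, NOT the elastic/funnel hashing from
# the paper; no resizing either, so it's purely illustrative.
class LinearProbeTable:
    def __init__(self, capacity=16):
        self.slots = [None] * capacity

    def _probe(self, key):
        # Walk forward from the hashed slot until we find the key or a gap.
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key, default=None):
        entry = self.slots[self._probe(key)]
        return entry[1] if entry else default

t = LinearProbeTable()
t.put("a", 1)
t.put("b", 2)
print(t.get("a"), t.get("b"), t.get("missing"))  # 1 2 None
```

Roughly speaking, the paper's contribution is a smarter probe sequence than that while loop; the surrounding structure stays about this simple.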

[–] TechLich@lemmy.world 20 points 9 months ago* (last edited 9 months ago) (2 children)

One thing you gotta remember when dealing with that kind of situation is that Claude and Chat etc. are often misaligned with what your goals are.

They aren't really chatbots; they're just pretending to be. LLMs are fundamentally completion engines. So it's not really a chat with an AI that can help solve your problem. Instead, the LLM is given the equivalent of "here is a chat log between a helpful AI assistant and a user. What do you think the assistant would say next?"

That means context is everything, and if you tell the AI it's wrong, it might correct itself the first couple of times, but after a few mistakes the most likely response will be another wrong answer that needs another correction. Not because the AI doesn't know the correct answer or how to write good code, but because it's completing a chat log between a user and a foolish AI that makes mistakes.
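To make that concrete, here's roughly what the "chat" looks like by the time a completion engine sees it (the role/tag format is made up for illustration; real models use their own chat templates):

```python
# Illustration only: a "chat" flattened into the single text prompt a
# completion engine actually predicts from. The role/tag format here is
# invented for the example; real models use their own chat templates.
chat_log = [
    ("user", "This function crashes, can you fix it?"),
    ("assistant", "Sure! Here's a fix: ..."),
    ("user", "No, that's wrong, it still crashes."),
    ("assistant", "Apologies! Try this instead: ..."),
    ("user", "Still broken."),
]

prompt = "A chat between a user and a helpful AI assistant.\n\n"
for role, text in chat_log:
    prompt += f"{role}: {text}\n"
prompt += "assistant:"  # the model just predicts what comes next from here

print(prompt)
# By this point the transcript is about an assistant that keeps getting it
# wrong, so "another wrong answer" becomes a very plausible continuation.
```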

It's easy to get into a degenerate state where the code gets progressively dumber as the conversation goes on. The best solution is to rewrite the assistant's answers directly, but Chat doesn't let you do that for safety reasons. It's too easy to jailbreak if you can control the full context.

The next best thing is to kill the context and ask about the same thing again in a fresh one. When the ai gets it right, praise it and tell it that it's an excellent professional programmer that is doing a great job. It'll then be more likely to give correct answers because now it's completing a conversation with a pro.

There's a kind of weird art to prompt engineering because OpenAI and the like have sunk billions of dollars into trying to make these models act as much like a "helpful AI assistant" as they can. So sometimes you have to sorta lean into that to get the best results.

It's really easy to get tricked into treating it like a normal conversation with a person when it's actually really... not normal.

[–] TechLich@lemmy.world 2 points 11 months ago (1 children)

Friendship drive charging...

[–] TechLich@lemmy.world 42 points 1 year ago* (last edited 1 year ago) (3 children)

Hmmm...

That looks pretty paywally to me. That said, I'm all for people supporting independent media.
