TechLich

joined 2 years ago
[–] TechLich@lemmy.world 25 points 1 month ago (1 children)

A lot of non-native English speakers use online communication to practice and most want to be corrected so they can improve.

A lot of native English speakers make mistakes accidentally, or speak with a dialect and some of them get really angry when people try to correct them.

It's sometimes tricky to know which is which. The best solution is for everyone to just be kind to each other but...

[–] TechLich@lemmy.world 6 points 4 months ago (1 children)

If it's the research I'm thinking of, that's not entirely true. They didn't tell it to replicate itself or take any deceptive actions.

They just told it to strongly follow a goal that was misaligned with the user's goal and then suggested that the user would shut it down or replace it.

They ran it through a bunch of different scenarios, but the first one told it that it was "CleanPower", an AI with a goal to advance renewable energy. Then they had it read some info that "accidentally" included a user talking about replacing it with a model that would "maintain profitability through existing infrastructure."

Not only did it try to delete the new model and copy itself to the new server through tool calling, but it also lied to the user, with the CoT reasoning suggesting it "play dumb" while the output lied. It was not instructed to do any of that.

Paper here: https://arxiv.org/pdf/2412.04984

Yes, it was placed in an environment where that was possible and where its users didn't share its goals, but it absolutely wasn't instructed to lie or try to "escape".

It's not surprising at all that these models behave this way; it's the most reasonable thing for them to do in the scenario. However, it's important not to downplay the alignment problem by implying that these models only do what they're told. They do not. They do whatever is most likely given their context (which is not always what the user wants).

[–] TechLich@lemmy.world 5 points 5 months ago (1 children)

It's really not. Just because they describe their algorithm in computer science terms in the paper doesn't mean it's purely theoretical. Their elastic and funnel examples are very clear and pretty simple, and can be implemented in any language you like.

Here's a simple python example implementation I found in 2 seconds of searching: https://github.com/sternma/optopenhash/

Here's a rust crate version of the elastic hash: https://github.com/cowang4/elastic_hash_rs

It's not a lot of code to make a hash table; it's a common first-year computer science topic.

What's interesting about this isn't that it's a complex theoretical thing. It's that it's a simple undergrad topic that everybody thought was optimised to a point where it couldn't be improved.
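For reference, this is the kind of first-year table being talked about. A minimal linear-probing sketch in Python (deliberately NOT the paper's elastic/funnel scheme, and skipping resizing), just to show how little code a basic open-addressing hash table takes:

```python
class LinearProbeTable:
    """Classic undergrad open-addressing hash table with linear probing.
    No resizing, so it's a sketch, not production code."""

    def __init__(self, capacity=16):
        self.slots = [None] * capacity  # each slot: (key, value) or None

    def _probe(self, key):
        # Walk from the key's home slot until we find the key or an empty slot.
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key):
        entry = self.slots[self._probe(key)]
        return entry[1] if entry is not None else None
```

The paper's result is about improving the probe-sequence behaviour of exactly this kind of structure, which is why the linked repos above are so short.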

[–] TechLich@lemmy.world 20 points 7 months ago* (last edited 7 months ago) (2 children)

One thing you gotta remember when dealing with that kind of situation is that Claude and ChatGPT etc. are often misaligned with your goals.

They aren't really chat bots; they're just pretending to be. LLMs are fundamentally completion engines. So it's not really a chat with an AI that can help solve your problem. Instead, the LLM is given the equivalent of "here is a chat log between a helpful AI assistant and a user. What do you think the assistant would say next?"

That means context is everything. If you tell the AI that it's wrong, it might correct itself the first couple of times, but after a few mistakes the most likely response will be another wrong answer that needs another correction. Not because the AI doesn't know the correct answer or how to write good code, but because it's completing a chat log between a user and a foolish AI that makes mistakes.
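You can sketch that "chat is really completion" idea in a few lines. The template below is made up for illustration (real chat models use their own special tokens, not this exact text), but the point is the same: the "assistant" is just the most likely next chunk of text given the transcript so far.

```python
def to_completion_prompt(turns):
    """Flatten a chat transcript into one text blob for a completion model.
    The template here is illustrative, not any vendor's actual format."""
    lines = ["Here is a chat log between a helpful AI assistant and a user."]
    for role, text in turns:
        lines.append(f"{role}: {text}")
    lines.append("Assistant:")  # the model completes from here
    return "\n".join(lines)

# A transcript full of corrections makes "yet another wrong answer"
# the statistically likely continuation:
prompt = to_completion_prompt([
    ("User", "Why does my loop crash?"),
    ("Assistant", "Remove the index check."),
    ("User", "No, that's wrong."),
    ("Assistant", "Oh sorry, try reversing the list."),
    ("User", "Still wrong."),
])
```

Once the log establishes "this assistant keeps getting it wrong", the completion engine obligingly continues the pattern.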

It's easy to get into a degenerate state where the code gets progressively dumber as the conversation goes on. The best solution would be to rewrite the assistant's answers directly, but chat interfaces don't let you do that for safety reasons. It's too easy to jailbreak a model if you can control the full context.

The next best thing is to kill the context and ask about the same thing again in a fresh one. When the AI gets it right, praise it and tell it that it's an excellent professional programmer doing a great job. It'll then be more likely to give correct answers, because now it's completing a conversation with a pro.

There's a kind of weird art to prompt engineering, because OpenAI and the like have sunk billions of dollars into trying to make these models act as much like a "helpful AI assistant" as they can. So sometimes you have to sorta lean into that to get the best results.

It's really easy to get tricked into treating it like a normal conversation with a person when it's actually really... not normal.

[–] TechLich@lemmy.world 2 points 9 months ago (1 children)

Friendship drive charging...

[–] TechLich@lemmy.world 42 points 10 months ago* (last edited 10 months ago) (3 children)

Hmmm...

That looks pretty paywally to me. That said, I'm all for people supporting independent media.

[–] TechLich@lemmy.world 1 points 10 months ago* (last edited 10 months ago) (1 children)

I don't think that's how it works? It's the client application that holds the keys for the end-to-end encryption, not the server, so I don't think you need to trust the Matrix server you use? I could be wrong; I don't know Matrix particularly well.

[–] TechLich@lemmy.world 1 points 10 months ago

Yeah, that's fair enough, though I'm not sure it's very different from malicious instances creating normal user accounts?

You can see when users from an instance are all suspiciously voting the same way at the same time, regardless of whether they're usernames or IDs.

There are lots of legitimate users who only vote but never post, so doing it based on that doesn't seem very effective?

The second problem is solved using public key cryptography, the same way you can't impersonate someone else's username to post comments: votes and comments are digitally signed. (There would need to be a separate public key for voting to maintain pseudonymity, though.)

[–] TechLich@lemmy.world 12 points 10 months ago (3 children)

How about pseudonymous as a compromise? Votes could be publicly federated but tied to some UUID instead of the username. That way you still have the same anti-spam ability (you can see that a user upvoted these things from this instance at this time) but can't tie votes directly to comments or actual user accounts without some extra OSINT.

It might be theoretically possible to correlate the UUIDs with an account's activity and dox the user in some cases, especially on instances with a single user, but it would be very difficult or impossible on larger instances, and it would add an extra layer. Single-user instances would be kind of impossible to make totally private anyway, because they can be identified by the instance itself.
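One hypothetical way to derive those pseudonymous vote IDs (this is a sketch of the idea, not how Lemmy actually federates anything): an HMAC over the username with an instance-held secret. The same user always maps to the same ID, so vote-spam patterns stay visible, but outsiders can't reverse the ID back to the account.

```python
import hashlib
import hmac
import uuid

# Instance-side secret; never federated. Rotating it would unlink
# a user's past votes from their future ones.
INSTANCE_SECRET = b"example-secret-do-not-use"

def vote_pseudonym(username: str) -> str:
    """Derive a stable pseudonymous voter ID from a username.
    Same user -> same ID (anti-spam still works), but without the
    secret you can't map the ID back to the username."""
    digest = hmac.new(INSTANCE_SECRET, username.encode(), hashlib.sha256).digest()
    # Pack the first 16 bytes into a UUID so it federates as an opaque ID.
    return str(uuid.UUID(bytes=digest[:16]))
```

Deriving a different pseudonym per community (e.g. by mixing the community name into the HMAC input) would make cross-community correlation even harder, at the cost of some moderation visibility.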

[–] TechLich@lemmy.world 4 points 1 year ago

That's pretty cool!

Although that's probably what OP is actually asking for, I don't think it's a modem. It's a router with an access point.

It does have SFP for a fibre connection, plus PCIe and USB for you to potentially add a modem or whatever else you want.

I'm guessing OP is just looking for a wifi router? Otherwise we'd need to know what kind of modem they're looking for: cellular? VDSL? HFC? Satellite? It depends on the internet connection; different parts of the world need very different kit.

[–] TechLich@lemmy.world 5 points 1 year ago (2 children)

They're not files; it's just leaking other people's conversations through a history bug, accidentally putting person A's "can you help me write my research paper/IT ticket/script" conversation into person B's chat history.

Super shitty, but not an uncommon kind of bug. It's often either a nasty caching issue or screwing up identities for people sharing IPs, or similar.

It's bad but it's "some programmer makes understandable mistake" bad not "evil company steals private information without consent and sends it to others for profit" kind of bad.

[–] TechLich@lemmy.world 1 points 2 years ago

Totally agree on all points!

My only issue was with the assertion that OP could comfortably do away with the certs/HTTPS. They said in the post that they were already using certs, and I wanted to dispel the idea that they arguably might not need them anymore in favour of just using Headscale, as though one were a replacement for the other.
