cley_faye

joined 1 year ago
[–] cley_faye@lemmy.world 2 points 3 weeks ago

I use this setup for my personal passwords, with Nextcloud as the sync solution. A semi-fix for the sync problem was using Keepass2Android (on Android, obviously). It integrates with Nextcloud directly, keeps a local DB of passwords, and only loads (and merges) the remote one on unlock and on updates, rather than keeping it constantly in sync on every remote change. It works well… most of the time… with only two devices that almost always have a connection to the server… and for only one user.

It's overly clunky though. That's the big advantage of "service based" password managers over "single file based" ones: they handle sync. We have plans to move to Bitwarden at my workplace, and since the client supports multiple accounts on multiple servers, I'll probably move to that for personal stuff too. The convenience is just there, without the downsides.
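To illustrate the merge-on-unlock idea described above: conceptually it boils down to a last-writer-wins merge of entries keyed by UUID. This is just a minimal sketch of that concept, not KeePass2Android's actual code; the `Entry` type and field names here are made up for the example.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Entry:
    uuid: str
    password: str
    modified: int  # hypothetical last-modified Unix timestamp


def merge(local: dict[str, Entry], remote: dict[str, Entry]) -> dict[str, Entry]:
    """Last-writer-wins merge: for each UUID, keep the most recently
    modified version; entries unique to either side are kept."""
    merged = dict(local)
    for uuid, entry in remote.items():
        if uuid not in merged or entry.modified > merged[uuid].modified:
            merged[uuid] = entry
    return merged
```

The pain point from the comment shows up here too: if both devices edit the *same* entry while offline, one side's change silently loses, which is exactly why a sync-aware service is less clunky than merging a single file.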

[–] cley_faye@lemmy.world 6 points 3 weeks ago

Except for the part where it's not a question of trust (it's open source), there's no third-party infrastructure to trust (it can and should be self-hosted), and the data on the server is encrypted client-side before it ever leaves your device, sure.

Oh, and you also get proper sync, with no risk of desync if two devices get changes while offline and no need to go check your in-house sync solution, plus easy sharing between users (still with no trust needed in the server), all working smoothly with good UI integration on almost every system.
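The "encrypted client-side before leaving your device" part rests on deriving the vault key from the master password on the device, so the server only ever stores ciphertext. A minimal stdlib-only sketch of the key-derivation half (real clients then encrypt the vault with an AEAD cipher under this key; the iteration count and function name here are illustrative, not any specific client's parameters):

```python
import hashlib
import os


def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    # Stretch the master password into a 256-bit key. The server never
    # sees the password or this key, only ciphertext produced with it.
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, 600_000
    )


salt = os.urandom(16)  # stored alongside the encrypted vault, not secret
key = derive_vault_key("correct horse battery staple", salt)
```

Since derivation is deterministic for a given password and salt, every device can recompute the same key locally, which is what makes a fully self-hosted, zero-trust server workable.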

Yeah, I wonder why people bother using that instead of deploying clunky, single-user solutions.

[–] cley_faye@lemmy.world 3 points 3 weeks ago

Not exactly, no. From other comments, it also has an incredibly high false positive rate, so its net effect on security is negative.

[–] cley_faye@lemmy.world 11 points 3 weeks ago

Look, we can either look at facts and check the claims of that company that we're going to invest a lot of money into, or we can accept their bribe and move on. It's all about efficiency.

[–] cley_faye@lemmy.world -4 points 1 month ago (1 children)

Some footage of Tesla's full self driving disagrees.

[–] cley_faye@lemmy.world 7 points 1 month ago (1 children)

AI will not find a magic solution. Besides, we already have quite a few directions that would help, but we're not acting on them. Piling more "solutions" on top of them won't change that.

This really sounds like the parody of rich people who think they can eat and breathe safely as long as they have money, the rest of the world be damned.

[–] cley_faye@lemmy.world 0 points 1 month ago

You're right, they aren't google. Not for lack of trying though.

You see posts casting some shade on Mozilla, and your immediate reaction is "it feels almost coordinated". Well, that may be. But it would be hard to distinguish a "coordinated attack" from a "these are just the things they're doing, and there are reports on it" article, no? Especially when most of it can be fact-checked.

In this particular case, those abandoned projects got picked up by others… sometimes. And sometimes not. But they were abandoned. There's no denying that.

If you want some more hot water for Mozilla, since you're talking about privacy and security, you'd be interested in their recent shift regarding those points. Sure, the PR is all about protecting privacy and users, but looking at their actions, the message is a bit more diluted. And there's always a fair number of people ready to do the opposite of what you describe: namely, discarding all criticism because "Mozilla", when the same criticisms are totally fair play when talking about other big companies.

Being keen on maintaining user privacy, system security, and trust is not the same as picking a "champion" and sticking to it until the end. Mozilla has been doing shady things for half a decade now, and it should not get a free pass because it's still the lesser evil for now.

[–] cley_faye@lemmy.world 23 points 1 month ago

"Curated wallpapers" including randomly generated stuff, and "shares profits" on a 50/50 basis, for a shitty app developed by what looks like three fivers in a trench coat.

[–] cley_faye@lemmy.world 1 points 1 month ago

The point is, they don't get "competent". They get better at assembling pieces they were given. And a proper stack with competent developers will already have moved that redundancy out of the codebase. For whatever remains, thinking is the longest part, and LLMs can't improve that once the problem gets even a tiny bit complex. Of course, I could end up with a good rough idea of what the code should look like, describe that to an LLM, and have it write actual code with proper variable names and all, but once I reach the point where I can accurately describe the thing I want, it's usually just as fast to type it myself. With the added value that it's easier to double-check.

What remains is providing good insight on new things and understanding complex requirements. While there is room for improvement, it seems more and more obvious that LLMs are not the answer: theoretically, they are not the right tool, and given the level of improvement we're actually seeing, they definitely have not proved us wrong. The technology is good at some things, but not at getting "competent".

Also, you sweep aside the privacy and licensing issues, which are big no-nos too.

LLMs have their uses; I outlined some. And in those uses, there is clear room for improvement. For reference, the solution I currently use has me accepting around 10% of the automatic suggestions, and of those, I'd say a third need reworking. Obviously, if that moved up to, say, 90% of suggestions seeming decent, with less need to fix them afterward, it'd be great. Unfortunately, since you can't trust these tools, you would still have to review the output carefully, making the whole operation probably not that big of a time saver anyway.

Coding doesn't allow much leeway. Other activities that allow more room for mistakes can probably benefit a lot more. Translation, for example, can be acceptable, in particular because some mishaps may be automatically corrected by readers or listeners. But with code, any single mistake will lead to issues down the line.

[–] cley_faye@lemmy.world 3 points 1 month ago

It is perfectly possible to run anti-cheat that is roughly as good (or as bad, as it often turns out) without full admin privileges or kernel-level drivers. Coupled with server-side validation, which seems to be a dying breed, you'd also weed out a ton of cheaters while only missing the most motivated of them.
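For the server-side validation point above, the classic example is a speed check: the server rejects position updates that imply impossible movement, regardless of what the client claims. A minimal sketch with made-up units and a hypothetical speed cap, just to show the shape of the idea:

```python
import math
from dataclasses import dataclass


@dataclass
class PlayerState:
    x: float
    y: float
    t: float  # server-side timestamp, seconds


MAX_SPEED = 10.0  # hypothetical game-specific cap, units/second


def plausible_move(prev: PlayerState, new: PlayerState) -> bool:
    """Reject a position update if it implies a speed above the cap
    (or a non-advancing timestamp)."""
    dt = new.t - prev.t
    if dt <= 0:
        return False
    dist = math.hypot(new.x - prev.x, new.y - prev.y)
    return dist / dt <= MAX_SPEED
```

This catches speedhacks and teleports without touching the player's machine at all, which is exactly why it pairs so well with a less invasive client-side anti-cheat.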

As someone who lurks around different communities (to some extent: Steam forums, Reddit, Lemmy, Mastodon, and a few game-centered Discord servers), the issue is not so much with anti-cheat for online play. It's the nature of these pieces of software that is the issue. It would be a different story if the anti-cheat were also forced on solo gameplay, but that is not the case here.

(Bonus points for systems that allow playing on non-protected servers, but that's asking a bit too much from some publishers, I suppose.)

[–] cley_faye@lemmy.world 19 points 1 month ago

Aside from it being code you don’t want on your machine

Code you don't want on your machine, that sometimes has more permissions than you yourself have on your own files, is completely opaque, and has the legitimacy to maintain constant outgoing network traffic that you can't audit.

Yes, aside from that, no reason at all. No problem with a huge risk to your privacy, for moderate results that don't particularly benefit you in the long run.

(and all that is assuming that they're not nefarious to begin with, which is almost impossible to prove)
