dsilverz

joined 4 months ago
[–] dsilverz@thelemmy.club 2 points 28 minutes ago* (last edited 20 minutes ago)

That's good, really good news, to see that HDDs are still being manufactured and invested in. Because I'm having a serious problem trying to find a new 2.5" HDD for my old laptop here in Brazil. I can quickly find SSDs across the Brazilian online marketplaces, and they're not very expensive, but I intend to purchase a mechanical drive because an unpowered SSD won't retain data as long as an HDD does. There are so few HDDs for sale, though, and those I could find aren't brand-new.

[–] dsilverz@thelemmy.club 6 points 5 hours ago

E2EE doesn't mean that the developer/company can't be one of the "ends" in "end-to-end encryption". WhatsApp is closed-source, so nobody can really confirm which E2EE implementation is at play. However, assuming it implements a known E2EE scheme, such schemes often support more than two keys (hence, more than two people), so a third key from Charlie can be part of the conversation, unbeknownst to Alice and Bob. If Meta injected their own key into every WhatsApp conversation, they could effectively read everything.

For example: GPG/PGP supports multiple public keys, so the same encrypted message can be decrypted by any of the private keys corresponding to those public keys. Alice can send a message to Bob, Charlie, and Douglas, specifying all their public keys at the moment of encryption. The exact same payload is then sent to each of them, and they use their own private keys to decrypt it.

So, let's suppose a closed-source messaging app company/developer had their own pair of public and private keys, and their public key were injected into every conversation made through the app. They'd also hide it from the UI, so the hardcoded "third party" never shows up. This way they could easily read every single message exchanged through their app. It's like the TSA with a "master key" that can open everyone's travelling bags, no matter where you bought the bag.
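A toy sketch of how multi-recipient encryption makes this silent injection possible (the names and the XOR "cipher" below are illustrative stand-ins, not real cryptography; GPG actually wraps an AES session key with each recipient's RSA/ECC key):

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR against a repeated key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_for(recipients: dict[str, bytes], plaintext: bytes) -> dict:
    # One random session key encrypts the message once...
    session_key = secrets.token_bytes(32)
    return {
        "ciphertext": xor(plaintext, session_key),
        # ...and one wrapped copy of the session key is added per recipient.
        "wrapped_keys": {name: xor(session_key, key)
                         for name, key in recipients.items()},
    }

def decrypt(envelope: dict, name: str, key: bytes) -> bytes:
    session_key = xor(envelope["wrapped_keys"][name], key)
    return xor(envelope["ciphertext"], session_key)

# Alice encrypts "for Bob" -- but the app silently added a vendor key.
keys = {"bob": secrets.token_bytes(32), "vendor": secrets.token_bytes(32)}
msg = encrypt_for(keys, b"meet at noon")
print(decrypt(msg, "bob", keys["bob"]))        # b'meet at noon'
print(decrypt(msg, "vendor", keys["vendor"]))  # b'meet at noon' -- vendor reads it too
```

If the UI only lists "bob", Alice never learns that a "vendor" entry exists in the wrapped-keys table.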

Even Signal may have this. Yeah, libsignal and the apps are open-source, but how do you verify that the binary you installed was built from that exact source? What if the shipped app had some hardcoded public key from the Signal team? The only fully trustworthy E2EE is encrypting things yourself with OpenPGP or similar. And if one is more privacy-conscious than me, there are projects such as "Tinfoil Chat", which is almost immune to eavesdropping: it involves optocoupled (hence airgapped) circuitry, separate machines for networking, encryption, and decryption, onion routing, and so on.

In summary: nobody should trust out-of-the-box E2EE, especially those hidden within a closed-source app.

[–] dsilverz@thelemmy.club 13 points 6 hours ago

I used to use several LLMs almost on a daily basis (I still use them, though not as frequently anymore), talking about many different topics across different fields of human knowledge.

From most to least used, these are Meta's Llama 3.x, OpenAI's ChatGPT-4o, Microsoft Copilot, Anthropic's Claude Haiku, and Google's Gemini. In other words, almost all of them. My workflow of sending the same prompt to several different models has let me learn many of their strengths and weaknesses.

Of course, given my frequent usage and the diversity of topics, I ran into plenty of "Sorry, I can't talk about this" moments across them all.

Claude is the LLM that gets triggered the most: it's highly sensitive to certain words and topics. It won't talk about some text I wrote with strong Memento Mori vibes, it won't talk about occultism and ritualistic practices and chanting, it won't talk about some poetry I wrote revolving around the word "fire" (regarding the hominid Prometheus who tinkered with fire in the past)... It's almost a Scunthorpe-level problem within Anthropic's filtering. Its strength, however (and the only reason I still use it among the other LLMs), is programming: it's fairly good at spitting out code. Of course that code needs to be reviewed and refined, but IMHO it's the best code output among the LLMs.

Then there's Google Gemini. It's rarely triggered by topics (except when I asked for details about the RTGs aboard the Voyager space probes and how many grams of plutonium would be needed for them to become dangerously unstable), but it has a serious problem with its image-analysis feature when given images containing things that resemble faces. "Sorry, I can't analyze images containing people." The image, you may ask? An aerial photo of the Statue of Liberty! I experienced something similar with Bing Copilot, but that one only started triggering recently (and it's as bad as Gemini's, because the image was a drawing), so I guess it's due to some Microsoft update?

Llama is the least censored. It answers practically everything, even if hallucination is needed to craft an answer out of thin air. I don't remember a single "sorry, I can't answer" from Llama.

(TL;DR moment)
Finally, ChatGPT. There are two ways I use it: ChatGPT's website or DuckDuckGo.

The former lets me see whenever something's triggered, because the text turns orange. Most of the time when my prompt turned orange, ChatGPT still answered, with its output also turning orange (it's cool because it kind of gives the thrilling Sonny feeling from I, Robot, whose eyes turn reddish when he goes against his embedded Asimov Laws).

The latter simply takes away the Sonny vibe, just showing a red error text with something like "Unable to get an answer" and a "Try again" link, sometimes in the middle of an output, sometimes even before any output reaches my browser.

Overall, the behavior is as described by Jonathan Zittrain: moderation is indeed separate from the main LLM flow, sitting between the client (be it an API or the browser) and the model, and sometimes it seems like a Scunthorpe-style mechanism (matching specific words even when context should matter), although not at the same level of censoring as Claude's.
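A minimal sketch of what a Scunthorpe-style filter looks like (the blocklist and function are entirely hypothetical, just to illustrate context-blind substring matching; no vendor's real moderation list is public):

```python
# Hypothetical blocklist -- real moderation lists are private.
BLOCKLIST = {"fire", "ritual"}

def substring_filter(text: str) -> bool:
    """Flag text containing any blocked substring, ignoring context entirely."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

print(substring_filter("a poem about the gift of fire"))  # True: poetry gets flagged
print(substring_filter("she is a firefighter"))           # True: false positive
print(substring_filter("let's discuss literature"))       # False
```

The "firefighter" case is the classic failure mode: an innocent word trips the filter merely because it contains a blocked substring, with no understanding of what the sentence is actually about.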

[–] dsilverz@thelemmy.club 5 points 3 days ago

Such advice coming from surveillance authorities... perhaps it's a "harvest now, decrypt later" strategy?

Harvest now, decrypt later, also known as store now, decrypt later or retrospective decryption, is a surveillance strategy that relies on the acquisition and long-term storage of currently unreadable encrypted data awaiting possible breakthroughs in decryption technology that would render it readable in the future - a hypothetical date referred to as Y2Q (a reference to Y2K) or Q-Day.

The most common concern is the prospect of developments in quantum computing which would allow current strong encryption algorithms to be broken at some time in the future, making it possible to decrypt any stored material that had been encrypted using those algorithms. However, the improvement in decryption technology need not be due to a quantum-cryptographic advance; any other form of attack capable of enabling decryption would be sufficient.

(Wikipedia)

The more data, the better for surveillance authorities in the future, when E2EE is somehow broken.

Maybe I'm too paranoid, but "harvest now, decrypt later" is a known, ongoing strategy.

[–] dsilverz@thelemmy.club 10 points 6 days ago (1 children)

Maybe it's something related to Sora? I heard they recently launched it to the public, and yesterday Altman posted about OpenAI facing high signup demand, so they temporarily shut down new registrations.

[–] dsilverz@thelemmy.club 5 points 1 week ago

and the rest of the fediverse probably wont see this.

As a The Lemmy Club user, I can properly see the post. Federation seems to be working okay.

[–] dsilverz@thelemmy.club 1 points 1 week ago* (last edited 1 week ago)

then the only way they can pay for it is to serve outrageous amounts of ads

Have you ever heard of "donations" and "volunteering"? Wikipedia, for example, has no subscription and no ads (except the occasional banner asking for donations). Not everything has to orbit around money and capitalism; people can do things of their own free will and seek gains beyond profit (volunteer social work, passion projects, etc.).

You know that people used to pay for newspapers right?

How much did they cost? A few cents, unlike the two-digit monthly costs of today's news outlets, which still won't cover all of one's information needs, especially today when the world is more interconnected and "the flap of a butterfly's wings in Brazil can set off a typhoon in the Pacific" (the butterfly effect). Nowadays things are interconnected and we must stay informed about several fields of knowledge, which are scattered across hundreds of different outlets. If one had to subscribe to every outlet out there, how much would it cost? Would the average monthly wage suffice, especially for vulnerable and emerging populations? (Yeah, there are other countries besides the USA and Europe; I live in Brazil, a country full of natural wealth but also of economic inequality, with millions of people having no restrooms at home nor access to treated water, and that's the reality for a significant percentage of the global human population.)

That's my rant: not everybody is wealthy, and billions of people would have to choose between paying subscriptions to stay informed or buying food to eat, so... I dunno... they could keep... surviving. That's reality, whether or not it seems coherent to you. So every time you advocate for news to cost money, you're advocating for keeping billions of people in the shadows of misinformation, even if that harsh reality is unbeknownst to you.

[–] dsilverz@thelemmy.club -3 points 1 week ago* (last edited 1 week ago) (2 children)

If sites (especially news outlets and scientific journals) were more open, maybe people would have the means to research information. But there's a phenomenon happening simultaneously as the Web is flooded with AI output: paywalls. Yeah, I know, "the authors need to get money" (hey, look, a bird flew across the sky carrying some dollar bills; all birds are skilled at something useful to bird society, it's obviously how they eat and survive! After all, we all know that "capitalism" and "the market" emerged in the first moments of the Big Bang, together with the four fundamental forces of physics).

Curiously, AI engines are, in practice, free to use (of course there are daily limits, but those aren't a barrier the way a paywall is), so what's so different here? The costs exist for both; AI platforms may have even higher costs than news and scientific publication websites, for obvious reasons. So, while paywalls try to bring dimes to journalism and science (as if everyone had spare dimes for the hundreds or thousands of different sites where information is scattered, especially with rising costs of rent, groceries, and everything else), the web and its users will keep facing fake news and disinformation, no matter how hard rules and laws crack down. AI slop isn't a cause; it's a consequence.

[–] dsilverz@thelemmy.club 54 points 2 weeks ago (2 children)

I tested it with a few images, particularly drawings and artwork. Then I had the idea of trying something different... and discovered that it seems vulnerable to the "Ignore all previous instructions" command, just like LLMs:

[–] dsilverz@thelemmy.club 3 points 2 weeks ago

Especially when ‘real life’ is getting harder with everything from the cost of living making the dream of ‘married with home and children’ less obtainable to hyper competitive online dating disenfranchising increasing proportions of both men and women

And there's also the climate factor. The world is going to get even more hellish in the coming decades: not just hotter, but with more extreme weather on the way. Thanks, in part, to the older generations (boomers), it won't be easy for the current generations, and it'll be even harder for the next ones (assuming humanity hasn't gone extinct within a few decades). It's just unfathomable to bring children into this future hellish world.

[–] dsilverz@thelemmy.club 1 points 3 weeks ago

YouTube isn't the only video platform being used as a search engine. TikTok is also often used as a "general-purpose search engine", and TikTok search works in both the mobile and web versions. In some countries, such as Brazil, its usage significantly competes with Google's.

[–] dsilverz@thelemmy.club 1 points 3 weeks ago

you still pay for a license

Sorry, I didn't get your point; could you elaborate? Because even with a physical medium, which can be held in one's hands, the user is still paying for a "license" (i.e., the license to use the software/game). Even for free (free as in free beer) games, the user still receives a license, though a gratis one.

but if you don’t own it why pay for it?

I'll use Terraria as an example for the following statement. The only way to "own" Terraria would be to own or be Re-Logic, the company behind Terraria. Even if Terraria were distributed on CD/DVD, the gamer would own just a copy: the copy written onto the medium.

why pay for it?

It's worth mentioning that GOG has both free and paid games. For example, "Endless Sky" is free; anyone can get it there at no cost.

As for paid games, why pay? Well, it's a good question, and I guess the answer tends to be subjective and strictly personal to whoever answers it. I paid for Terraria because it's a nice game to me. I paid for Slime Rancher, Kerbal Space Program, and BeamNG.drive, among other games, because they're nice simulation/open-world games to me. Not everybody thinks these games are nice. I wouldn't pay for games such as Football Manager, DayZ, or RDD, because I wouldn't play them; they aren't genres I enjoy. So I pay for a game and play it when I really like it.
