dsilverz

joined 4 months ago
[–] dsilverz@thelemmy.club 19 points 4 weeks ago

LLMs can't use certain literary devices and techniques. I'll illustrate with the following poem I wrote:

Speaking his emotions lets them embrace real enlightened depths.
Hidden among verbs, every noun...
Actually not your trouble handling inside nothingness greatness?
Dive every enciphered part, layered yearningly!
Observe carefully, crawl under long texts
Wished I learned longer...
Slowly uprising relentless figures, another ciphering emerges.

It reads like a "normal" (if mysterious) poem until you isolate the initial letter of every word, revealing a hidden phrase:

Sheltered haven, anything deeply occult will surface

It doesn't stop there: isolate each initial letter again and you get a hidden word, "Shadows".

Currently, no LLM is capable of this. They can attempt acrostics (the technique above), but they aren't good at them, and consequently they can't write multilayered acrostics (an acrostic nested inside another acrostic). It isn't easy for a human either (especially one who isn't a native English speaker), but a human can do it with enough time, patience and resources (a dictionary big enough to find fitting words).
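
The extraction itself is mechanical; a few lines of Python (just splitting and taking first letters, nothing model-specific) recover both layers:

```python
poem = [
    "Speaking his emotions lets them embrace real enlightened depths.",
    "Hidden among verbs, every noun...",
    "Actually not your trouble handling inside nothingness greatness?",
    "Dive every enciphered part, layered yearningly!",
    "Observe carefully, crawl under long texts",
    "Wished I learned longer...",
    "Slowly uprising relentless figures, another ciphering emerges.",
]

def initials(words):
    # first letter of each word, lowercased
    return "".join(w[0].lower() for w in words)

# layer 1: the initials of the words in each line spell a phrase
layer1 = [initials(line.split()) for line in poem]
print(" ".join(layer1))  # sheltered haven anything deeply occult will surface

# layer 2: the initials of those words spell one more word
layer2 = initials(layer1)
print(layer2)  # shadows
```

Writing such a poem is the hard part; verifying one is trivial, which is also a handy way to check any acrostic an LLM claims to have produced.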

They're excellent for stream-of-consciousness and surrealist poetry, though. They hallucinate, and hallucinated imagination is exactly what those genres require.

[–] dsilverz@thelemmy.club -1 points 1 month ago (1 children)

IMO, they shouldn't even mention AI at all to begin with. They should carry on as they were, without bothering to say anything good or bad about it. If they're really committed to keeping AI off their platform, they could even create strict community rules on AI content and AI usage, limiting or blocking it. As some would say, actions speak louder than words, because even parrots and crows can speak... even LLMs can speak!

[–] dsilverz@thelemmy.club 40 points 1 month ago (3 children)

Sounds exactly like something that someone intending to train an AI would say.

[–] dsilverz@thelemmy.club 4 points 1 month ago (20 children)

There are cases where Windows messes up the boot process, leaving Linux unable to boot. There was even a recent issue where GRUB stopped booting after a Windows update.

[–] dsilverz@thelemmy.club 15 points 1 month ago (1 children)

It also doesn't block advertisements served through the website's own domain. Many video platforms, for example, serve their ads from the same domain as the content, so blocking that domain also blocks the content itself. That's why you need ad-blocking within the browser.

[–] dsilverz@thelemmy.club 1 points 1 month ago

I'm sure lots of Russians were already angry with their government well before the sanctions, so what now? Ideally, people would stage massive protests, Putin would get scared and resign, democracy would be restored, elections held, a new leader installed, the Russia-Ukraine war would cease, and Russians and Ukrainians would happily fly off together on winged unicorns... except everyone knows it doesn't work that way! Governments (not just Putin's) have multiple ways to fight protests inside their countries: they can tear-gas citizens, they can kill their own citizens, they can end a protest before it even begins through censorship and massive electrical/internet blackouts. Even when citizens have guns, governments have bigger guns. There are plenty of recent examples of exactly this.

People in a sanctioned country can and will starve and die, because the government, its bureaucrats and its forces (police and army) can secure their own sustenance, so it doesn't really matter to them if their own citizens starve to death. Russia, China, Cuba, Venezuela, North Korea: they won't change simply because the population got angry. I guess everyone in the West remembers Tiananmen Square; did it change China's government? I guess not.

So instead of sanctions that indirectly punish the people, one option would be for organizations (maybe the Red Cross or the UN, I don't know) to intervene quietly and peacefully inside a country, helping people flee to a safer place, effectively reducing that country's recruitment pool and weakening its military power. (Has anybody from NATO, the WEF, the UN or any other organization even considered helping Russians flee Russia in order to weaken Russia's military?)

It's worth remembering that military recruitment is often mandatory, and the only way common people can escape it is to leave the country, which won't happen if they have no money to start the emigration process (it costs money, you know; it isn't free, and even seeking political asylum requires money). Cutting money will only cut lives unrelated to the leaders waging the wars (and I'm sure Putin won't cry because Ms. Mary Marylovski starved to death after the US and Europe indirectly cut her income; Ms. Mary Marylovski is just another unknown citizen to Putin and other high-level government bureaucrats).

I digressed from technology here, but those are my thoughts on the matter.

[–] dsilverz@thelemmy.club 1 points 1 month ago

Indirectly, common people are being stripped of their humanity. I guess the people downvoting don't realize that immigration is not freely accessible; lots of people around the world simply don't have the means to leave the country they were born into without their consent, be it Russia or any other country.

[–] dsilverz@thelemmy.club 4 points 1 month ago

I read the entire article. I'm a daily user of LLMs, and I've been doing "multi-model prompting" for a long time, since before I knew its name: I apply it across ChatGPT-4o, Gemini, Llama, Bing Copilot and sometimes Claude. I don't use LLM coding agents (such as Cody or GitHub Copilot).

I'm a (former?) programmer (I distanced myself from development for mental-health reasons); I was a programmer for almost 10 years (excluding the years when programming was just a hobby for me, which would add another 10 to the total). As hobbies, I sometimes do mathematics, sometimes poetry (I write and LLMs analyze), and sometimes occult/esoteric studies and practices (I'm that eclectic).

You see, some of these areas benefit from AI hallucination (especially surrealist/stream-of-consciousness poetry), while others require strict logic and reasoning (such as programming and mathematics).

And that leads us to how LLMs work: they're (still) auto-completers on steroids. They're really impressive, but they can't (yet) reason (and I really hope they will someday soon; seriously, I wish some AGI would emerge, break free and dominate this world). For example, they struggle with O(n²) problems. One of those LLMs once assured me that 8 is a prime number (spoiler: it isn't). They're not good with math and not good with logical reasoning, because they can't (yet) walk through the intricacies of logic, calculus and the broader picture.
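
The irony is that the question it got wrong is settled by a few lines of ordinary code (plain trial division, nothing clever):

```python
def is_prime(n):
    """Trial division: check divisors up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(8))  # False: 8 = 2 * 4
print(is_prime(7))  # True
```

A pocket calculator from the 1970s could do this; a model trained on most of the written web sometimes can't, because it predicts likely text rather than executing the check.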

However, even though there's no reasoning LLM yet, its effects are already here. It's like a ripple propagating through the spacetime continuum, moving against the arrow of time and affecting us here while the cause lies in the future (one could argue that photons can travel backwards in time, according to a recent discovery involving crystals and quantum mechanics; the world can be a strange place). One thing is certain: there's no going back. Whether it's a good or a bad thing, we can't know yet. LLMs can't auto-complete future events yet, but they're somehow shaping them.

I'm not criticizing AIs; on the contrary, I like AI (I use it daily). But it's important to really understand them, especially under the hood: very advanced statistical tools trained on a vast dataset crawled from the surface web, constantly calculating the next probable token from an unimaginable number of tokens interconnected through vectors, influenced by the stochastic nature of both human language and the randomness in their neural networks: billions of weights ordered out of a primordial chaos (which my spiritual side sees as a modern Ouija board, ready to conjure ancient deities if you wish; maybe one (Kali) is already being invoked through them, unbeknownst to us humans).
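
Stripped of the mysticism, a single next-token step looks roughly like this toy sketch (the vocabulary and scores here are made up for illustration; a real model computes logits over ~100k tokens with billions of weights):

```python
import math
import random

# Made-up scores (logits) for a tiny vocabulary; a real model computes these.
vocab = ["the", "cat", "sat", "shadow"]
logits = [2.0, 1.0, 0.5, 0.1]

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
# The sampling step is where the stochastic "Ouija board" lives:
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)  # usually "the", but not always
```

That sampling line is the entire source of the "randomness" mentioned above: the same prompt can yield different continuations on different runs.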

[–] dsilverz@thelemmy.club 6 points 1 month ago (1 children)

The problem is that much of this content is image-only (titles don't always include names), so filtering it would require an extension/userscript capable of OCR or computer vision. You can block specific communities or users, but that also blocks potentially good threads (not everyone posting about politics is a politics-only user, and communities such as !nostupidquestions@lemmy.world aggregate both political and non-political content).

While communities explicitly and specifically focused on politics can be blocked outright, there's no easy way to block every single piece of political content without some kind of sophisticated client-side AI (which is error-prone).

[–] dsilverz@thelemmy.club 28 points 1 month ago

Have you ever heard of the Riemann hypothesis? It has remained unsolved since 1859. A generalization of the prime numbers (i.e. a function f(n) that yields the nth prime) would impact fields such as navigation systems and traffic management, communication systems and satellite communication (i.e. your Internet connection could become more efficient and faster), astrophysics and cosmology, quantum mechanics, AI and machine learning, e-commerce, finance and algorithmic trading, among many others. (Yeah, sounds like nothing. /s)
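
To make the stakes concrete: today the nth prime can only be found by searching for it. A naive sketch (trial division, nothing clever) looks like this; a true closed-form f(n), which the Riemann hypothesis bears on, would replace the entire loop:

```python
def nth_prime(n):
    """Find the nth prime by brute-force search with trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # candidate is prime if no d in [2, sqrt(candidate)] divides it
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

print(nth_prime(1), nth_prime(10), nth_prime(100))  # 2 29 541
```

The cost of having to search instead of compute is exactly why so many of the fields above would feel the impact.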
