this post was submitted on 12 Apr 2026
387 points (96.6% liked)

Technology

top 25 comments
[–] null@lemmy.org 78 points 5 days ago (1 children)

This automated system analyzed information across 305 internal servers, rapidly producing 2,597 structured intelligence reports. By automating the data analysis phase, a single operator successfully processed an intelligence volume that would traditionally require an entire team.

That's a great ad, not gonna lie.

[–] Jakeroxs@sh.itjust.works 12 points 4 days ago (3 children)

But everyone on Lemmy said LLMs had no use cases

[–] redsand@infosec.pub 14 points 3 days ago (1 children)

They have lots of use cases for red team: recon, enumeration, exploit chaining, fuzzing. It doesn't matter if the error rate is 10-20%; a shell is a shell.
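The fuzzing use case is the easiest to picture: errors are cheap because any crash you find is a win. A minimal sketch (the `toy_parse` target and its null-byte bug are hypothetical, just for illustration):

```python
import random

def toy_parse(data: bytes) -> int:
    # Hypothetical target: crashes on any input containing a null byte.
    if b"\x00" in data:
        raise ValueError("unexpected null byte")
    return len(data)

def fuzz(target, iterations: int = 1000, seed: int = 42) -> list:
    """Throw short random byte strings at `target`; collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception:
            crashes.append(data)  # a crashing input is all we care about
    return crashes

crashes = fuzz(toy_parse)
```

The point of the "a shell is a shell" argument is visible here: the harness can generate garbage 80-90% of the time and still succeed, because only the crashing inputs matter.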

[–] anarchiddy@lemmy.dbzer0.com 2 points 3 days ago (1 children)

I imagine it has plenty of use cases for blue team as well, just not as many for active threat response.

[–] redsand@infosec.pub 3 points 3 days ago* (last edited 3 days ago)

It can help you write the patch, or identify threats in a SIEM or SOAR setup. But I can't think of much else. Defense has to be correct: if your .htaccess file is 99% correct, that's a problem.

[–] cowboykermit@sh.itjust.works 4 points 3 days ago* (last edited 3 days ago) (3 children)

I can see a few people disagree with you.

Does anyone have a good litmus test for when the perspective might shift? TurboQuant making it easier to run larger context windows on local models gives me a pinch of hope, and I'm really holding out for a decent open-weights model I can self-host for home automation.

I'm fully aware LLMs are just predictive text on roids and we haven't achieved real AGI, but do we know of anything that will help us filter through the marketing?

[–] GamingChairModel@lemmy.world 2 points 2 days ago

You can reason from a few principles:

  • At its core, what these AI tools and their specialized hardware are optimized for is inference and pattern recognition at huge scale across enormous data sets.
  • Inferring a rule set for a pattern also allows generation of new data that fits that pattern.
  • Some portion of human cognitive work falls within the general framework of finding patterns or finding new data that fits an old pattern.

So when people start making claims about things with clear, objective definitions (a win condition in chess, the fastest route through a maze, the best lossless compression of real-world text), it's reasonable to believe that the current AI infrastructure can lead to breakthroughs on that front. So image recognition, voice recognition, and things like that were largely solved a decade ago. Text generation with clear and simple definitions of good or bad (simple summaries, basic code that accomplishes a clearly defined goal) is what LLMs have been doing well.

On things that have much more fuzzy or even internally inconsistent definitions, the AI world gets much more controversial.

But I happen to believe that finding and exploiting bugs or security vulnerabilities falls more into the well defined problem with well defined successes and failures. So I take it seriously when people claim that AI tools are helpful for developing certain exploits.

[–] Jakeroxs@sh.itjust.works 4 points 3 days ago

You can do it locally now pretty easily depending on your use case and hardware. Hugging Face has all the models you'd need, and you can use something like llama-swap.

[–] Evotech@lemmy.world 2 points 3 days ago

There’s millions of YouTube videos on this subject.

Qwen3.5 is very capable and you can run it on any hardware you have. Just depends on the model size

[–] chunes@lemmy.world 0 points 3 days ago (1 children)

you're seeing massive cope because most of lemmy is tech workers

[–] CheeseNoodle@lemmy.world 2 points 2 days ago

3D artist here: generative AI models are great at making work that looks super impressive while being completely unusable for most applications. I suspect this is what most tech workers find too.

[–] EndOfLine@lemmy.world 47 points 4 days ago (2 children)

Is it just me or do the logos for those companies look like drawings of anuses?

[–] veeesix@lemmy.ca 64 points 5 days ago

How kind of the government to provide these tools to ordinary citizens.

[–] potatoguy@mbin.potato-guy.space 41 points 5 days ago (1 children)

Ultimately I noticed a lot of new scams running around, and some gov.br websites serving a lot of scams (they existed before only as redirects to scams; now the scams seem to be hosted there too). A friend of mine tried to delete a 2FA token from her phone and got calls about it from a completely different "agency".

Seems like the script kiddie bar got higher...

Edit: But in this case, the article doesn't give any source.

[–] BrianTheeBiscuiteer@lemmy.world 22 points 5 days ago (2 children)

Wonder if I got a scam call via AI the other day. It seemed more sophisticated than other ones I've gotten and things they said to me were very reassuring but I insisted on calling them back via their public-facing support line. Nobody knew what I was talking about when I called back.

[–] W98BSoD@lemmy.dbzer0.com 16 points 5 days ago

This is what's going to kill the "we'll hold your place in line for you and call you back" feature on automated telephone systems, UNLESS the automated system reads/texts you a random 7+ digit code that the rep who "calls you back" has to provide to you.

Otherwise, how the hell do I know you’re actually from my .
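That callback-code scheme is simple to implement; a sketch using Python's stdlib (the function names here are made up for illustration):

```python
import hmac
import secrets

def make_callback_code(digits: int = 7) -> str:
    """Generate a random numeric code the automated system reads/texts to
    the caller, so the rep who calls back can prove it's the same system."""
    return "".join(str(secrets.randbelow(10)) for _ in range(digits))

def verify_callback_code(expected: str, provided: str) -> bool:
    # Constant-time comparison, so digit-by-digit guessing leaks no timing info.
    return hmac.compare_digest(expected, provided)
```

The key design choice is `secrets` rather than `random`: the code only has value if it's unpredictable to a scammer who never saw the original call.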

[–] potatoguy@mbin.potato-guy.space 5 points 5 days ago (2 children)

Probably.

I think almost all calls these days are scams: phishing, advertisements, voice cloning, the usual Nigerian prince (he's still alive).

[–] uenticx@lemmy.world 3 points 4 days ago

A good chunk operate out of 217.199.144.0/22 (Physical location) in Kenya. We have lots of fun with their fixed wireless routers.
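For anyone who wants to check their own call or server logs against that /22, the stdlib `ipaddress` module makes the membership test a one-liner (the block below only uses the range quoted in the comment above):

```python
import ipaddress

# The /22 mentioned above: covers 217.199.144.0 through 217.199.147.255.
SCAM_BLOCK = ipaddress.ip_network("217.199.144.0/22")

def in_scam_block(addr: str) -> bool:
    """Return True if the given IPv4 address falls inside the /22."""
    return ipaddress.ip_address(addr) in SCAM_BLOCK
```

Handy for a quick firewall rule or a grep-style pass over logs.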

[–] EvergreenGuru@lemmy.world 3 points 5 days ago

There are so many. And scammers keep lists of victims so they can target them with more scams. My mom has been scammed a few times in different ways for different amounts of money.

[–] uberdroog@lemmy.world 2 points 3 days ago

Is that the Vonnegut butthole?

[–] SlimePirate@lemmy.dbzer0.com 13 points 4 days ago

Not the vibe script kiddies

[–] Star@lemmy.blahaj.zone 15 points 5 days ago

How inspiring! Truly motivational! ☕️

[–] Danarchy@lemmy.nz 6 points 4 days ago

Viber attack, surely