this post was submitted on 28 Mar 2026
220 points (90.4% liked)


Full Report (76-page PDF).

[–] cley_faye@lemmy.world 2 points 7 hours ago (1 children)

That's all there is to it.

Not really. Even with a (theoretical) infinite context window, things would end up getting diluted. It's a statistical machine, no matter how complex we make them look. Even with all the safeguards in place, as these grow larger and larger, each "directive" would end up being less represented in the next token.
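The dilution point can be sketched with a toy softmax calculation. This is purely illustrative, not a real attention implementation; the relevance scores are made-up numbers. The idea is just that a fixed-score "directive" token gets a smaller share of attention as the surrounding context grows:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# A single "directive" token with a fixed relevance score, surrounded by
# an ever-growing number of average-relevance filler tokens (toy values).
for n_context in (10, 1_000, 100_000):
    scores = [2.0] + [1.0] * n_context  # directive first, filler after
    directive_weight = softmax(scores)[0]
    print(f"context={n_context:>7}: directive attention = {directive_weight:.6f}")
```

With these toy scores the directive's weight works out to e / (e + n), so it shrinks roughly in proportion to context length, which is the "less represented in the next token" effect described above.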

People can keep trying to hammer with a screwdriver all they want and keep being impressed when the bent nail is almost flush, though. I'm just enjoying the show from the side at this point.

[–] pixxelkick@lemmy.world 1 points 3 hours ago

Very true, though there's a certain threshold past which the context is at least usable in size, where the machine can hold enough data at once for common tasks.

One of the pieces of tech we're really missing at the moment is automated filtering of info.

Specifically, for the LLM to be able to "release" info as it goes, as soon as it's judged unimportant, and forget it, or at least have it stored in some form of long-term storage it can use a tool to look up.

For a given convo the LLM can do a lot of reasoning, but all that reasoning takes up context.

It'd be nice if, after it reasons, it could discard a bunch of that data and keep only what matters.

This would tremendously lower context pressure and let the LLM last way longer, memory-wise.
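As a rough sketch of what that compaction could look like, here's a toy context manager that keeps only short conclusions in the active window, spills full reasoning traces to a keyed long-term store, and exposes a lookup "tool" to fetch them back. Every name here is hypothetical; no real LLM framework works this way out of the box:

```python
class CompactingContext:
    """Toy model of compacting LLM context: keep conclusions, archive traces."""

    def __init__(self, window_limit=5):
        self.window = []    # short conclusions kept "in context"
        self.archive = {}   # long-term store, keyed by topic
        self.window_limit = window_limit

    def add_step(self, topic, reasoning_trace, conclusion):
        # Keep only the short conclusion in the active window...
        self.window.append((topic, conclusion))
        # ...and spill the verbose trace to long-term storage.
        self.archive[topic] = reasoning_trace
        # Evict the oldest conclusions once the window is full.
        while len(self.window) > self.window_limit:
            self.window.pop(0)

    def lookup(self, topic):
        """Tool call: retrieve an archived reasoning trace on demand."""
        return self.archive.get(topic, "<no archived trace>")

ctx = CompactingContext(window_limit=2)
ctx.add_step("parse", "tried regex, hit edge cases, switched to a parser", "use a parser")
ctx.add_step("cache", "measured hit rates across several sizes", "cache size 128 is enough")
ctx.add_step("retry", "compared fixed vs exponential backoff", "use exponential backoff")
print(ctx.window)           # only the two newest conclusions remain in context
print(ctx.lookup("parse"))  # evicted, but the full trace is still retrievable
```

The design choice this illustrates: the active window stays small regardless of how much reasoning happened, and anything evicted is still reachable through an explicit lookup rather than being gone for good.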

I think tooling needs to approach how we manage LLM context in a very different way to make further progress.

LLMs would have to be trained to produce different types of output that control whether they actually remember it or not.