this post was submitted on 28 Mar 2026
221 points (89.9% liked)
Technology
Very true, though there's a threshold past which the context is at least usable in size, where the machine can hold enough data at once for common tasks.
One of the pieces of tech we're really missing at the moment is automated filtering of info.
Specifically, the LLM being able to "release" info as it goes, marking it as unimportant and forgetting it as soon as possible, or at least moving it into some form of long-term storage it can look up later with a tool.
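To make the idea concrete, here's a minimal sketch of that "release to long-term storage" pattern. Everything in it is hypothetical (the class names, the crude keyword index, the message format); a real system would likely use embeddings for recall rather than substring matching.

```python
class LongTermStore:
    """Keyword-indexed storage for context evicted from the active window."""

    def __init__(self):
        self.entries = []  # list of (key, full_text)

    def archive(self, key, text):
        self.entries.append((key, text))

    def recall(self, query):
        # Naive substring match on the key; stands in for a tool the
        # LLM could call to pull archived info back into context.
        return [text for key, text in self.entries if query.lower() in key.lower()]


def compact_context(messages, store, max_messages):
    """Keep the newest messages; archive the older ones under a short key."""
    if len(messages) <= max_messages:
        return messages
    evicted, kept = messages[:-max_messages], messages[-max_messages:]
    for msg in evicted:
        store.archive(key=msg[:40], text=msg)  # first 40 chars as a crude key
    return kept
```

So the active window stays small, and anything evicted is still reachable through the recall tool instead of being gone for good.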
Within a given convo the LLM can do a lot of reasoning, but all that reasoning takes up context.
It'd be nice if, after it reasons, it could discard most of that data and keep only what matters.
That would tremendously lower context pressure and let the LLM last way longer memory-wise.
I think tooling needs to approach LLM context management very differently to make further progress.
LLMs would have to be trained to produce different types of output that control whether a given piece actually gets remembered or not.