this post was submitted on 08 Sep 2024
393 points (98.8% liked)
Games
You guys joke, but AI NPCs have the potential to be awesome.
A really good fit would be background banter, greatly reducing the amount of extra dialogue the devs have to write.
Sure, you'd have to build a TTS package for each voice, but that could be licensed by the VA directly to the game studio on a per-title basis, and they too could then get more $$$ for less work.
They won't, because of hallucinations. They could work in mature games, though, where it's expected that whatever the AI says isn't going to break your brain.
But yeah, if a kid walks up to Toad in the next Mario game and Toad tells Mario to go slap Peach's ass, that game would get pulled really quick.
Oh come on, LLMs don't hallucinate 24/7. For that, you'd have to ask a chatbot for something it wasn't properly trained on. But generating simple text for background chatter? That's safe and easy. The real issue is the amount of resources modern LLMs require, but technologies tend to get better with time.
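To make the "safe background chatter" point concrete, here's a minimal sketch of one common way to box a generator in: instead of letting a model free-run, have it only fill slots in pre-approved templates, so a bad pick is at worst odd, never off-script. The template strings and word pools here are entirely hypothetical, not from any real game.

```python
# Hypothetical guardrail for NPC background banter: the model (or any
# generator) may only choose slot values from pre-approved pools, so
# the final line is always built from vetted pieces.

TEMPLATES = [
    "Did you hear about the {thing} near {place}?",
    "Lovely {weather} today, isn't it?",
]
POOLS = {
    "thing": ["festival", "market", "caravan"],
    "place": ["the old mill", "the harbor"],
    "weather": ["sunshine", "breeze"],
}

def render_banter(template: str, choices: dict) -> str:
    # Reject any slot value that isn't on the approved list.
    for slot, value in choices.items():
        if value not in POOLS.get(slot, []):
            raise ValueError(f"disallowed value for slot {slot!r}: {value!r}")
    return template.format(**choices)

print(render_banter(TEMPLATES[0], {"thing": "festival", "place": "the harbor"}))
# Did you hear about the festival near the harbor?
```

A hallucinated slot value (say, something crude Toad should never say) simply fails validation and the line is never voiced.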
I still don't really understand how much in the way of local resources it would take to run a trained LLM.
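For a rough sense of scale, the usual back-of-the-envelope estimate is: memory for the weights is parameter count times bytes per parameter, plus some overhead for the KV cache and activations. The 20% overhead figure below is an assumed ballpark, not a measured number.

```python
# Back-of-the-envelope memory estimate for running a trained LLM locally.
# weights ≈ params x bytes/param; overhead (KV cache, activations) is an
# assumed ~20% on top.

def estimate_vram_gb(num_params_billion: float,
                     bytes_per_param: float,
                     overhead: float = 0.2) -> float:
    weights_gb = num_params_billion * bytes_per_param  # 1B params x 1 byte ≈ 1 GB
    return round(weights_gb * (1 + overhead), 1)

print(estimate_vram_gb(7, 2))    # 7B model at fp16 (2 bytes/param): 16.8
print(estimate_vram_gb(7, 0.5))  # same model 4-bit quantized: 4.2
```

So a 7B model is roughly a gaming-GPU-sized load at fp16, and fits in a few GB once quantized, which is why quantization is what makes local inference practical.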