this post was submitted on 27 Feb 2025
455 points (99.6% liked)
PC Gaming
I get that they want to keep their talent and jobs. But it's just not viable for the future, and it has nothing to do with cost.
The future is RPGs where NPCs generate their responses in real time, not just as text but fully voiced, with no pre-loaded responses. The future is curated content responding to the player.
So to achieve that, it's either a fully generated voice, like a vocaloid, or an AI trained on someone's voice.
If the voice actors aren't interested in it being their voice, they'll find someone else or go vocaloid.
It's not about saving money. It's about pre-recorded voice lines being dead on arrival.
Think audiobooks, but choose-your-own-adventure audiobooks, where all the names/places/things can be tailored to the listener. A voice actor isn't going to be a part of that.
That could be the future, but not anytime soon. I haven't seen anything AI-generated with enough continuity to make on-the-fly storytelling something I'd be interested in.
I challenge you on that.
We'll see a Skyrim-like game using an LLM for NPCs within 3 years, definitely within 5.
It'll be marketed as Skyrim with all-LLM text and end up as Oblivion with prefab text chunks.
Even disregarding the fact that current LLMs can't stop hallucinating and going off track (which seems to be an inherent property of the approach), they need crazy amounts of memory. If you don't want the game to use a tiny model with a bad quantization, you can probably expect to spend at least 20 gigs of VRAM, plus a fair chunk of the GPU's compute, on the LLM alone.
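The napkin math behind that 20 GB figure: weights are roughly parameter count times bytes per parameter, plus overhead for the KV cache and activations. A quick sketch, with ballpark numbers rather than measurements of any specific model (the 20% overhead factor is an assumption):

```python
# Rough VRAM estimate for hosting an LLM alongside a game.
# Ballpark numbers only; the overhead_factor is an assumed fudge
# for KV cache and activations, not a measured value.

def llm_vram_gb(params_billions: float, bytes_per_param: float,
                overhead_factor: float = 1.2) -> float:
    """Weights (params * bytes/param) plus ~20% assumed overhead."""
    weights_gb = params_billions * bytes_per_param
    return round(weights_gb * overhead_factor, 1)

print(llm_vram_gb(13, 2.0))   # 13B model at fp16 → 31.2
print(llm_vram_gb(13, 0.5))   # same model, aggressive 4-bit quant → 7.8
print(llm_vram_gb(7, 0.5))    # small 7B model, 4-bit → 4.2
```

Even the 4-bit small-model cases eat several gigabytes that the game's own assets would otherwise use, which is the point being made above.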
What we might see is a game that uses a small neural net to match freeform player input to a dialogue tree. But that's nothing like fully LLM-driven dialogue.
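That matching step is cheap because the outputs stay hand-authored. A toy sketch of the idea: a real game would use a small trained sentence-embedding model, but here plain bag-of-words cosine similarity stands in, and all the node names and lines are made up for illustration:

```python
# Toy sketch: route freeform player input to a fixed, hand-written
# dialogue tree. Bag-of-words cosine similarity stands in for a small
# embedding model; node names and lines are hypothetical.
import math
from collections import Counter

DIALOGUE_NODES = {
    "ask_rumors": "Well, folks say the old mine is haunted...",
    "ask_directions": "The blacksmith? Two doors down, past the well.",
    "goodbye": "Safe travels, stranger.",
}

# Example phrasings a designer might attach to each node.
NODE_EXAMPLES = {
    "ask_rumors": "heard any rumors news gossip lately",
    "ask_directions": "where can i find the blacksmith directions",
    "goodbye": "farewell bye see you later",
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def respond(player_input: str) -> str:
    vec = vectorize(player_input)
    best = max(NODE_EXAMPLES,
               key=lambda n: cosine(vec, vectorize(NODE_EXAMPLES[n])))
    return DIALOGUE_NODES[best]

print(respond("any news or gossip around here"))
# → "Well, folks say the old mine is haunted..."
```

The key difference from the all-LLM vision: every line the NPC can ever say is still written, recorded, and QA'd in advance; the net only picks which one fires.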