I hadn't heard about Copilot telling people how to activate Windows 11 without a license, though. So I'll thank OP for that bit of levity.
Jesus_666
Yes; even in late 2001 I already saw a lot of agreement that "the terrorists have won". The post-9/11 era saw a large jump in executive power, deepened internal political division, an erosion of individual rights, and an upswing in fear and xenophobia. All of this led to decreased stability and that's exactly what the attacks were aiming for.
In the end the attacks only accelerated an ongoing process of decay but they did so very effectively.
It'll be marketed as Skyrim with all LLM text and end up as Oblivion with prefab text chunks.
Even disregarding the fact that current LLMs can't stop hallucinating and going off track (which seems to be an inherent property of the approach), they need crazy amounts of memory. If you don't want the game to use a tiny model with a bad quantization, you can probably expect to spend at least 20 gigs of VRAM and a fair chunk of the GPU's power on just the LLM.
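Back-of-the-envelope, the weight footprint is roughly parameter count times bytes per parameter, plus some headroom for the KV cache and activations. The model size and overhead figure here are illustrative assumptions, not numbers from any actual game:

```python
# Rough VRAM estimate for hosting an LLM alongside a game.
# The 2 GB overhead for KV cache/activations is an assumed flat allowance.
def vram_gb(params_billion: float, bytes_per_param: float, overhead_gb: float = 2.0) -> float:
    """Weights plus a flat allowance for KV cache and activations, in GB."""
    weights_gb = params_billion * bytes_per_param
    return weights_gb + overhead_gb

# A hypothetical 13B model at 8-bit quantization: 13 GB weights + 2 GB ≈ 15 GB.
print(vram_gb(13, 1.0))  # 15.0
# The same model at fp16 doubles the weight footprint: ≈ 28 GB.
print(vram_gb(13, 2.0))  # 28.0
```

And that's before the game itself gets any VRAM for textures and framebuffers.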
What we might see is a game that uses a small neural net to match freeform player input to a dialogue tree. But that's nothing like full LLM-driven dialogue.
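A toy sketch of that idea: map freeform input to the nearest canned node with a bag-of-words cosine similarity. The dialogue lines are made up for illustration, and a real game would use learned sentence embeddings rather than word counts:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def match_dialogue(player_input: str, tree_options: list[str]) -> str:
    """Map freeform player input to the closest node in a fixed dialogue tree."""
    query = Counter(player_input.lower().split())
    return max(tree_options, key=lambda opt: cosine(query, Counter(opt.lower().split())))

# Hypothetical dialogue-tree nodes:
options = [
    "ask about the dragon attacks",
    "ask about rumors in town",
    "say goodbye",
]
print(match_dialogue("tell me about the dragon attacks", options))
# -> "ask about the dragon attacks"
```

The point being: the tree's prewritten responses stay fully authored and QA-able; only the input matching is fuzzy.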
Last generation I got a 4080, hoping that it'd mean I wouldn't have to upgrade again for the next five years. That 4080 turned out to be unstable and had terrible drivers even on Windows so I got an XTX instead, which I'm very happy with.
I'd still want my GPU to last a while but I gotta be honest, I am curious about what kind of performance gains AMD have on tap this time around.
I know I sorted my feed by Top 6 Hours but that doesn't mean I expect six hours' worth of text in a single image. Did they copy and paste three different job postings together? Did they use an LLM that had its stop token configured incorrectly? Is it an attempt at weeding out people who object to having their time wasted by corporate bullshit?
We may never know. What we do know is that this wall of text has more red flags than a Chinese military parade.
That undersells them slightly.
LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They're good at that. Need something summarized? They can do that, too. Need a question answered? No can do.
LLMs can't generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they'll also happily generate complete bullshit answers, and to them there's no difference between those and real ones.
They're text transformers marketed as general problem solvers because a) the market for text transformers isn't that big and b) a general problem solver is what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.