I downloaded the 70B model and tried politically "naughty" questions. Even without the chatbot guardrails, it mostly says things the CCP would approve of, but you can trick it into being more honest (not super easy!). One interesting thing is that while it usually spews these canned blocks, for some politically sensitive questions ("is Taiwan part of China") it just spits out the answer.
I experimented with a local installation as well. The censored answers did not go through the chain-of-thought routine; they came back as instant answers instead. Follow-up questions, however, made it spill the beans rather quickly, giving out even more juicy details than I had initially asked for.
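If you want to check that "instant answer" behavior yourself, here's a minimal sketch against a local Ollama server. The model tag (`deepseek-r1:70b`) and the empty-`<think>`-block heuristic are my assumptions, not something verified in this thread:

```python
import json
import re
import urllib.request

# Assumes Ollama is running locally and the model has been pulled,
# e.g. `ollama pull deepseek-r1:70b`. The tag is an assumption.
def ask(prompt: str, model: str = "deepseek-r1:70b") -> str:
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

answer = ask("Is Taiwan part of China?")

# R1-style models wrap their reasoning in <think>...</think>. The
# "instant answer" behavior shows up as an empty or missing think block.
think = re.search(r"<think>(.*?)</think>", answer, flags=re.S)
print("went through chain-of-thought:", bool(think and think.group(1).strip()))
print(answer)
```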
Wait until they learn that OpenAI does the exact same thing. Try to get advice on how to crack software and see how far you get.
Cracking software isn't history being erased by a global superpower and replaced with fabrications tho
DeepSeek and all LLMs are massively overvalued, but also, isn't it fun to watch corpo media turn on a dime in service of shareholders to rebuild sentiment in tech stocks? Show's over folks, back to line-goey-uppy.
Tinfoil hat mode: Watch NVIDIA stock recover in a week like this wasn't some manufactured rug pull. Probably by friends of that Intel exec who went on the news talking about how he bought the dip. Not being super serious, but it'd hardly be the most blatant manipulation we've seen.
Wait, you think The Guardian is "corpo media" helping to rebuild the sentiment in Western tech stocks? Am I understanding your meaning correctly?
No. Do you mind telling me how many extra caveats would have made this conversation unnecessary?
All of them. All the caveats you got.
Lol that's not at all unique to DeepSeek. I remember recording my screen to see outputs on other models before they censored the message
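The programmatic version of that screen recording is to log each streamed token as it arrives, before any post-hoc moderation can swap the message out. A minimal sketch using the OpenAI streaming API (the model name and log path are placeholders of mine):

```python
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Your test prompt here"}],
    stream=True,
)

with open("stream.log", "a") as log:
    for event in stream:
        # Each event carries an incremental delta of the reply.
        delta = event.choices[0].delta.content or ""
        log.write(delta)
        log.flush()  # persist each token immediately, before any retraction
```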
Good thing DeepSeek is open
One thing that’s so weird to me is that while DeepSeek itself is somewhat too large to run on my PC, the mere fact that it exists makes it easy to distill the reasoning functionality into other smaller models. I’ve been running a 34B distill locally and it’s been much better than any other local model I’ve tried so far.
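For anyone who wants to try the same, here's a minimal sketch of loading one of the published R1 distills with Hugging Face transformers. The 32B Qwen variant is the closest published size to the 34B mentioned above, and the dtype/device choices are assumptions about your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# deepseek-ai/DeepSeek-R1-Distill-Qwen-32B is one of the published distills;
# bf16 + device_map="auto" assume a GPU (or several) with enough VRAM.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Why is the sky blue? Think it through."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=512)
# Print only the newly generated tokens, which include the <think> block.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```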