this post was submitted on 21 Feb 2025
73 points (89.2% liked)

Cybersecurity

[–] [email protected] 1 points 1 week ago (1 children)

This is the worst AI, just in case anyone is wondering. The only interesting thing is that you can watch it reason and think before it answers a question.

Otherwise, it's just trash. It makes huge mistakes, it hallucinates often, and it takes more processing power to do so than other popular AI alternatives.

[–] [email protected] 7 points 1 week ago* (last edited 1 week ago)

All LLMs lie; that's why it's important to verify whatever output you get. A GPT is essentially text prediction trained on a very large dataset: think of the times you've ended up sending "ducking autocorrect" in a text. Furthermore, DeepSeek has published distillations of many models. Which one do you have experience using?
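To make the "text prediction" point concrete, here's a deliberately tiny sketch (nothing to do with DeepSeek's actual architecture): a bigram model that predicts the next word purely from counts in its training text, exactly the way phone autocomplete picks "ducking." The corpus and function names are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data" -- a real LLM trains on trillions of tokens,
# but the principle is the same: learn which token tends to follow which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" (follows "the" twice; "mat"/"fish" once each)
```

Note that nothing here checks whether the predicted word is *true*; the model only knows what was statistically likely in its training data, which is exactly why output needs verifying.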

https://github.com/deepseek-ai/DeepSeek-R1

Edit: To add even more context: GPT and diffusion models are patently not AI, as they are not able to verify the output they're giving. They just feed tokens through an autoregressive algorithm; effectively, they're vector lookups with reinforced pathways. None of these "A.I." models are thinking or reasoning, yet.
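On the "tokens" point above, here's a minimal sketch of the idea (a hypothetical toy scheme, far simpler than real subword tokenizers): the model never sees text at all, only integer IDs looked up from a vocabulary, so there's no built-in mechanism for it to check whether the sequence it emits is factually correct.

```python
# Toy word-level tokenizer. Real LLM tokenizers (BPE, etc.) split on
# subwords, but the takeaway is the same: text in, numbers out.
vocab = {}

def tokenize(text):
    """Map each word to a stable integer ID, growing the vocab as we go."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)  # first-seen order assigns the ID
        ids.append(vocab[word])
    return ids

print(tokenize("the cat sat on the mat"))  # -> [0, 1, 2, 3, 0, 4]
```

Everything downstream of this step operates on those integers; "truth" is simply not a quantity the algorithm manipulates.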