Captain

joined 2 years ago
MODERATOR OF
[–] [email protected] 2 points 2 years ago (1 children)

Well done, congratz!

[–] [email protected] 1 points 2 years ago (1 children)

Awesome, congratulations!

I've heard good things about the AWS Security Specialty certificate too. I've done a course for it which was great, though I never bothered to take the certificate (I don't feel the need for it). Have you considered it?

 

A very interesting approach. Apparently it generates lots of results: https://twitter.com/feross/status/1672401333893365761

4
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
 

They used OpenSSF Scorecard to check the most starred AI projects on GitHub and found that many of them didn't fare well.

The article is based on the report from Rezilion. You can find the report here: https://info.rezilion.com/explaining-the-risk-exploring-the-large-language-models-open-source-security-landscape (any email name works, you'll get access to the report without email verification)

[–] [email protected] 2 points 2 years ago

My take so far is that there isn't really any great options to protect against prompt injections. Simon Wilson presents an idea here on his blog which could is a bit interesting. NVIDIA has open sourced a framework for this as well, but it's not without problems. Otherwise I've mostly seen prompt injection firewall products but I wouldn't trust them too much yet.

[–] [email protected] 2 points 2 years ago (1 children)

I think this post ended up in the wrong place, I suspect you meant to post it to https://infosec.pub/c/infosecpub

[–] [email protected] 2 points 2 years ago

Good points, and I agree!

The list is currently largely made to spark interest and discussion so it'll likely change a lot. What you mentioned is also brought up on the Brainstorming page. It seems likely that "Inadequate Alignment" will be removed from the list.

view more: next ›