0xCBE

joined 2 years ago

cross-posted from: https://infosec.pub/post/397812

Automated Audit Log Forensic Analysis (ALFA) for Google Workspace is a tool to acquire all Google Workspace audit logs and perform automated forensic analysis on the audit logs using statistics and the MITRE ATT&CK Cloud Framework.

By Greg Charitonos and BertJanCyber


We’ve made a few changes to the way we host and distribute our Images over the last year to increase security, give ourselves more control over the distribution, and most importantly to keep our costs under control [...]


This first post in a 9-part series on Kubernetes Security basics focuses on DevOps culture, container-related threats and how to enable the integration of security into the heart of DevOps.

[–] [email protected] 2 points 2 years ago

nice! I didn’t know this plant. I’ll try to find some.

[–] [email protected] 8 points 2 years ago (2 children)

I like basil. At some point I got tired of killing all my plants and started learning how to properly grow and care for greens, starting with basil.

It has plenty of uses and it requires the right amount of care, not too simple, not too complex.

I’ve grown it from seeds, cuttings, in pots, outside and in hydroponics.

[–] [email protected] 1 points 2 years ago

Maybe it's enough to make a pull request to the original CSS files here? I would guess the Lemmy devs would rather focus more on the backend right now.

[–] [email protected] 0 points 2 years ago (2 children)

great! Have you considered packaging this up as a full theme for Lemmy?

[–] [email protected] 1 points 2 years ago

nice instance!

[–] [email protected] 1 points 2 years ago

ahah thank you, we shall all yell together then

[–] [email protected] 4 points 2 years ago (1 children)

This stuff is fascinating to think about.

What if prompt injection is not really solvable? I still see jailbreaks for chatgpt4 from time to time.

Let's say we can't validate and sanitize user input to the LLM, so the LLM output also has to be treated as untrusted.

In that case security could only sit in front of the connected APIs the LLM is allowed to orchestrate. Would that even scale? How? It feels like we would have to reduce the nondeterministic nature of LLM outputs to a deterministic set of allowed possible inputs to the APIs... which is a castration of the whole AI vision?
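Something like this minimal sketch is what I have in mind (purely illustrative, with hypothetical action names and schemas): the LLM's proposed tool call is treated as untrusted text and checked against a fixed allowlist and strict parameter patterns before any real API is ever invoked.

```python
# Illustrative sketch only: treat the LLM's proposed action as untrusted input
# and validate it against a fixed allowlist/schema before calling any real API.
import json
import re

# Deterministic allowlist: the only actions the orchestrator will ever execute,
# each with a strict pattern for its arguments. (Hypothetical actions/params.)
ALLOWED_ACTIONS = {
    "lookup_ticket": {"ticket_id": re.compile(r"^[A-Z]{2,5}-\d{1,6}$")},
    "send_status":   {"channel":   re.compile(r"^#[a-z0-9_-]{1,32}$")},
}

def validate_llm_action(raw_llm_output: str) -> dict:
    """Parse an LLM-proposed tool call and reject anything unexpected."""
    try:
        proposal = json.loads(raw_llm_output)
    except json.JSONDecodeError as exc:
        raise ValueError("LLM output is not valid JSON") from exc

    action = proposal.get("action")
    params = proposal.get("params", {})

    schema = ALLOWED_ACTIONS.get(action)
    if schema is None:
        raise ValueError(f"Action not in allowlist: {action!r}")

    if set(params) != set(schema):
        raise ValueError(f"Unexpected or missing parameters for {action!r}")

    for name, pattern in schema.items():
        value = params[name]
        if not isinstance(value, str) or not pattern.fullmatch(value):
            raise ValueError(f"Parameter {name!r} failed validation")

    return {"action": action, "params": params}

# Only a well-formed, allowlisted proposal gets through to the API layer.
print(validate_llm_action('{"action": "lookup_ticket", "params": {"ticket_id": "SEC-1234"}}'))
```

The obvious downside is exactly the one above: every capability has to be enumerated and constrained up front, which is where the scaling question comes in.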

I am also curious what the state of the art is in protecting against prompt injection; do you have any pointers?

[–] [email protected] 1 points 2 years ago

👋 infra sec blue team lead for a large tech company
