laurel

joined 1 year ago
[–] [email protected] 5 points 1 year ago (2 children)

@ceo_of_monoeye_dating @p @Nerd02 @bmygsbvur @db0 @mint

It's not their model; it's an implementation of the OpenAI CLIP paper by some academics, hosted here: https://github.com/pharmapsychotic/clip-interrogator/

To be specific, they use one of the ViT-L/14 models.
This type of labeling model has been around for a long time. They used to be called text-from-image models or some similarly verbose description.
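For anyone curious, a rough sketch of what feeding an image through it looks like (based on the clip-interrogator README; exact config names can vary between versions):

```python
from PIL import Image
from clip_interrogator import Config, Interrogator

# Load the ViT-L/14 CLIP model (downloaded on first use)
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

image = Image.open("example.jpg").convert("RGB")
# Produces a text description built from the best-matching keywords
print(ci.interrogate(image))
```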

If the current generative models can produce porn, then they can also produce CSAM; there's no need to go through another layer.
The issue with models trained on actual illegal material is that they could then be reverse engineered to output the very same material they were trained on, in addition to very realistic generated images. It's similar to how LLMs can be used to extract potentially private information from their training data.

[–] [email protected] 4 points 1 year ago (4 children)

@p @Nerd02 @bmygsbvur @ceo_of_monoeye_dating @db0

Compared to what the feds use, yeah, but it is a way to leverage legal training material to detect illegal material.
Think of it like this: you have a model that detects pornographic content and another one that estimates the age of the people depicted. You run the image through both, and if the combined result is over some threshold you flag the image.
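Roughly like this, with nsfw_score() and apparent_age() standing in for whatever hypothetical models you'd actually plug in:

```python
# Hypothetical two-model pipeline; the functions and thresholds below are
# placeholders, not anything the actual tool ships with.
def flag_image(image, nsfw_threshold=0.8, age_threshold=18):
    nsfw = nsfw_score(image)    # probability the image is pornographic
    age = apparent_age(image)   # estimated age of the person depicted
    return nsfw > nsfw_threshold and age < age_threshold
```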

In this case they use an off-the-shelf general model that outputs a text description, and they just use the raw keyword weights without the sentence-generating phase.
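With plain CLIP that boils down to scoring the image against a list of text keywords, something like this (a sketch using the open_clip package; the keywords here are obviously placeholders, and the real tool's list and thresholds are different):

```python
import torch
import open_clip
from PIL import Image

# Same ViT-L/14 weights the interrogator uses
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-L-14", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-L-14")

keywords = ["a photo of a cat", "a photo of a dog"]  # placeholder keywords
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(keywords)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # "Raw keyword weights": cosine similarity between the image and each keyword
    weights = (image_features @ text_features.T).squeeze(0)

for kw, w in zip(keywords, weights.tolist()):
    print(kw, round(w, 3))
```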

[–] [email protected] 4 points 1 year ago (7 children)

@p @ceo_of_monoeye_dating @Nerd02 @bmygsbvur @db0

> I don't know if this tool downloads a model
It's just a model that provides text descriptions of the images fed to it. The tool then does some keyword searches on the output to detect illegal material.
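i.e. on top of the description it's basically just this (hypothetical keyword list; the real matching is presumably more involved):

```python
# Placeholder keyword check on the interrogator's text output
FLAGGED_KEYWORDS = {"keyword1", "keyword2"}

def is_flagged(description: str) -> bool:
    text = description.lower()
    return any(kw in text for kw in FLAGGED_KEYWORDS)
```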