There is already an AI that can identify deepfakes: https://hai.stanford.edu/news/using-ai-detect-seemingly-perfect-deep-fake-videos
There's also an AI that "detects" deepfakes even when they're real, and your political opponents will use it for plausible deniability.
Since those whose opinions you care about can't tell the difference between the two AIs, it doesn't really matter that one's real and the other is fake.
That is great news. However, deepfake generation and detection will become an arms race: generators can simply train against the latest detection software to evade it.
- There's also the possibility of false identification.
Deepfakes will likely become indistinguishable from real footage, even to neurodivergent people with hypersensitivities.
Will governments even attempt to reduce the destructive potential of deepfakes? I'm doubtful, considering political corruption.
They'll only start caring when someone deepfakes a high-ranking politician deep throating a massive horse cock and it begins to affect their electability. Even then they will be too old and technology illiterate to take any meaningful action on it.
I was thinking along similar lines.
It's a cool tool, but dangerous in the wrong hands.
I think it could help in movies. Movies will probably advance to the point where you can 3D-scan someone from the real world, or reconstruct them from a few images, and then make films starring artists who are long dead. Keanu Reeves said something similar in a Matrix 4 interview. Deepfakes could well play a role in that.
Governments only intervene when there is a direct threat, and a technology like deepfakes is not a threat per se as long as everyone can still tell what is real and what is not. If someone faked, say, a political meeting with a deepfake, you could still expose it by checking where the real people actually were, or simply asking them. In other words, such fakes are easy to bust.
In general, fakes get better, but the counter-measures for spotting them evolve too; it's a natural process. Governments also can't just ban a specific technology whenever they want: most deepfake software is open source, so forking it is easy and removing it after the fact is pointless.
I would not worry too much, dude.
In general, fakes get better, but the counter-measures for spotting them evolve too; it's a natural process.
Deepfakes often use a type of AI called a Generative Adversarial Network (GAN). Oversimplified: when you want to make a deepfake, you create two sets of AIs, one set to create deepfakes and one to detect them. They are pitted against each other, and only the best of each are used to derive the next versions of each type. This means both the generation and the detection methods get better the longer the system runs, and the deepfake becomes more convincing. Usually, though, the AIs are tuned to one specific instance: if you set out to create a deepfake that merges hypothetical people Jack and Jill, they can only make deepfakes of Jack and Jill, not Romeo and Juliet. For Romeo and Juliet, you would have to start the process all over again.
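The "pit them against each other, keep the best, derive new versions" loop described above can be sketched as a toy, evolution-style adversarial game on a one-dimensional task: a generator forges samples from a made-up "real" distribution, a discriminator tries to tell real from fake, and each round keeps only the best mutant of each. Everything here (the 1-D data, the window-based discriminator, the mutation scheme, all constants) is an illustrative assumption; production deepfake GANs use gradient-trained neural networks, not random mutation.

```python
import random
import statistics

random.seed(0)

REAL_MEAN, REAL_SD = 4.0, 1.0          # the "real" data distribution

def real_samples(n):
    return [random.gauss(REAL_MEAN, REAL_SD) for _ in range(n)]

# Generator: turns noise z ~ N(0, 1) into a sample via g(z) = a*z + b.
def generate(params, n):
    a, b = params
    return [a * random.gauss(0, 1) + b for _ in range(n)]

# Discriminator: calls a sample "real" if it falls inside a learned
# window [lo, hi]; its fitness is classification accuracy.
def disc_accuracy(window, reals, fakes):
    lo, hi = window
    hits = sum(lo <= x <= hi for x in reals)
    hits += sum(not (lo <= x <= hi) for x in fakes)
    return hits / (len(reals) + len(fakes))

def mutate(params, scale=0.3):
    return tuple(p + random.gauss(0, scale) for p in params)

gen = (1.0, 0.0)    # starts generating N(0, 1) -- far from the real data
disc = (3.0, 5.0)   # starts with a rough guess at the real data's range

for _ in range(300):
    reals = real_samples(64)
    fakes = generate(gen, 64)
    # 1. Keep the discriminator mutant best at separating real from fake.
    candidates = [disc] + [mutate(disc) for _ in range(10)]
    disc = max(candidates, key=lambda w: disc_accuracy(w, reals, fakes))
    # 2. Keep the generator mutant that fools that discriminator the most
    #    (i.e. drives its accuracy down).
    candidates = [gen] + [mutate(gen) for _ in range(10)]
    gen = min(candidates, key=lambda g: disc_accuracy(disc, reals, generate(g, 64)))

# The generator's output mean should have drifted toward REAL_MEAN.
print(round(statistics.mean(generate(gen, 1000)), 2))
```

Note how this also shows the "one specific instance" point: the trained parameters only forge this one distribution; for a different target you restart the whole loop.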
Keep in mind that deriving the new versions of the AIs is automated, so you can churn out convincing deepfakes of tons of people with very little human intervention. This is part of the concern with deepfakes: any script kiddie can make them. It's not limited to big-budget Hollywood studios or governments.
The use of GANs also raises the question of how exactly you develop a deepfake detector if you're external to the process of creating the deepfakes.
hate it
... rather I am worried about the response to malicious use of deepfakes by governments.
While this may be true in some cases, I would be more concerned about the use of deepfakes by political parties (not those directly in power, but the actual party offices) and misinformation operations to skew narratives.