this post was submitted on 19 Jan 2022

Asklemmy

A loosely moderated place to ask open-ended questions


I am worried about the criminal malice of deepfakes. I'm not worried about deepfakes being difficult to identify (though that is a possibility), since context is important; rather, I am worried about how governments will respond to malicious uses of deepfakes.

Will governments even attempt to reduce the destructive potential of deepfakes? I'm doubtful, given political corruption.


Deepfakes could be useful tools for people who have difficulty with neurotypical social communication, or could increase acceptance of non-neurotypical communication.

[โ€“] [email protected] 1 points 3 years ago* (last edited 3 years ago)

That is great news. However, deepfakes and deepfake detection will become an arms race: generators can simply train against detection software to evade it.

  • There's also the possibility of false identification, where authentic media gets flagged as fake.
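The arms-race dynamic can be sketched with a toy model. Everything here is an illustrative assumption, not real deepfake tooling: "media" is a single made-up feature score, the detector is just a threshold between the two observed means, and the generator simply shifts its output toward the real distribution each round. The point is only that detection accuracy decays as the generator adapts.

```python
import random

random.seed(0)

def real_sample():
    # Toy stand-in: "real" media score around 0.2 on some feature
    return random.gauss(0.2, 0.05)

fake_mean = 0.9  # the generator starts out easy to spot

def fake_sample():
    return random.gauss(fake_mean, 0.05)

caught_rates = []
for rnd in range(5):
    reals = [real_sample() for _ in range(200)]
    fakes = [fake_sample() for _ in range(200)]
    # Detector: threshold halfway between the two observed class means
    threshold = (sum(reals) / 200 + sum(fakes) / 200) / 2
    caught = sum(f > threshold for f in fakes) / 200
    caught_rates.append(caught)
    print(f"round {rnd}: threshold={threshold:.2f}, fakes caught={caught:.0%}")
    # Generator: move its output halfway toward the real distribution
    fake_mean += 0.5 * (sum(reals) / 200 - fake_mean)
```

Running it shows the caught rate falling round over round as the fake distribution converges on the real one, which is the rat-race worry in miniature: any fixed detector becomes a training signal for the next generation of fakes.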

Deepfakes will likely become imperceptible to humans, even to neurodivergent people with sensory hypersensitivities.