A move to combat the continued proliferation of bogus or manipulated clips ahead of the 2020 presidential election
Twitter plans to label deepfake videos under a new rule aimed at curbing the spread of fake and manipulated clips. The proposal is contained in a draft policy the social network published on its blog. With the U.S. presidential election approaching in November 2020, social platforms are clearly under pressure to counter the threat of manipulated media: deepfakes, which use artificial intelligence to create realistic videos in which a person appears to say or do something they never actually said or did.
Deepfakes, fake videos rendered indistinguishable from genuine footage thanks to artificial intelligence, are considered a danger if used, for example, to spread false information about politicians in the run-up to an election. So far, however, according to a recently published study, 98% of the deepfakes surveyed online chiefly involve women whose faces have been inserted into pornographic videos. The rules proposed by Twitter, which is accepting user comments until November 27, would have the social network attach a notice to tweets containing deepfakes and warn users before they share them. Videos would be removed only if deemed a threat to someone's physical safety or likely to lead to 'serious harm'.
Why are we doing this?
1. We need to consider how synthetic media is shared on Twitter in potentially damaging contexts.
2. We want to listen and consider your perspectives in our policy development process.
3. We want to be transparent about our approach and values.
– Twitter Safety (@TwitterSafety) October 21, 2019
Twitter will label deepfake videos: "If you turn to Twitter to understand what is happening in the world," the post reads, "we want you to have an idea of the context of the content you are seeing. Deliberate attempts to mislead or confuse people with manipulated media endanger the integrity of the conversation."