The #WomenBoycottTwitter protest prompted Twitter CEO Jack Dorsey to promise new rules to prevent harassment and abuse on Twitter
Harassment and abuse on Twitter should have their days numbered. Twitter sent an email to its Trust & Safety Council explaining how the company intends to limit abusive behavior, after the #WomenBoycottTwitter, #MeToo, and #QuellaVoltaChe protests pushed the CEO to promise a change.
New rules against abuse on Twitter
Twitter is providing more details on the review of its harassment policies after a high-profile protest and a series of tweets from CEO Jack Dorsey, in which he stated that some changes would come soon.
6/ We decided to take a more aggressive stance in our rules and how we enforce them.
— jack (@jack) October 14, 2017
The company updated its Trust & Safety Council on the changes made to its content policies. The changes include allowing users who have received unwanted sexual advances on the social network to report them. Twitter also wants to prohibit "creep shots" and hidden-camera content under the category of "non-consensual nudity".
Twitter has given its Trust & Safety Council a list of new rules that it intends to enforce to combat abuse on Twitter.
Twitter abuse: an image will hide violent content
The company intends to hide hate symbols behind an image that warns of unpleasant content, but has not yet clarified what it considers a hate symbol.
In its email to the Trust & Safety Council, Twitter stated that it could also take unspecified actions against "organizations that use / have historically used violence as a means to advance their cause".
The update came four days after Dorsey's tweets about how the social network could change the way it monitors content to protect its 328 million users from online bullying and harassment. Dorsey's tweets came in response to the #WomenBoycottTwitter protest, which urged women to give up Twitter for a day to pressure the company into improving the way it moderates content.
On abuse on Twitter, the company added that it is moving fast to make changes.
"We hope our approach and upcoming changes, as well as our collaboration with the Trust & Safety Council, show how seriously we are rethinking our rules and how quickly we are moving to update our policies and how we enforce them," the company said.
The Anti-Defamation League, which is working with Twitter and other technology companies to combat abuse, praised the move.
"We are pleased to see that Twitter is responding with concrete new actions, including aggressive enforcement of the rules and hiding symbols of hatred," said ADL CEO Jonathan Greenblatt. "Given the seriousness of the threat, there is much to do," he stressed.
Stephen Balkam, CEO of the Family Online Safety Institute, a member of Twitter's Trust & Safety Council, said Tuesday that he was impressed by the changes and that he intends to ask the council, a group of over 60 organizations and experts working to prevent abuse, to meet and review the changes "as soon as possible".
"This is just another indication of the maturity of the company," Balkam said. "I would like to have a full and robust discussion on the changes and on what else needs to be done."
Dorsey said in his tweets on Friday that the changes would take effect in the "coming weeks", and Balkam referred to a similar time frame.
Abusive behavior has been a scourge for the social network for years; some particularly ugly episodes took place last year, including the mass harassment of Leslie Jones, star of last summer's "Ghostbusters" movie.
Twitter's email to the Trust & Safety Council
Dear members of the Trust & Safety Council, I'd like to follow up on Jack's tweetstorm on Friday night about the upcoming policy and enforcement changes. Some of these have already been discussed with you in previous conversations about updating the Twitter rules. Others are the result of internal conversations we had over the past week.
Here is some more information on the policies Jack mentioned, along with some other updates that will launch in the coming weeks.
Non-consensual nudity
Today we treat people who originally post non-consensual nudity the same way we treat people who may have tweeted the content unaware of its nature. In both cases, people are required to delete the tweets in question and their accounts are temporarily locked. They are permanently suspended if they post non-consensual nudity again.
We will immediately and permanently suspend any account we identify as the original poster/source of non-consensual nudity, and/or if a user makes it clear that they are intentionally posting the content to harass their target.
We will do a full review of the reporting account's target whenever we receive a tweet-level report about non-consensual nudity. If the account appears dedicated to posting non-consensual nudity, we will immediately suspend the entire account.
Our definition of "non-consensual nudity" is expanding to more broadly include content such as covertly taken photos, "creep shots", and hidden-camera content. Since the people who appear in this content often don't know the material exists, we don't need a report from the target to remove it. While we recognize that there is a whole genre of pornography dedicated to this type of content, it is nearly impossible to distinguish content that was created consensually from content that was not. We would rather err on the side of protecting victims and remove this type of content as soon as we become aware of it.
Unwanted sexual advances
Pornographic content is generally permitted on Twitter, and it is difficult to know when sexual conversations and/or exchanges of sexual material are wanted. To help determine whether a conversation is consensual, we currently reach out and take enforcement action only if and when we receive a report from a participant in the conversation.
We will update the Twitter rules to make it clear that this type of behavior is unacceptable. We will continue to take enforcement action when we receive a report from someone directly involved in the conversation. Once our user-reporting improvements are live, we will also use past interaction signals (for example, blocks, mutes, etc.) to determine whether something may be unwanted and act accordingly.
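The signal-based approach described above can be sketched as a simple heuristic. This is a hypothetical illustration only: the class, function names, and signals used here are invented for the sketch, and Twitter has not published the actual model it uses.

```python
# Hypothetical sketch: flagging likely-unwanted contact from past
# interaction signals (blocks, mutes), as the email describes.
# All names and logic here are assumptions, not Twitter's real system.
from dataclasses import dataclass, field

@dataclass
class InteractionHistory:
    """Past signals a recipient has generated about other users."""
    blocked: set = field(default_factory=set)  # user IDs the recipient blocked
    muted: set = field(default_factory=set)    # user IDs the recipient muted

def likely_unwanted(recipient: InteractionHistory, sender_id: str) -> bool:
    """Flag a message as likely unwanted if the recipient previously
    blocked or muted the sender. A real system would weigh many signals."""
    return sender_id in recipient.blocked or sender_id in recipient.muted

# Example: a recipient who has muted user "u42"
history = InteractionHistory(muted={"u42"})
print(likely_unwanted(history, "u42"))  # True: prior mute is a signal
print(likely_unwanted(history, "u99"))  # False: no prior signal
```

In practice such signals would only be one input among many (report history, account age, message content), but the sketch shows why blocks and mutes are useful: they are explicit, user-generated statements that contact is unwanted.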
Symbols and images of hatred (new)
We are still defining the exact scope of what will be covered by this policy. Broadly speaking, hate symbols and hateful imagery will now be considered sensitive media (similar to how we handle adult content and graphic violence).
More details will follow.
Violent groups (new)
We are still defining the exact scope of what will be covered by this policy. Generally, we will take enforcement action against organizations that use / have historically used violence as a means to advance their cause.
Further details will follow here as well (including more on the factors we will consider to identify these groups).
Tweets that glorify violence (new)
We already take enforcement action against direct violent threats ("I will kill you"), vague violent threats ("someone should kill you"), and wishes/hopes of serious physical harm, death, or disease ("I hope someone kills you"). Going further, we will also act against content that glorifies violence ("Praise for the shooting. He's a hero!") and/or condones it ("Assassination…").
More details will follow.
We recognize that a more aggressive policy and enforcement approach will result in the removal of more content from our service. We are comfortable making this decision, provided that we only remove abusive content that violates our rules. To ensure that is the case, our product and operations teams are investing heavily in improving our appeals process and response times for reviews.
In addition to launching new policies, updating enforcement processes, and improving our appeals process, we need to do a better job of explaining our policies and setting expectations for acceptable behavior on our service. In the coming weeks we will be:
updating the Twitter rules as previously discussed (+ adding these new policies)
updating Twitter's media policies to explain what we consider to be adult content, graphic violence, and hate symbols
launching a new Help Center page specific to this policy area, describing each policy in more detail, providing examples of what crosses the line, and outlining the range of options available for responding to violations
updating the language we show to people who violate our policies (what we say when accounts are locked, suspended, appealed, etc.)
We have a lot of work ahead of us and will certainly be turning to all of you for guidance in the coming weeks. We will do our best to keep you updated on our progress.
Best wishes, Head of Safety Policy