Twitter on Tuesday announced it would broaden its "hateful conduct" policy to include content that "dehumanises others" or treats someone as less than human.
The micro-blogging site has spent three months developing a policy to address language that dehumanises people based on their membership in an identifiable group, even when the material doesn't name a direct target.
"Some of this content falls within our hateful conduct policy (which prohibits the promotion of violence against or direct attacks or threats against other people on the basis of race, ethnicity, national origin, sexual orientation, gender, religious affiliation, age, disability or serious disease).
"But there are still Tweets that many people consider to be abusive, even when they do not break our rules.
"Better addressing this gap is part of our work to serve a healthy public conversation," Vijaya Gadde and Del Harvey from Twitter said in a blog post.
Scholars have long examined the relationship between dehumanisation and violence.
Twitter is now asking its 336 million users for feedback to understand how the policy may affect different communities and cultures.
"For languages not represented on our platform, our policy team is working closely with local non-governmental organisations and policymakers to ensure their perspectives are captured," said the blog post.
Users have until October 9 to provide Twitter with feedback on the new policy.
"We're experimenting with a new way to write and roll-out policy and rules. Let us know what you think," tweeted CEO Jack Dorsey.