Twitter Inc has announced that it will prompt users whenever they reply to a tweet using “offensive or hurtful language”. The company said in a tweet that this is an effort to clean up conversations on the platform. When users hit “send” on a reply, they will be told if the words in their tweet are similar to those in posts that have been reported, and asked whether they would like to revise it before posting.
The platform has been under pressure to clean up hateful and abusive content, which is policed by users reporting rule-breaking tweets and by technology.
“We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” Sunita Saligram, Twitter’s global head of site policy for trust and safety, said in an interview with Reuters. She added that the option is targeted at the majority of rule breakers who are not repeat offenders.
Twitter’s policies do not allow users to target individuals with racist or sexist tropes, slurs, or degrading content. The company took action against nearly 396,000 accounts under its abuse policies and more than 584,000 accounts under its hateful conduct policies between January and June of last year.
The company said the experiment, the first of its kind for Twitter, will start on Tuesday and last at least a few weeks. It will run globally, but only for English-language tweets.