Microblogging site Twitter announced on Thursday that it is testing a new feature called Safety Mode, which aims to reduce disruptive interactions.
Safety Mode temporarily blocks accounts for seven days for using potentially harmful language — such as insults or hateful remarks — or for sending repetitive and uninvited replies or mentions.
“When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the tweet’s content and the relationship between the tweet author and replier,” the company said in a statement.
“Our technology takes existing relationships into account, so accounts you follow or frequently interact with will not be auto-blocked,” it added.
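The logic described in these two statements — score a reply for likely harm, but exempt accounts the user follows or frequently interacts with — can be sketched as a simple decision function. Twitter has not published its model or thresholds, so everything here (the `Reply` fields, the `toxicity_score` signal, and the cutoff value) is a hypothetical illustration of the behavior the company describes, not its actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

BLOCK_DURATION = timedelta(days=7)  # Safety Mode blocks last seven days
TOXICITY_THRESHOLD = 0.8            # assumed cutoff; Twitter discloses no number

@dataclass
class Reply:
    author: str
    toxicity_score: float     # hypothetical model output in [0, 1]
    author_is_followed: bool  # "accounts you follow ... will not be auto-blocked"
    frequent_interaction: bool

def should_auto_block(reply: Reply) -> bool:
    """Sketch of the described decision: flag likely-harmful replies,
    but never auto-block existing relationships."""
    if reply.author_is_followed or reply.frequent_interaction:
        return False
    return reply.toxicity_score >= TOXICITY_THRESHOLD

def block_until(now: datetime) -> datetime:
    """Auto-blocks are temporary: they expire after seven days."""
    return now + BLOCK_DURATION
```

For example, a high-scoring reply from a stranger would be auto-blocked, while the same reply from a followed account would not.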
The company said that it is rolling out this safety feature to a small feedback group on iOS, Android, and Twitter.com, beginning with accounts that have English-language settings enabled.
For those in the feedback group, Safety Mode can be enabled in Settings under Privacy and safety.
“Authors of tweets found by our technology to be harmful or uninvited will be auto-blocked, meaning they will temporarily be unable to follow your account, see your tweets, or send you Direct Messages,” the company said.
Users can find information about the tweets flagged through Safety Mode and view the details of temporarily blocked accounts at any time. Before each Safety Mode period ends, users will receive a notification recapping this information.
“We will observe how Safety Mode is working and incorporate improvements and adjustments before bringing it to everyone on Twitter,” the company said.