Twitter is updating its “be nice, think twice” system, which prompts users to reconsider when they’re about to tweet a “potentially harmful or offensive” reply. The upgraded feature, Twitter claims, is better at spotting “strong language”; is more aware of vocabulary that has been “reclaimed by underrepresented communities” and is used in non-harmful ways; and now takes into account your relationship with the person you’re replying to.
Twitter says it’s made improvements over the last year to reduce cases where people might see the prompts unnecessarily. For example, its algorithms now factor in “the nature of the relationship between the author and replier,” since people who know each other may be more likely to joke around or communicate differently than strangers would. The system can also “better account for situations in which language may be reclaimed by underrepresented communities and used in non-harmful ways.”
The prompts are one of several updates Twitter has made to reduce bullying and harassment and spur “healthier conversations.” The company notes that tests of the feature have shown some success, with 34 percent of people who received prompts opting to reconsider their original reply. That may not sound particularly high, but Twitter notes the prompts may have had downstream effects too, resulting in fewer offensive replies in the future.