Twitter improves the conversation environment by tackling 'trolling'


In March, Twitter introduced a new approach to improve the tone of the public conversation on the platform. An important element now being addressed is so-called "trolls."

Troll behavior can be playful, well-intentioned and humorous. But it can also disturb and distract from the public conversation on Twitter, especially within conversations and in search. Some of these Tweets and accounts also violate Twitter's policies, which requires action to be taken. Another part does not violate the rules but still disrupts conversations between users.
To put this in context: less than 1% of all accounts make up the majority of accounts that are reported, and a large part of these reported accounts do not actually violate Twitter's rules. Although these accounts are a small fraction of the total, they have a disproportionately large, and negative, influence on people's experience on Twitter. The challenge is therefore: how can Twitter proactively deal with disruptive behavior that does not violate the rules but has a negative influence on the tone of a conversation?

A new approach

Today, policy, manual review processes and machine learning determine how Tweets are organized and presented in, for example, conversations and search results. Twitter's priority now lies in tackling behavior that distracts from and disrupts the public conversation.
Therefore, new behavioral signals are being integrated into how Tweets are presented. By approaching this type of behavior from a behavioral perspective, the tone of the conversation can be improved without Twitter having to wait for users to report inappropriate behavior or potential problems.
The integration involves many new signals, most of which are not externally visible. A few examples: an account that has not confirmed its email address, the same person registering multiple accounts at the same time, accounts that repeatedly tweet at and mention accounts that do not follow them, or behavior that may indicate a coordinated attack. Twitter also looks at how accounts are connected to accounts that violate the rules, and how accounts interact with one another.
These signals form an instrument for determining how conversations and search results are arranged. Because this content does not violate Twitter's policies, it remains available when the user taps "Show more replies" or chooses to see everything in search results. The result is that people who contribute to a healthy conversation are more visible in conversations and in search.
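To illustrate the idea of ranking on behavioral signals rather than removing content, here is a minimal sketch in Python. The signal names, weights and threshold are illustrative assumptions for this example only; Twitter has not published its actual model.

```python
from dataclasses import dataclass

# Hypothetical per-account behavioral signals; names and weights are
# illustrative assumptions, not Twitter's real scoring model.
@dataclass
class AccountSignals:
    email_confirmed: bool
    simultaneous_signups: int       # accounts registered by the same person at once
    unsolicited_mentions: int       # repeated mentions of accounts that don't follow back
    linked_violating_accounts: int  # connections to accounts that violate the rules

def disruption_score(s: AccountSignals) -> float:
    """Combine signals into one score; higher means more likely disruptive."""
    score = 0.0
    if not s.email_confirmed:
        score += 1.0
    score += 0.5 * s.simultaneous_signups
    score += 0.3 * s.unsolicited_mentions
    score += 1.5 * s.linked_violating_accounts
    return score

def arrange_replies(replies, threshold=2.0):
    """Split replies into visible ones and ones tucked behind a
    'Show more replies' control; nothing is deleted."""
    visible = [r for r in replies if disruption_score(r["signals"]) < threshold]
    hidden = [r for r in replies if disruption_score(r["signals"]) >= threshold]
    return visible, hidden

# Example usage with two hypothetical replies
replies = [
    {"text": "Thoughtful reply", "signals": AccountSignals(True, 0, 0, 0)},
    {"text": "Spammy reply", "signals": AccountSignals(False, 3, 10, 1)},
]
visible, hidden = arrange_replies(replies)
print([r["text"] for r in visible], [r["text"] for r in hidden])
```

The key design point the sketch mirrors is that low-ranked content is reordered, not removed: it stays reachable behind the "Show more replies" control.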

Results

The results of the first international test round show that the new approach has a positive effect. Abuse reports from search have decreased by 4%, and abuse reports from conversations have decreased by 8%. This means that fewer people are seeing Tweets that disrupt their experience.
But there is still a long way to go. This approach is only part of Twitter's broader work to improve the tone of the conversation and everyone's experience. Over time, both the technology and the team will make mistakes, but they will also learn from them. There will be false positives and things that are overlooked. Twitter hopes to learn quickly and make its processes and tools smarter, and in the meantime the company aims to be transparent about the mistakes it makes and the progress it achieves.
The results achieved so far are encouraging. Still, Twitter acknowledges that this is only one step in a long process to improve the overall health of the service and the experience on it.
