In April, Twitch announced it would start banning users for behaviour away from its site. The move by Amazon.com Inc's live-streaming platform involved hiring a law firm to investigate users' misconduct, a new twist in the most prominent example yet of tech companies acting on "off-service" behaviour.
How platforms enforce against activities conducted not just on their services, but on other sites and offline, is often only described vaguely in their rules.
But as lawmakers and researchers examine tech’s relationship with real-world violence or harm, this moderation is gaining attention. While some groups have praised platforms for being proactive in protecting users, others criticise them for infringing on civil liberties.
“This isn’t content moderation, this is conduct moderation,” said Corynne McSherry, legal director at the digital rights group Electronic Frontier Foundation, who said she was concerned about platforms that struggle to effectively moderate content on their own sites extending their reach.
In interviews, platform policy chiefs described how they drew different lines around off-service actions that could impact their sites, acknowledging a minefield of challenges.
“Our team is looking across the web at a number of different platforms and channels where we know that our creators have a presence…to understand as best as possible the activities that they’re engaging in there,” said Laurent Crenshaw, policy head at Patreon, a site where fans pay subscriptions for creators’ content.
Facebook Inc’s rules ban users it deems dangerous, including those involved in terrorist activity, organised hate or criminal groups, convicted sex offenders and mass murderers. People who have murdered a single person are generally allowed, a spokesperson said, because of the volume of such crimes.
In 2020, Facebook expanded the list to include “militarised social movements” and “violence-inducing conspiracy networks” like QAnon.
Twitch’s new rules say it may ban users for “deliberately acting as an accomplice to non-consensual sexual activities” or actions that would “directly and explicitly compromise the physical safety of the Twitch community,” categories which a spokesperson said were intentionally broad.
Twitch’s change in policy largely stemmed from the gaming industry’s #MeToo moment in summer 2020, when the site saw harassment at real-life gaming events and on sites like Twitter and Discord, Chief Operating Officer Sara Clemens told Reuters.
Looking beyond their own sites has helped companies remove extremists and others who have “learned the hairline cracks” in site rules to stay online, said Dave Sifry, vice president of the Anti-Defamation League’s Center for Technology and Society, which has pushed for major platforms to incorporate this behaviour into decisions.
Self-publishing site Medium established off-service behaviour rules in 2018, after realising attendees of the August 2017 white nationalist rally in Charlottesville who had not broken rules on specific sites appeared to be “bad actors on the internet in general,” it said.
Last summer’s protests over the murder of George Floyd prompted Snap Inc to talk publicly about off-platform rules: CEO Evan Spiegel announced Snapchat would not promote accounts of people who incite racial violence, including off the app.
In December 2020, TikTok updated its community guidelines to say it would use information available on other sites and offline in its decisions, a change that a spokesperson said helped it act against militia groups and violent extremists.
Notably, when sites including Facebook, Twitter Inc and Twitch banned former United States President Donald Trump in 2021, they took into account his off-service actions, which led to his supporters storming the US Capitol on 6 January.
From murder to money laundering
Tech companies differ in approaches to off-platform behaviour and how they apply their rules can be opaque and inconsistent, say researchers and rights groups.
Twitter, a site where white nationalists like Richard Spencer continue to operate, focuses its off-service rules on violent organisations, global director of public policy strategy and development Nick Pickles said in an interview.
Other platforms described specific red-flag activities: Pinterest, which took a hard-line approach to health misinformation, might remove someone who spreads false claims outside the platform, policy head Sarah Bromma said.
Patreon’s Crenshaw said while the subscription site wanted to support rehabilitated offenders, it might prohibit or have restrictions around convicted money launderers or embezzlers using its platform to raise money.
Sites also diverge on whether they will ban users solely for off-service activity or whether on-site content must be linked to the offence.
Alphabet Inc’s YouTube says it requires users’ content to be closely linked to a real-world offence, but it may remove users’ ability to make money from their channel based on off-service behaviour. It recently did this to beauty influencer James Charles for allegedly sending sexually explicit messages to minors.
Charles’ representatives did not respond to requests for comment.
In a statement posted on Twitter in April, he said he had taken accountability for conversations with individuals he believed were over 18, and said his legal team was taking action against people who spread misinformation.
Deciding which real-life actions or allegations require online punishments is a thorny area, say online law and privacy experts.
Linking the activity of users across multiple sites is also difficult for reasons including data privacy and the ability to attribute actions to individuals with any measure of certainty, say experts.
But that has not deterred many companies from expanding the practice.
Twitch’s Clemens said the site was initially focusing on violence and sexual exploitation, but it planned to add other off-site activities to the list: “It’s incremental by design,” she said.