Twitter, which was officially acquired by Elon Musk last week, has frozen access to content moderation tools for all but a handful of employees, Bloomberg reports. As the U.S. midterm elections near, the freeze may hamper staff's ability to curb misinformation.
The news follows a string of upheavals at the company since Musk took over, including the firing of top executives, plans for additional layoffs, and an impending $20 price tag for Twitter verification.
People familiar with the matter told Bloomberg that some employees in Twitter's Trust and Safety organization can no longer alter or penalize accounts that break rules on misleading information, offensive posts and hate speech. The exception is high-impact violations that risk real-world harm.
Twitter staff currently use agent tools to take actions like suspending accounts that have breached company policy. While offending content can be flagged automatically or reported by users, human input via the moderation tools is required to actually take down accounts.
Access to the moderation tools has been limited since last week, the people told Bloomberg, though some staff were able to enforce policies in a limited capacity during Brazil’s high-stakes presidential runoff election on Sunday. They also said Musk asked the team to review the company’s hateful conduct policy, specifically a section that penalizes users for “targeted misgendering or deadnaming of transgender individuals.”
Last week, Musk announced the formation of a “content moderation council with widely diverse viewpoints.” However, Twitter’s new owner didn’t go into detail on who would make up that council, how membership would be determined, or what aspects of “content” it would oversee. He also tweeted that “[no] major content decisions or account reinstatements will happen before that council convenes” and, in a separate post, stated that he hadn’t made “any changes to Twitter’s content moderation policies” at the time.
Bloomberg reports that the move to restrict content moderation access is part of a “broader plan to freeze Twitter’s software code to keep employees from pushing changes to the app during the transition to new ownership.” While that level of access is typically granted to hundreds of people, it was limited to about 15 people last week.
The drastic change could affect the Trust and Safety team’s ability to both monitor and enforce moderation policies ahead of the U.S. midterm elections on Nov. 8.
On Monday, Yoel Roth, Twitter’s head of safety and integrity, addressed a surge in hate speech, including a 1,700% spike in the use of a racist slur on the platform, after news broke that Musk had closed the Twitter deal. “Since Saturday, we’ve been focused on addressing the surge in hateful conduct on Twitter,” wrote Roth. “We’ve made measurable progress, removing more than 1500 accounts and reducing impressions on this content to nearly zero.”
Twitter’s Trust and Safety team enforces many of the same policies former President Donald Trump violated as he used the platform to sow misinformation and spread distrust in the election results, actions that got him permanently booted from the social network. Whether Musk will do away with permanent bans has not been confirmed. And while Musk promised not to turn Twitter into a “free-for-all hellscape,” we’re not holding our breath.