Australia has slammed Elon Musk-owned X for failing to stem harmful content on its platform and not fully complying with a legal notice served to the platform.

The Australian eSafety Commissioner said in its latest transparency report that X has made “deep cuts” to its trust and safety teams since it was acquired in October 2022 by the tech billionaire.

Globally, X had reduced its trust and safety staff by 30 percent, and by 45 percent in the Asia-Pacific region. Engineers focused on trust and safety issues globally were cut by 80 percent, while content moderators employed by X were cut by 52 percent, according to eSafety.

X cut its public policy staff globally by 68 percent and public policy staff in the APAC region by 73 percent.

“Adequate resourcing of trust and safety functions is important to ensure online safety. Companies with low numbers of trust and safety personnel may have reduced capacity to respond to online hate, as well as other online harms,” eSafety said in its report.


The result is that the burden for safety tends to fall on the user or group experiencing the abuse, rather than on the platform taking responsibility for harmful content and conduct on its service, it added. In answer to a question about whether Twitter had staff dedicated to hateful conduct issues, X stated that there were no full-time staff specifically and singularly dedicated to hateful conduct issues globally, and no specific team for this policy.


“It (X) said that instead, a broader cross-functional team has this in scope and collaborates on a set of policies that are related to toxicity more broadly,” the report mentioned.

In its response to the legal notice, X confirmed that Twitter’s Trust and Safety Council was disbanded in December 2022 and that the company “had not replaced the Trust and Safety Council with another advisory body … on matters relating to the safety of users, including hateful conduct”.

Regarding X’s paid subscription service ‘Blue’, the Centre for Countering Digital Hate (CCDH) has reported that X failed to act on 99 percent of content involving ‘Twitter Blue’ accounts that CCDH considered to be hate and reported to the service. In response to a question regarding how tweets from Twitter Blue accounts are treated, X Corp stated that no accounts are artificially or manually amplified.