Opening salvo seeks information on Meta, Google, Twitter & TikTok’s censorship and content moderation practices
Texas Insider Report: (WASHINGTON, D.C.) – Senator Ted Cruz (R-Texas), Ranking Member of the Senate Committee on Commerce, Science & Transportation and member of the Senate Judiciary Committee, today sent a letter to social media companies Meta, Google, Twitter, and TikTok, launching an oversight investigation into these companies’ use of recommendation algorithms and their reported use of “blacklists,” “de-emphasizing,” and other means of “reduced distribution” of content from users, including many conservatives.
As part of his investigation, Sen. Cruz is asking for Meta, Google, Twitter, and TikTok to provide information and documents regarding:
- the scope of these companies’ content recommendation systems,
- the effect on distribution of content,
- manual intervention into the recommendations process,
- how political speech is treated, and
- what protocols for transparency and due process currently exist regarding these algorithms.
Senator Cruz said there are countless examples of content moderation that has explicitly targeted right-leaning – or simply non-mainstream – views:
- Facebook employees routinely suppressed news stories of interest to conservatives,
- Google demonetized The Federalist because of comments posted by third-party users, and
- Twitter labelled factually accurate content about COVID as “misleading”.
“As you are well aware, social media companies rely on algorithms to not only moderate content, but also to surface personalized recommendations to users. Recommendation systems play an increasingly ubiquitous role in selecting content for individual consumption, including by promoting some content, using product design elements to prominently display recommendations, and downranking or filtering disfavored content and accounts[…]
“The design of these systems is especially important in light of the Gonzalez v. Google LLC case before the U.S. Supreme Court this term, which concerns whether Section 230 immunizes platforms when they make targeted recommendations of third-party information.
“Recommendation systems are separate and distinct from algorithms that rank or otherwise organize content that a user is already following or subscribed to. Taken as a whole, these systems have an outsized impact — whether positive or negative — on the reach of content and accounts and, by extension, speech[…]
“At their best, recommendations help users discover interesting or relevant content that they might not otherwise find on a platform. However, recommendation systems can also fuel platform addiction by feeding users an essentially infinite stream of content. This can be especially dangerous when recommendations make it easier for vulnerable users, especially teenagers, to find objectively harmful content, such as content that promotes eating disorders and self-harm[…]
“In addition to my concerns about the addictive nature of these systems, I am equally concerned with how censorship within recommendations impacts the distribution of speech online. In a world where seven out of ten Americans receive their political news from social media, the manner in which content is filtered through recommendation systems has an undeniable effect on what Americans see, think, and ultimately believe.”
- Read the full text of Sen. Cruz’s letter here.
- Many Americans are rightly concerned about Big Tech’s pervasive deployment of viewpoint censorship online. As the technology has evolved, so too has the arsenal of tools by which social media companies can conduct censorship.
- In addition to deleting content and accounts, companies like Meta, Google, Twitter, and TikTok also employ subtler tools such as “blacklists,” “de-emphasizing,” and other forms of “reduced distribution” of content.
- The suppression of speech that happens across the world’s largest social media platforms is breathtaking in its scope, near-uniformity, and sheer scale. The “Trust and Safety” apparatuses at these companies were originally brought in to tackle truly harmful and dangerous content.
- Indeed, the goal of permitting some level of content moderation to keep users safe is part of the reason that Congress in 1996 passed legislation to provide a safe harbor from civil liability for “good faith” actions to restrict access to content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable”.
- Notably, nowhere does Section 230 provide a good faith carve-out for Trust and Safety teams to censor opinion. However, the moderation that “Trust and Safety” teams do today strays far outside the good faith boundaries originally prescribed in Section 230.
Previously, Sen. Cruz joined an amicus brief in the Gonzalez v. Google case arguing that courts have incorrectly expanded the scope of Section 230 immunity from civil liability, allowing Big Tech companies to escape scrutiny for their targeted recommendations.