
StartMag

Why are Google, Facebook and Twitter accused of terrorism in the US?


In the United States, Google, Facebook and Twitter are facing terrorism lawsuits that could change the rules of the Internet. Here's how and why, according to an article in El País.

Two terrorism lawsuits against Google, Facebook and Twitter could change the rules of the Internet. The search engine, El País reports, warns of the risk of a "dystopia".

On Tuesday and Wednesday, lawyers for Google, Facebook and Twitter appear before the US Supreme Court to defend their companies. With them, the future of the Internet has an appointment before the justices. Two hearings are scheduled: Gonzalez v. Google and Twitter v. Taamneh. At issue is the scope of Section 230, the rule that became the cornerstone of the Internet as we know it today. It empowers tech companies to moderate their users' content while at the same time shielding them from liability for that content.

Both cases are related to terrorism, and the question they raise is whether social networks such as YouTube (owned by Google), Facebook and Twitter bear any responsibility for preventing the spread of terrorist propaganda online. The choice of these two cases suggests the justices want to qualify the exemption from liability that the law grants technology companies for third-party content.

The key sentence of Section 230 of the Communications Decency Act reads: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." On this basis, the platforms are exempt from liability for their users' content. The law dates from 1996, when Internet companies were still small and it seemed appropriate to protect them.

The rule applies to social networks such as Facebook, YouTube, Twitch or Twitter, but its reach goes much further. Many features of Google, TripAdvisor, Yelp, Reddit, Craigslist, Apple or Microsoft depend in some way on their users' contributions, and this liability protection has been key to their success. These companies have lined up in court in a common front to defend their position.

Nohemi Gonzalez, a 23-year-old American student, was one of 130 people killed by ISIS terrorists in the series of attacks that hit Paris on November 13, 2015, at the Bataclan and other sites in the French capital. Gonzalez was killed in a restaurant where she was having dinner that evening, and her relatives are suing Google.

Reynaldo Gonzalez argues that YouTube does not play a merely passive role, simply letting users search for what to watch: its algorithm recommends videos based on each user's history. As a result, those who watched Islamist propaganda videos received more such content, facilitating their radicalization. Nohemi's relatives complain that the company, now part of the Alphabet group, allowed the dissemination of radical propaganda videos inciting violence. The victim's family believes that Google violated anti-terrorism law by allowing the spread of such videos and by inserting advertisements and sharing the proceeds.

Gonzalez was defeated in the lower courts. The question before the Supreme Court is whether the liability shield extends to the recommendations made by the algorithm. In its latest brief, Google argues that algorithms are the only way to organize the huge amount of information poured onto the web every day: "Sorting and grouping videos is the quintessence of editing." If the Court removes the liability shield, the company argues, there will be no way to preserve "the search recommendations and other basic software tools that organize a flood of websites, videos, comments, messages, product listings, files and other information that would otherwise be impossible to navigate."

THE RISK OF “DYSTOPIA”

According to Google, if the company is held liable, the Internet "would become a dystopia in which providers would face legal pressure to censor any objectionable content." "Some might comply; others might try to avoid liability by turning a blind eye and letting everything be published, however questionable. This Court should not undermine a fundamental element of the modern Internet," the company concludes.

The other case under consideration this week, on Wednesday, Twitter v. Taamneh, is not about the algorithm's recommendations but about the broader question of whether social networks can be sued for alleged complicity in an act of terrorism for hosting user content that expresses general support for the group behind the violence, even if it does not refer to a specific attack.

The lawsuit concerns the terrorist attack on an Istanbul nightclub that killed 39 people during a New Year's celebration in 2017. Although the case bears the name of the social network now owned by Elon Musk, Google and Facebook are parties to it alongside Twitter. In this case, the lower courts ruled against the tech companies, which are the ones that appealed to the Supreme Court.

Several Supreme Court justices, including conservatives Clarence Thomas and Samuel Alito, had already expressed interest in hearing cases about Internet content moderation. Tuesday's oral arguments will give an indication of their positions, although they have until the end of June to rule. The two rulings and the doctrine accompanying them could have a huge impact and pave the way for an avalanche of lawsuits if they open a crack in this traditional protection.

Tech companies have long been under fire from both political parties. Republicans accuse them of progressive censorship. Democrats, led by President Joe Biden, criticize the shield that exempts them from liability when they spread hate speech or misinformation. Last month, Biden published an op-ed in the Wall Street Journal, a conservative-leaning business newspaper, calling on Republicans and Democrats to "unite against the abuses of big tech." And he spelled out his position in the debate over Section 230, which he wants reformed: "We need Big Tech to be held accountable for the content they disseminate and the algorithms they use," he wrote.

Now the liability shield is at stake, one of the two great advantages tech companies have enjoyed. The other is also in play: the power to decide, through their moderation policies, what to publish and what not. Florida and Texas have passed laws preventing platforms from refusing to carry certain political content.

Content is not the only battlefront, however. Big tech companies are under increasing regulatory, tax and antitrust scrutiny, from the Justice Department's lawsuit against Google for abuse of a dominant position, to the dispute over Microsoft's acquisition of Activision, to the lawsuit filed by several states accusing social networks of contributing to the youth mental health crisis.

In principle, Section 230 does not concern intellectual property rights and is not a license to infringe, even though in practice these social networks have built their success on systematic copyright infringement. Millions of photos and videos to which users hold no rights are posted every day with near impunity. In practice, only extreme cases of piracy of high-value content, such as sports broadcasts and first-run movies, are prosecuted.

(Excerpt from the foreign press review by eprcomunicazione)


This is a machine translation from Italian language of a post published on Start Magazine at the URL https://www.startmag.it/innovazione/perche-google-facebook-e-twitter-sono-accusate-di-terrorismo-negli-usa/ on Sun, 26 Feb 2023 06:43:31 +0000.