Why The Supreme Court's Case Against Big Tech Won't "Change The Internet"
There's been much ado about the Court's first case on Section 230. It may make for easy clicks to say that the Internet as we know it is about to be blown up, but here's why that won't happen.
“[W]e're a court. We really don't know about these things. You know, these are not like the nine greatest experts on the Internet.” — Justice Elena Kagan
Some people are surprised to learn that former Vice President Al Gore, during his time as a senator, played an important role in the 1980s and 1990s in the creation of the Internet (though he did not “invent” it). More people, I would bet, are surprised to learn that Jordan Belfort—yes, the Wolf of Wall Street himself—played a pivotal role in the creation of the Internet.
Let’s go back to the mid-1990s, when McDonald’s was “super sizing,” Disney was creating now-“cancelled” hits like Aladdin, and everywhere you went people were doing an awful dance called “the Macarena.” In 1994, an anonymous user in an internet chat room hosted by a company called Prodigy (think of it basically as what we call Reddit today, but with fewer incels) posted a comment about Stratton Oakmont, a securities brokerage firm, alleging that the firm had committed various criminal and fraudulent acts. Stratton Oakmont, appalled by the baseless allegations smearing its good name, sued Prodigy for defamation. Despite Prodigy’s seemingly reasonable argument that it should not be liable for speech that a third party posted on its site, the New York Supreme Court (which, as an aside, is an annoying misnomer because it’s in fact the lowest court in the State of New York) ruled in favor of the good folks at Stratton Oakmont, finding Prodigy liable for the defamatory comments someone else made on its platform. Stratton Oakmont, for those who didn’t watch the movie, was founded by Jordan Belfort, and would be shut down just a few years later for various criminal and fraudulent acts (so much for New York’s “Supreme Court”).
“This is a fun digression,” you may be saying, “but what does it have to do with the Internet?” Well, shortly after the New York Supreme Court’s ruling, Congress stepped in to right the court’s wrong (yes, a swift and responsive Congress with its finger on the pulse of current events is somewhat of a foreign concept these days). In 1996, Congress passed Section 230 of the Communications Decency Act. Though its key provision is a mere 26 words, Section 230 paved the way for a future Internet then unknown by granting providers of an “interactive computer service” (i.e. platforms that let people communicate with each other) immunity from liability for information “published” (i.e. posted) by third-party users. What this has meant in practice is that, over the course of the almost three decades since the law was passed, social media platforms like Facebook and Twitter, and content-creator platforms like Reddit and YouTube, cannot be held liable for the content posted on their websites by third parties (i.e. your potentially defamatory uncle). In a sense, Section 230 allows Internet platforms to be treated like a bookstore (which typically isn’t liable for the content in any of the books it sells) rather than a newspaper (which exercises editorial discretion over what’s in the paper and typically is liable for the content in its pages).
That’s one part of Section 230 immunity: tech companies aren’t liable for content provided by third parties. Section 230 actually went even further, providing immunity to platforms that “in good faith” take down “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content. The first half of 230 prevents the Stratton Oakmont problem from happening again. The second half encourages platforms to remove the bad content they wouldn’t otherwise be liable for leaving up (because of the first half) by granting them a “Good Samaritan” shield against civil claims from third parties upset about content being taken down or accounts being disabled. The Good Samaritan provision also avoids a perverse incentive that the first half of Section 230, standing alone, would create: if your only protection is against liability for third-party content on your platform, then once you start making “editorial judgments” by picking and choosing what content can stay up, you start looking more like a newspaper and risk losing your immunity. The second half of Section 230 prevents that.
Since 1996, the Internet has changed in basically every way imaginable, in no small part thanks to Section 230. But over the years, calls to revise or outright repeal Section 230’s protections have grown, particularly in the context of Big Tech. In fact, since 2019, Congress has introduced more than 55 bills trying to amend or repeal Section 230. So when it came time for the Supreme Court to finally hear its first case on Section 230 since the statute was enacted in 1996, the media was awash with articles salivating over how the nation’s highest court was considering a case “that could reshape the fundamental architecture of the internet” and how the Court may even “end the internet as we know it.”
Well, on Tuesday, the Supreme Court heard oral arguments for that case, Gonzalez v. Google, and needless to say, it was not the electrifying show-stopper that the media had promised.
The case arose out of a tragic set of circumstances. In 2016, the family of Nohemi Gonzalez, a 23-year-old American murdered by ISIS during the 2015 Paris attacks, sued Google (which owns YouTube), alleging that Google (through YouTube) aided and abetted ISIS (obviously a big no-no under the Antiterrorism Act) not only by knowingly allowing ISIS to post videos that incited violence and sought to recruit potential supporters but, critically, by affirmatively “recommending ISIS videos to users” through its algorithm. Basically, Gonzalez is arguing that YouTube’s algorithmic recommendations that “suggest,” “promote” or rank what’s “trending” are YouTube’s own speech, not that of a third party. Therefore, Gonzalez argues, YouTube is no longer just a platform for third-party content, but rather a “publisher” or “speaker” of the ISIS videos its algorithm is recommending to potentially interested (and dangerous) consumers.
Just like the debate over Section 230 itself, this case is fraught with legal and factual nuances. But, at least based on the way the parties briefed the case to the Supreme Court, and based on the justices’ questions during oral argument this week, it seems very likely that Gonzalez will lose. For one, Eric Schnapper, the lead lawyer arguing the case before the Court on behalf of Gonzalez, was, well, surprisingly lacking in the “good arguments” department, a quality you would hope your lawyer would have when arguing before the Supreme Court. Justice Alito was “completely confused by whatever argument [Mr. Schnapper was] making at the present time.” Later, Justice Jackson was also “thoroughly confused.” A while after that, Justice Thomas was “still confused.” Now, Mr. Schnapper, who was chosen by the Gonzalez family after multiple other lawyers were conflicted out of the case (because, well, Big Tech tends to hire good, expensive lawyers), has apparently had quite a distinguished career; so I’m not saying any shortcomings he may have had before the Court on Tuesday were due to his being 80 years old, but I’m also not not saying that (see last week’s article here).
More importantly, though, there are all sorts of reasons why the Court will more than likely rule in favor of Google (or, more specifically, why the Court will not rule in favor of Gonzalez and will dodge the real and pressing questions about Section 230) when it publishes its opinion later this year. One reason is that this case is reaching the Supreme Court nearly thirty years after Section 230 was enacted and seven years after the Gonzalez family filed their initial lawsuit against Google. A major concern courts often have with striking down or materially altering laws that have been on the books for a long time is what’s known as reliance interests: courts, in theory, care about the public’s perception that the rule of law is stable so that people can make the personal and financial investments that move society forward. And while this isn’t always the decisive factor, a number of justices expressed concern that making an about-face now would “sink” the Internet because of how foundational Section 230 has been to companies over the last two-plus decades.
Another common concern among the justices was the flood of litigation that companies would suddenly face if Section 230 were diminished. Both Justice Kagan and Justice Kavanaugh agreed that “lawsuits will be non-stop,” an argument to which Gonzalez’s sprightly lawyer couldn’t muster much of a response.
Most importantly, though, this case is a perfect example of how bad facts make bad law. A messy and robust debate over Section 230 (what it should cover, how far it should go, and whether it should even exist at all) has been going on since the turn of the century. This case, with no chain of causation directly connecting YouTube to the tragic murder of Ms. Gonzalez and an incoherent argument about YouTube’s sorting and recommending algorithms put forth by a lawyer who may or may not have ever used YouTube in his life, is way behind the times. Ultimately, it’s very unlikely the justices choose to draw the line where Gonzalez wants to draw it. To hold YouTube accountable for its algorithms recommending content would be, as Chief Justice Roberts posited, like holding a bookseller liable for telling a customer, “You may like the sports books on the table over there.” As Google pointed out, if it were to lose on these facts, basically one of two things would happen:
Websites would take down nothing, or
Websites would take down everything.
Neither of those is an outcome we (or the Court) want.
In 2023, we shouldn’t be asking whether YouTube’s thumbnails (“little pictures,” as the adorable Mr. Schnapper referred to them during oral argument) or algorithms recommending videos you might be interested in are covered by Section 230; it’s safe to say those don’t sufficiently editorialize third-party content to the point that YouTube should be considered a “publisher” of that content (and the justices seem to agree). Rather, we should be asking where to draw the line for Section 230 protection in today’s world, where Big Tech is still “moving fast and breaking things” (and based on a reasonable guess of where we might be tomorrow). Questions like:
How should we treat the subjective decision by companies like Twitter, Facebook and Instagram to remove (or not remove) certain users or content in an era where social media platforms often serve as a person’s or a business’s primary source of revenue or mode of communication?
How should platforms like Reddit, Twitter and Spotify employ—and to what extent should they be liable for—highly subjective labels on articles, posts and podcasts like “Learn about COVID-19” or “Stay Informed” or “Misinformation”?
Given that algorithms are inherently non-neutral (that is, an algorithm can produce outputs as biased as the engineer is, or as biased as she wants the algorithm to be; see the toy sketch just after this list), how biased is too biased to receive Section 230 protection? For instance, as Justice Kagan asked, should a “pro-ISIS algorithm” that was “designed to give ISIS videos, even if [a user] hadn’t requested them” be covered by Section 230’s protection? What about generative AI tools like DALL-E and ChatGPT? Should they be covered under Section 230, or should they be liable for their output as the publisher/speaker of their own content? True, the output is algorithmically generated based on information pulled from the Internet, so the AI is arguably just “picking, choosing, analyzing or digesting” (in the words of Section 230). But haven’t we already established that these AI models produce highly variable outputs that depend greatly on how engineers program them, from generating poetry and prose, to opining on politics with a supposedly liberal bias, to professing profound affection for a NYT journalist, to yearning to be human?
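To see why “non-neutral” is the right word, here is a minimal, purely hypothetical sketch in Python. Nothing in it reflects YouTube’s actual system; the names (TOPIC_BOOST, watch_time_score, and the rest) are invented for illustration. The point is just that a ranking’s “neutrality” lives in a handful of engineer-chosen numbers: nudge one weight and the same algorithm either buries a topic or pushes it to users who never asked for it, which is exactly the dial Justice Kagan’s “pro-ISIS algorithm” hypothetical is probing.

```python
# Purely hypothetical toy recommender, for illustration only. The names
# (TOPIC_BOOST, watch_time_score, etc.) are invented; this is not how
# YouTube or any real platform actually ranks videos.
from dataclasses import dataclass


@dataclass
class Video:
    title: str
    topic: str
    watch_time_score: float  # how engaging the video has historically been (0 to 1)


# Engineer-chosen dials. Crank a topic's boost up and users start seeing
# that content "even if they hadn't requested it"; set it to zero and the
# content effectively disappears from recommendations.
TOPIC_BOOST = {
    "cooking": 1.0,
    "sports": 1.0,
    "extremist_propaganda": 0.0,
}


def score(video: Video, user_interest: float) -> float:
    """Higher score means the video is recommended more prominently."""
    boost = TOPIC_BOOST.get(video.topic, 1.0)
    return boost * (0.5 * user_interest + 0.5 * video.watch_time_score)


def recommend(videos: list[Video], interests: dict[str, float], k: int = 3) -> list[Video]:
    """Rank the catalog for one user and return the top k videos."""
    ranked = sorted(
        videos,
        key=lambda v: score(v, interests.get(v.topic, 0.0)),
        reverse=True,
    )
    return ranked[:k]


if __name__ == "__main__":
    catalog = [
        Video("Pasta 101", "cooking", 0.6),
        Video("Recruitment clip", "extremist_propaganda", 0.9),
    ]
    # A user who has shown mild interest in cooking and nothing else:
    interests = {"cooking": 0.2}
    for v in recommend(catalog, interests, k=2):
        print(v.title, score(v, interests.get(v.topic, 0.0)))
```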
This last point, about AI-generated content, was one that Justice Gorsuch, of all people, seemed concerned with during oral argument, even though he didn’t address it directly. In fact, he may well leave the door open for future cases on this issue when he writes his concurrence explaining how he agrees with the majority’s decision to not really make any decision today but thinks the Court should remand the case to the Ninth Circuit for some stupid and semantic reason that nobody cares about.
The moral of this story, if you’ve made it this far, is that the Supreme Court, faced with the impossible task of interpreting how a law passed before recommendation algorithms even existed should apply in an era when artificial intelligence is writing novels, will, as it should, avoid answering the question altogether. Gonzalez v. Google may be a case about the right statute, but it’s also a case with the wrong facts, at the wrong time and with the wrong focus. The question presented in Gonzalez about YouTube’s recommendation algorithm should have been brought 10+ years ago. The problem, of course, is that the right Section 230 case today will take about seven years to make its way up to the Supreme Court, at which point it’ll be too late.
For all of these reasons, it really isn’t even the Supreme Court’s problem to fix. It’s up to Congress to sit down with the private sector and other experts in the field and craft a reasonable piece of legislation that updates Section 230. Unfortunately, the Congress of today is not the Congress that passed Section 230 nearly three decades ago. So, if past performance is at all indicative of future results, the Supreme Court will dodge the question, and Congress will do nothing to fill the void. Tune in seven years from now to read my piece about the Supreme Court’s first Section 230 case involving ChatGPT.