It’s Wednesday, October 21st.
The Senate Judiciary Committee will meet tomorrow and is very likely to recommend that the full Senate consider the confirmation of Amy Coney Barrett to the Supreme Court. That could set up a final vote early next week, which is very likely to pass, with only two Republican Senators (Lisa Murkowski and Susan Collins) opposing filling the seat prior to the election.
I want to express a huge thank you to everyone for participating in this pre-launch test. As part of this test, I’m experimenting with different story formats both today and Friday. Keep this in mind for the feedback form coming on Friday afternoon!
The Full Picture: A crumbling firewall
Giant social media platforms that have dominated the internet over the last two decades have done little to garner goodwill from just about anyone. Users are worried about data privacy, given mechanisms such as hyper-targeted ads that use dozens of data points to give advertisers the greatest impact per dollar spent. Startups and other innovators fear seemingly anti-competitive behavior such as a lack of interoperability and questionable acquisitions.
However, your aunt who is always posting and the latest software prodigy don’t have seats in Congress. Instead, we are blessed with legislators who are still trying to wrap their heads around how Facebook made $70 billion without charging users a dime.
As bad as that is, there have been some glimmers of progress, such as the July hearing conducted by the House Judiciary Committee in which Amazon, Apple, Facebook, and Google had to answer for anti-competitive practices. The Department of Justice also filed a significant antitrust lawsuit against Google on Tuesday.
Yet antitrust and data privacy are only two of the three big concerns on the minds of legislators and an increasingly vocal segment of the public. Censorship, or content moderation, is practiced by nearly all mainstream social media platforms, and it is now under greater scrutiny amid a contentious election season, a rise in circulating conspiracy theories, and a pandemic that has made the need for reliable information all the more important.
To truly understand how platforms monitor content, the legal bounds they operate within, and the ways this policy debate could develop, we will go back as far as 1959. This is a story that involves banned books, the birth of a new industry, and the Wolf of Wall Street (yes, THAT Wolf of Wall Street).
Part 1: Don’t burn the books
In 1956, Eleazer Smith, the owner of a small Los Angeles bookstore, was convicted and jailed for selling the novel Sweeter Than Life (unfortunately sold out on Amazon at this time), a title that featured a lesbian businesswoman as its main character. He was tried under a local law that made it illegal for a store owner to carry any book with “obscene or indecent” writing. He appealed the conviction on First Amendment grounds, and in the 1959 case Smith v. California, the Supreme Court unanimously ruled that the law was unconstitutional.
In that decision, the Court was careful to note that the law was particularly problematic because it didn’t require any showing of knowledge or intent on the part of the store owner. Justice Brennan, writing for the Court, invoked the words of an earlier New Zealand obscenity case (The King v. Ewart) and declared that if such a law were constitutional, “Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.” Citing those words in this context, the Court unknowingly laid the groundwork for how it, and later the legislature, would come to view content moderation on the internet, approximately 45 years before Mark Zuckerberg had an idea at Harvard.
Part 2: The wolf gets angry
If Smith v. California was the legal firewood on this subject, then two more cases in the 1990s were the spark. In 1991, Cubby, Inc. (publisher of a newsletter) sued CompuServe (think early Reddit) for defamation, claiming CompuServe should be held responsible for defamatory statements about its newsletter that were posted to one of its forums, even though CompuServe didn’t write or publish the content itself.
In its decision in favor of CompuServe, the Court labeled the company a distributor of content rather than a publisher, because CompuServe didn’t moderate the material the way a publisher (think the New York Times or Fox News) typically would.
Just four years later, Stratton Oakmont (yes, that Stratton Oakmont) and its president, Daniel Porush, sued Prodigy Services because an anonymous user posted an accusation of fraud on one of Prodigy’s message boards. In its decision in favor of Stratton, the Court labeled Prodigy a publisher of content because, unlike CompuServe, Prodigy did engage in occasional censorship of offensive or obscene material. By moderating some posts, the Court ruled, Prodigy assumed liability for all of its content. This was a big win for the wolf, but a seemingly massive loss for the future of the internet.
Part 3: Regulating porn….and free speech
In the same year as the Prodigy decision, Senator James Exon of Nebraska introduced the Communications Decency Act. At its core, the bill used vague language to make it illegal to show or send minors obscene content (read: porn). With little true understanding of the internet, and with the threat of appearing “anti child safety” top of mind for legislators, the bill was tacked onto the Telecommunications Act of 1996 and passed.
However, before the language of the legislation was finalized, two lawmakers who worried about how the law could restrain free speech and innovation on the internet introduced an amendment. Chris Cox (R-CA) and Ron Wyden (D-OR) wrote section 230 to ensure that the internet would remain a place for growth and innovation.
The “26 words that created the internet,” as author Jeff Kosseff calls them in his book about section 230, read:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Although it’s hard to put it any more plainly than section 230 itself, this provision means that everyone from CompuServe and Prodigy to Twitter and Facebook is treated as a distributor of content. They are therefore immune from liability for what you, your aunt, and hundreds of millions of other users might say on their platforms.
And thus, a story 61 years in the making reaches us today.
Part 4: Two wrongs make an angry Congress
There isn’t a single incident one can point to as the moment Republican and Democratic lawmakers started to gripe about content moderation. Over the last five years, Republicans have typically cited banned or shadow-banned content or accounts (shadow banning makes content harder for others to find), warnings placed on content, and rejected political ads. Democrats have often cited hate speech, conspiracy theories, and misinformation that isn’t sufficiently scrubbed from the platforms.
All of these issues have come to a head, with President Trump tweeting about repealing section 230 and Senator Josh Hawley introducing legislation to potentially achieve that goal. His 2019 bill proposed stripping section 230 protection until platforms submitted their content algorithms for review to determine whether they were “politically neutral.” His more recent attempt, the Behavioral Advertising Decisions Are Downgrading Services (BAD ADS) Act (acronyms aren’t ALWAYS better), seeks to remove the protection from platforms that display manipulative behavioral ads or provide data to be used for them, a description that could easily be read to cover all major platforms.
Part 5: The clarifications are coming
Federal Communications Commission (FCC) Chairman Ajit Pai announced last week that the FCC would issue clarifications to section 230. This decision comes on the heels of a May executive order in which the President encouraged narrowing the scope of section 230 protections, diverting government advertising dollars away from platforms that engage in viewpoint discrimination, and pursuing additional antitrust inquiries.
If these clarifications do indeed limit the scope of section 230, the consequences will likely reshape the way in which social media platforms operate. It’s possible they will follow the old model of CompuServe and engage in practically no moderation. It’s also possible that they will more fully embrace the role of “the publisher” and heavily vet content before it is published (so as not to expose themselves to legal liability).
It’s also important to keep in mind that the courts, the legislature, and the next President will each have a significant impact on the section 230 debate and the larger free-speech context in which it exists. Ultimately, there is a great deal of complex history behind a 26-word sentence that has shaped the 21st century’s public square, and there is very likely more to write before the book is closed.
Wrapping up
Read about how a federal judge dismissed the ACLU’s lawsuit regarding Secretary of Education Betsy DeVos’s Title IX rule, which dictates how schools respond to sexual misconduct.
The 2020 Presidential election is now 13 days away. Check information about your registration, early voting, mail-in voting, or election day voting here.
See you on Friday for the final edition of this week’s pre-launch test!