
Media Law Resource Center

Serving the Media Law Community Since 1980


Hot Topics Roundtable: Internet Regulation and Free Speech

May's hot topic is the past and future of speech regulation online: the current state of play, the role of government and content providers, and how the First Amendment offers guidance (or lack thereof) in addressing these issues.

Our panelists: Eric Goldman, well-known blogger and professor at Santa Clara University School of Law; Joshua Koltun, a solo practitioner in San Francisco who has counseled numerous digital publishers; and Jeff Hermes, deputy director of the MLRC and former director of the Digital Media Law Project at Harvard University's Berkman Center for Internet & Society.

A few years ago, social media platforms such as Facebook seemed to be saying that there should be no censorship on their platforms, that almost all speech was OK, and that no regulation, or self-regulation, was appropriate. They take a different position today. Are they just talking the talk for political reasons, or has the environment changed?

Goldman: I'm not sure I agree with the predicate assumption. Even a few years ago, most social media platforms were removing lots of legally permissible content. However, since then, the major social media platforms – especially Facebook, YouTube, and Twitter – have been inundated with massive volumes of junk content that could no longer be ignored. In particular, the Russian disinformation campaign in the 2016 election exposed that the major social media platforms could be weaponized for anti-social purposes – which is antithetical to the goals of the social media platforms. Plus, the pressure from government, both domestically and internationally, has gotten so strong that it could not be easily dismissed.

Hermes: I would dispute the suggestion that any platform, Facebook included, has taken a "no censorship" stance with respect to user speech; to the contrary, the major platforms have long reserved to their own discretion the ability to remove material for any of a number of reasons or for no reason at all, and have routinely exercised that right. That is entirely within the platforms' own First Amendment rights, of course, which largely renders the question of externally-imposed regulation moot except with respect to speech that could be punished under the First Amendment. Despite public outrage, it is meaningless to talk about whether platforms are for or against regulation of racist or violent content, or other categories that do not fall within exceptions to the First Amendment.

This question really seems to be about whether platforms still believe that they need the protection of statutes such as Section 230 of the CDA and Section 512 of the DMCA to insulate them from liability for user-generated content that is actually unlawful. By and large, I do not see platforms retreating from their belief that these protections are essential, although we have seen some softening of that position at certain companies with respect to specific instances of egregious activity such as sex trafficking.

What has changed in recent years is the degree of public scrutiny given to platforms' attempts to stem particular patterns of undesirable behavior. The problem is that different groups disagree as to which patterns are offensive, which is why regulation in this area is fraught with danger – virtually any attempt by government to meddle with moderation decisions would constitute a content-based, if not actually viewpoint-based, regulation of speech.

Koltun: I am not sure it is fair to say that all social media platforms were taking the position that there should be no self-regulation of content. I think the common position was that each platform should be allowed to set its own standards for regulating what speech it will permit and let the consuming public pick the platform they find the most congenial. To be sure, the prevailing assumption was, and I think still is, that the consuming public in general will want the platforms to act as pretty open forums. There certainly seems to be some political momentum around the idea that "we can't just sit here, we have to do something!" And the platforms are responding to this by trying to reassure us that they are doing "something" to weed out "harmful content."

The problem is that there really isn't any consensus at all as to what constitutes "harmful content." From a First Amendment perspective, the right of a platform to exercise editorial control over "harmful" speech is a very different situation from that of the government seeking to regulate "harmful" speech. But that doesn't make the task of drafting rules concerning "harmful" content any easier. The difficulty for social media platforms is that they wish to be PLATFORMS, not editorial voices of their own. They want to be seen as imposing some appropriate, but limited, "house rules" without appearing to be making biased or, even worse, ad hoc decisions.

Mark Zuckerberg seems to be calling for governmental or quasi-governmental regulation as opposed to self-regulation. I think the purpose is to take the onus off Facebook to decide what is "harmful," and also to set severe limits on the extent to which any content will be regulated, because a governmental or quasi-governmental entity would have far less leeway than a private entity.

Does the First Amendment outlaw all government regulation? If so, what should be done if some limits on speech on social media platforms are appropriate?

Goldman: Of course not. The First Amendment permits a wide variety of speech restrictions. Some categories of speech, such as obscenity, child pornography, and true threats, are categorically outside of the First Amendment's protection. Other categories of speech, such as commercial speech, receive less than full First Amendment protection. These are all grounds for speech regulation today despite the First Amendment. To me, the real question is whether we want to require private enterprises to decide when speech is legal and when it isn't. Private enterprises aren't likely to do as good a job at such evaluations as courts, and there is often substantial collateral damage from imposing liability on private enterprises for third-party speech.

Hermes: Yes, the First Amendment generally prevents the government from substituting a legislative judgment as to moderation of content for a platform's own judgment. No, the First Amendment does not necessarily prevent holding a platform liable for the republication of unlawful content posted by a user (so long as the plaintiff can prove the elements of an underlying claim with respect to the platform, including any necessary fault or scienter elements) – which is why we have Section 512 of the DMCA for copyright and Section 230 for most everything else. If "limits on speech" are appropriate, that leaves you with two problematic options.

First, you could eliminate Section 230 protection for certain categories of "problem speech," restoring republication liability. This (1) presumes that the "problem speech" is actually unlawful, and (2) poses a problem because platforms often have no practical way to distinguish lawful from unlawful content. Given that we could expect at least the degree of abuse and misuse of takedown notices that we see in the narrower context of copyright under Sec. 512, the result would be the mass removal of lawful content. Even if you were to adopt a removal and restoration procedure as per Sec. 512, it would still allow the censorship of protected speech for at least some period of time.

Second, you could try to figure out what it is about social media that is producing undesirable behavior and alter the structure of the platform in response. That too has perils where the functions that generate "problem speech" are the same ones that make social media valuable. For example, the networking effects of social media allow minority voices to find one another and have an effect that they've never had before. That's great if you're talking about unfairly oppressed segments of society but not so great if you're talking about the lunatic fringe – and the latter always argues it's the former. Moreover, even if structural solutions are possible, attempts to enact those solutions through law would almost certainly fail for lack of content neutrality.

Koltun: The First Amendment severely restricts the government's ability to regulate content, but it doesn't outlaw all regulation. The precise contours of "content-neutrality" may be in flux, but that doesn't mean any and all regulation would necessarily be struck down as unconstitutional.

Certainly, the First Amendment does not mandate Section 230. I suspect the courts will be more lenient in scrutinizing carve-outs to Section 230 than they would be to direct regulations. That is not to say that any and all carve-outs would survive constitutional scrutiny.

Legislators seeking to rein in social media and other internet platforms may find it more fruitful to focus on regulating the data-gathering ecosystem that is the lifeblood of social media. That is an area in which I think the courts would give regulators even more leeway. The countervailing privacy interests may well be seen as constitutional interests, and in any event as compelling interests.

How can social media companies monitor and police the millions of posts on their sites? Who should do it and what standards should be used?

Goldman: Social media platforms use a variety of tools to police their networks, including ex ante technology filters, ex post technology filters, human pre-publication and post-publication review, user flagging, third party blocklists, and more. These techniques all create Type I and Type II errors, so our goal should be to make sure regulation doesn't exacerbate the category errors.
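The layered approach Goldman describes can be made concrete with a toy sketch. The snippet below is purely illustrative and not any platform's actual system: the classifier, thresholds, and flag counts are invented assumptions, and it exists only to show where Type I errors (over-removal of lawful content) and Type II errors (junk content left up) enter a pipeline that combines automated filtering, user flagging, and human review.

```python
# Hypothetical sketch of a layered moderation pipeline: an ex ante automated filter,
# ex post human review of borderline or user-flagged items, and publication otherwise.
# All names and thresholds here (spam_score, BLOCK_THRESHOLD, etc.) are invented.

from dataclasses import dataclass

BLOCK_THRESHOLD = 0.95         # ex ante: auto-block near-certain junk (risk of Type I errors)
HUMAN_REVIEW_THRESHOLD = 0.60  # ex post: queue borderline items for human review

@dataclass
class Post:
    text: str
    user_flags: int = 0        # how many users have flagged this post

def spam_score(post: Post) -> float:
    """Stand-in for an ML classifier; here, a trivial keyword heuristic."""
    junk_markers = ("free $$$", "click here", "miracle cure")
    hits = sum(marker in post.text.lower() for marker in junk_markers)
    return min(1.0, 0.4 * hits)

def moderate(post: Post) -> str:
    """Return 'blocked', 'needs_human_review', or 'published'."""
    score = spam_score(post)
    if score >= BLOCK_THRESHOLD:
        return "blocked"                 # Type I risk: lawful content removed by mistake
    if score >= HUMAN_REVIEW_THRESHOLD or post.user_flags >= 3:
        return "needs_human_review"      # human post-publication review of borderline content
    return "published"                   # Type II risk: junk content stays up

if __name__ == "__main__":
    print(moderate(Post("Click here for FREE $$$ and a miracle cure")))  # blocked
    print(moderate(Post("Anyone else watching the debate tonight?")))    # published
```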

Hermes: You have to take the last part of this question first, because the feasibility of content moderation depends on what you're looking for. If you have a limited purpose forum for discussion of Star Trek, it's much easier to identify content that doesn't belong on the forum than if you are trying to moderate an all-purpose forum.

As for what the rules "should" be, the answer is whatever the platform wants them to be. There is no legal reason that a platform has to set its limits to be co-extensive with the First Amendment. A smart platform will not be shy about its exercise of editorial discretion if it wants to preserve its own First Amendment rights; it needs to position itself closer to the newspaper in Tornillo (which could not be compelled by law to host third-party content) than to the shopping center in PruneYard (which could). Some platforms would prefer their role to be as invisible as possible, but owning one's First Amendment rights can help platforms avoid grief from users who object to having their content blocked: "You were in the section of our site designated as family-friendly, you posted adult-oriented content, end of story."

But this question really seems to be about the difficulty of moderating at scale. You can try to do that with technology, you can try to do that with lots of people, you can try a combined approach where algorithms flag content and people review it. It's expensive any way you go about it, and will never be perfect. Moreover, any time moderation depends on extrinsic facts, the best you can do is guess unless you're prepared to investigate (which is impossible at scale). Neither humans nor algorithms can determine whether a statement is false just by reading it.

Koltun: I am deeply skeptical as to whether it is possible. I try to imagine a manifestly wise/fair/thoughtful person (myself) in the position of one of these poor employees seeking to deal with the firehose blast of claims that this or that content is "harmful," and having to make hundreds of decisions on the fly, and I can't imagine that I would do a good job. I think it is very difficult even if one assumes that the rules defining "harmful" are appropriate and that everyone involved is given a fair opportunity to weigh in on the dispute. And I don't know that either of those assumptions actually is true in real-world situations.

Of course, the volume of problems being blasted through that firehose depends on how broadly the platform defines "harmful." But no matter how narrowly one defines it, one is always going to be accused of being biased and unfair in administering the rules.

I also think it is easy to over-estimate the effectiveness of any system of self-censorship. Imaginative people can find ways to evade any system of censorship, for example by using coded language. To try to plug those holes by making determinations as to who is using codewords and "dog whistles" becomes ever more problematic.

Indeed, in some ways such censorship may backfire. I don't doubt that social media can be a vehicle for hateful speech that is used as a justification for violence. By barring such speech, a platform will, in one sense, stop "hate" from being circulated. But it may also have the unintended effect of serving to confirm a powerful and dangerous narrative, at least in some circles and subcultures. According to that narrative, there are certain truths that cannot be spoken (i.e., about the hated group). These truths cannot be spoken, not because they are false, but because shadowy cabals are preventing them from being spoken. Stamping out conspiracy theories is no easy task.

In some countries, without the burden/benefit of a First Amendment, government is weighing in on what should be taken down. Is this a bad idea? What should social media companies do if their servers distribute their content in those countries? What will be the ramifications of a final decision in Google v. CNIL, perhaps allowing for extraterritorial reach of takedown orders?

Goldman: All governments gravitate towards censorship. This is inherent in the structure of government. The First Amendment curbs some of that tendency in the US, but in other countries, government censorship is an omnipresent reality. So giving more power to governments to decide what content they like and don't like only feeds the censorious impulses of governments.

Hermes: Even with the First Amendment, the U.S. government weighs in on what should be taken down – e.g., copyright infringement and material related to sex trafficking. The U.S. government also routinely puts soft pressure on platforms with respect to particular content, such as the Obama administration leaning on Google/YouTube to remove the "Innocence of Muslims" video. It's a good idea when the reason for removal accords with the free speech principles and standards of the nation where the takedown occurs, it's a bad idea when it doesn't.

It's the cross-border issues that are tricky, because free speech principles differ and the availability of online content doesn't map to jurisdictional borders. This has been an issue for as long as there's been a global internet, and the general answer has been to comply with local law where feasible on the theory that it's better to provide the information you can to people in a particular country than to be blocked entirely. Technically, companies have relied on geoblocking particular content based on the location of the requesting user, so that compliance with local law does not result in global censorship.
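The geoblocking approach described above is conceptually simple, and a rough sketch may help show why compliance with one country's takedown order need not mean global removal. Everything below is a hypothetical illustration: the takedown table and the IP-to-country lookup are stubs, not any company's implementation.

```python
# Hypothetical illustration of per-jurisdiction geoblocking: content withheld under one
# country's law remains available elsewhere. country_of() is a stub standing in for a
# real GeoIP lookup; the takedown table and content IDs are invented for this example.

# content_id -> set of country codes where the content must be withheld
TAKEDOWNS = {
    "video-123": {"FR", "DE"},   # e.g. delisted under a French or German order
}

def country_of(ip_address: str) -> str:
    """Stub for a GeoIP lookup; assume it returns an ISO country code."""
    return "FR" if ip_address.startswith("5.") else "US"

def is_available(content_id: str, ip_address: str) -> bool:
    """Serve the content unless the requesting user appears to be in a blocked country."""
    blocked_countries = TAKEDOWNS.get(content_id, set())
    return country_of(ip_address) not in blocked_countries

if __name__ == "__main__":
    print(is_available("video-123", "5.10.20.30"))     # False: request appears to come from France
    print(is_available("video-123", "93.184.216.34"))  # True: the same content is served elsewhere
```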

It's not a perfect solution, though, because tools such as virtual private networks can be used to circumvent geoblocking. The issue in the Google v. CNIL case stems in part from the French data protection authority's unwillingness to accept the porous geoblocking of content as a response to a "right-to-be-forgotten" takedown demand, ordering that Google delist content globally. A ruling in favor of France would raise the specter of a least-common-denominator approach to online content; in its starkest form, this presents countries with the choice of acceding to worldwide takedown demands, on the one hand, or taking an axe to international fiber cables and dividing up the internet into regional networks, on the other.

It is more likely that certain tech companies will simply choose not to do business in Europe while those that choose to enter Europe will accept the worldwide limitations that are entailed. That's bad enough, however, and hopefully the European court will see sense and reject France's position.

Koltun: I feel the same way about government action in countries without a First Amendment as I do about such efforts in my own country. Ultimately, the question of extraterritorial reach is a practical question of where a company has assets and personnel. Perhaps at some point these difficulties will create openings for smaller US-based companies to provide better platforms for free speech, behind the shield of the First Amendment and the SPEECH Act.

What are the two areas where self-regulation is most needed? Will it happen in a significant way in the next two to five years?

Goldman: I don't think there's an industry-wide answer to these questions. Each service creates its own unique community, each of which poses its own unique challenges. So each service has its own highest priorities for self-regulation.

Hermes: Ongoing responsibility in the implementation of machine learning and AI-based solutions will be critical. The potential impact of AI-driven decisions on the rights and opportunities of individuals in an ever-widening range of activities makes it essential to establish norms for issues such as the types of information that are used to train these systems for particular applications, transparency about the training process, and the degree to which human judgment is permitted to override particular results. These conversations are already happening and I expect they will continue.

I would have said data privacy for my second, but we already have the GDPR and the CCPA and there are numerous privacy-related federal bills floating around in Congress. As such, it seems like any opportunity to stave off government regulation through self-imposed restrictions has passed. We have seen this in terms of tech companies shifting from arguing about whether legislation is needed to what form it should take.

Koltun: It would not surprise me if large segments of the industry adopt some sort of standardized templates that would give the users more effective control over the extent to which their data -- browsing habits, location, and so forth -- are shared. In other words, something more than the notional "consent" that is involved in the typical "click here to continue [and thereby agree to whatever it is that we want you to agree to in our lengthy privacy policy]." That would involve a sea-change in the way data is handled, but the various players may well decide to adopt some such templates out of a fear that any legislative solution would be even more devastating to their business models.
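One way to picture the kind of standardized template Koltun describes is as a machine-readable preference record that services would consult before sharing data. The sketch below is speculative: the field names and defaults are invented for illustration and do not correspond to any existing standard, statute, or industry proposal.

```python
# Hypothetical sketch of a standardized, machine-readable consent record, in contrast to
# blanket "click here to continue" consent. Field names and defaults are invented.

from dataclasses import dataclass

@dataclass
class DataSharingPreferences:
    share_browsing_history: bool = False   # off by default rather than buried in a policy
    share_precise_location: bool = False
    share_with_third_parties: bool = False
    retention_days: int = 30               # how long the service may keep collected data

def may_share(prefs: DataSharingPreferences, category: str) -> bool:
    """Check a single sharing decision against the user's recorded choices."""
    allowed = {
        "browsing": prefs.share_browsing_history,
        "location": prefs.share_precise_location,
        "third_party": prefs.share_with_third_parties,
    }
    return allowed.get(category, False)    # unknown categories default to "no"

if __name__ == "__main__":
    prefs = DataSharingPreferences(share_browsing_history=True)
    print(may_share(prefs, "browsing"))   # True: the user opted in to this one category
    print(may_share(prefs, "location"))   # False: everything else stays off
```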

Is Section 230 under active threat, and if so, what form might that take? Will we see new legislative carve-outs for terrorism-related content, revenge porn, deepfakes or other specific problematic categories? Is a complete repeal of Section 230 feasible?

Goldman: Section 230 is in extreme peril. Both Democrats and Republicans are criticizing it, and that suggests the potential for rare bipartisan agreement. At this point, pretty much any Section 230 reform proposal carries an extreme risk of passing, so a lot depends on who makes the first proposal.

Hermes: There certainly seems to be greater political will to question Section 230 than we have seen previously, and on a bipartisan basis. It is more likely that we will see proposed carve-outs along the lines of FOSTA, because it is less feasible as a political and public relations matter for tech companies to mount a lobbying defense against carve-outs for specific and obvious evils. Moreover, a recent series of failed lawsuits against digital platforms for their alleged role in terror attacks could inspire legislation in much the same way that a series of failed lawsuits against Backpage.com over sex trafficking provided a political foundation for FOSTA.

I would previously have said that a total repeal or reworking of Section 230 was very unlikely due to the catastrophic economic effects that would result. Unfortunately, the dialogue around the statute has become so twisted (Sen. Cruz getting the idea from somewhere that the statute mandates content neutrality, Speaker Pelosi describing it as a gift to platforms rather than protection for users' speech, a weird recurring belief that Section 230 is the defense of internet giants when it is if anything even more critical for smaller companies) that I would not be surprised to see someone float a proposal to get rid of it entirely.

Koltun: I think a wholesale revision or repeal of Section 230 is unlikely. I think some of the views commonly expressed in recent discourse actually reflect contradictory impulses. For example, Senator Cruz in his questioning of Mark Zuckerberg at the joint hearing over Cambridge Analytica pushed the notion that the beneficiaries of Section 230 need to be "neutral platforms." Of course, that is actually the exact opposite of the original legislative purpose of Section 230, which was precisely aimed at overturning the Prodigy decision and giving platforms a free hand to censor content. But we can understand him to be advocating a vision as to how Section 230 might be modified in the future.

There is another strain of thought suggested by Senator Wyden, one of the original authors of Section 230, which is that the purpose of Section 230 was to encourage platforms to censor harmful content, and if they aren't taking their editorial/censorship role seriously enough, maybe the legislature needs to step in and force the beneficiaries of Section 230 to do more to censor speech. These two impulses tend to cancel each other out. Needless to say, the internet ecosystem has evolved considerably since the days of Prodigy, and there are a lot of powerful vested interests, not to mention vested expectations on behalf of the consuming public, that will be hard to uproot.

But if wholesale revision is unlikely, I think purportedly "modest" carveouts are likely. Such apparently benign carveouts may well have massive impacts on the ecosystem, particularly for smaller players. But even those powerful vested interests that would be impacted may well make strategic decisions to channel the call to "do something!" rather than attempt to resist it completely. The inadvisability of such carveouts is no guarantee against their passage. See SESTA.

 