Hana Callaghan is the Director of the Government Ethics Program at the Markkula Center for Applied Ethics and the author of "Campaign Ethics, A Field Guide." Views are her own.
Recently, Facebook came under fire for not removing a deceptive video that had been altered to make Speaker of the House Nancy Pelosi appear to be drunk. While I agree that the video was disgusting, disrespectful, and misleading, I also agree with Facebook's decision not to remove it from its platform.
The video was posted by a U.S. citizen who slowed down actual footage of Pelosi to make her words appear to be slurred. Presumably he did this because he disapproves of the speaker’s politics. Note that this wasn’t the case of a foreign government illegally trying to meddle in our elections with misinformation. It was also not the case of content that posed a clear and present danger of imminent harm to others. Nor was this the case of original content posted by Facebook. This was the case of a private individual using Facebook to engage in political speech about an elected official.
Political speech is the most protected type of speech under the First Amendment of our Constitution. I acknowledge that the Constitution does not regulate the conduct of a private corporation such as Facebook. However, the values underlying the First Amendment can inform the company’s ethical decision making as to whether the clip should be removed.
In the historic case New York Times Co. v. Sullivan, 376 U.S. 254 (1964), Justice Brennan noted that we as a nation have a "profound national commitment to the principle that debate on public issues should be uninhibited, robust, and wide-open, and that it may well include vehement, caustic, and sometimes unpleasantly sharp attacks on government and public officials." Sullivan at 270. Quoting Judge Learned Hand, Brennan wrote that "right conclusions are more likely to be gathered out of a multitude of tongues, than through any kind of authoritative selection." Id. Citing Cantwell v. Connecticut, 310 U.S. 296 (1940), Brennan also pointed out that erroneous statements are inevitable in free debate:
In the realm of religious faith and in that of political belief, sharp differences arise. In both fields, the tenets of one man may seem the rankest error to his neighbor. To persuade others to his own point of view, the pleader, as we know, at times resorts to exaggeration, to vilification of men who have been, or are, prominent in church or state, and even to false statement. But the people of this nation have ordained, in the light of history, that in spite of the probability of excesses and abuses, these liberties are, in the long view, essential to enlightened opinion and right conduct on the part of the citizens of a democracy. Sullivan at 271.
In his concurring opinion, Justice Black opined:
Representative Democracy ceases to exist the moment that the public functionaries are by any means absolved from their responsibility to their constituents, and this happens whenever the constituent can be restrained in any manner from speaking, writing, or publishing his opinions upon any public measure, or upon the conduct of those who may advise or execute it. Sullivan at 297.
There are those who argue that Justices Brennan and Black were not rendering opinions in the age of social media and the 24-hour news cycle in which we now find ourselves. They may argue that the volume of disinformation is now so overwhelming that allowing false political speech to live on platforms such as Facebook poses a greater threat to our democracy than restricting the free flow of ideas would. The problem is this: do we want corporations to be the arbiters of what is truthful and what is false in the context of political speech? If we don't allow the government to do it under the Constitution, should we allow a corporate entity that is not accountable to us to assume that responsibility?
Facebook clearly does not want to take on that role. The company responded to the problem by engaging third-party fact-checkers to determine whether the video had been doctored. When the video was found to be deceptive, Facebook demoted the circulation of the content and posted a warning that third-party fact-checkers had found it misleading, along with a link to additional information.
I'm not convinced that demoting the content is the answer, because what is demotion if not the partial censorship of ideas? I believe the answer lies in educating the public about the inherent risk of believing anything they see or hear on social media. Caveat emptor: buyer beware! Perhaps THAT warning needs to be in red capital letters on every Facebook page.
With regard to the types of disinformation problems posed by the Pelosi video, Robert Chesney, Danielle Citron, and Quinta Jurecic suggest on Lawfareblog.com a process in which, if content is flagged as deceptive, "[A]nyone who clicks on the video might first be presented with a click-through screen directly stating that the platform itself has determined that the video has been meaningfully altered, and asking the user to acknowledge reading that statement before the video can be watched."
There was a lot of healthy discussion in the wake of the doctored Pelosi video. Instead of censoring deceptive political speech, let's counter it, as was done here, with a deluge of content proclaiming the truth. As for Speaker Pelosi, she could, if she wanted to, defend her reputation in the courts by suing the original poster of the video for defamation. As a public figure, to succeed in a defamation lawsuit she must prove actual malice: that the speaker knew the video was false and posted it anyway, or acted with reckless disregard for the truth. Inasmuch as the poster in this case created the deceptive video in the first place, malice should not be hard to prove. I think it highly unlikely, however, that the Speaker will pursue this legal avenue. You can't rise to her level of power without developing a very thick skin.