Deepfakes and the Law: Claims, Defenses and Strategies

By Jim Rosenfeld, Brendan Charney, Adam Rich and Alison Schary[1]

Since the early days of photography and video, content creators have manipulated images and other "factual" content to create false narratives.[2] To cite fairly recent examples, photoshopped images disseminated through social media have driven controversies like the outrage that followed an altered image purporting to show Parkland, Florida, activist Emma Gonzalez ripping up the Constitution (she actually ripped a bullseye target)[3]; or a photoshopped image purporting to show an adoring Marco Rubio shaking hands with President Obama (a readily discernible mashup of the politicians' faces mounted on a stock image of a handshake)[4].

These manipulations may soon seem quaint, as a new form of artificial-intelligence technology, called "deepfakes," permits the creation of shockingly realistic fake videos, giving us even more reason not to trust our lying eyes.[5]

How will the law treat deepfakes, and protect the media from liability for legitimate uses? How should journalists and media lawyers approach this technology? This article discusses what deepfakes are (including how they have been used, and how they are detected) (Part I); delves into the main legal claims (against both creators and platforms) and defenses that may be asserted under current law (Part II); and concludes with thoughts on how journalists and media lawyers should handle the challenge of these convincing fakes (Part III).

I. WHAT IS A "DEEPFAKE"?

In December 2017, a Reddit user called "deepfakes" created convincing fake pornographic videos of celebrities by using artificial intelligence to map their faces onto actors in pornographic video clips. The term "deepfake" is an amalgam of "fake" and "deep learning," which is a type of artificial intelligence that mimics the learning process of the human brain to generate insights.[6] The word deepfakes is now widely used to refer generally to videos or audio recordings[7] that have been falsified using artificial intelligence.

A. Creation of Deepfakes

Deep learning occurs through a "neural network" of thousands or even millions of interconnected computer processing "nodes" that are modeled loosely on the human brain. The neural network is "trained" with input data that has been labeled by human programmers. To create an object recognition system, for example, programmers feed the network thousands of labeled images of text characters, cars, street signs, or other objects or symbols, so that the system learns to detect visual patterns in the images. (In fact, you may have participated in this labeling process the last time you identified street signs, storefronts, or cars in a CAPTCHA while trying to log into a website.[8])
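To make the training process concrete, the following is a minimal sketch in Python using the PyTorch machine-learning library. It is illustrative only: the tiny network, the random stand-in "images," and the hyperparameters are our own assumptions for demonstration, not a description of any actual deepfake system.

```python
# A minimal, illustrative sketch of supervised training: a small network
# learns to classify labeled images, analogous to the human-labeled
# training described above. All shapes and settings are arbitrary.
import torch
import torch.nn as nn

# Toy "dataset": 64 random 32x32 grayscale images with labels 0-9.
images = torch.randn(64, 1, 32, 32)
labels = torch.randint(0, 10, (64,))

# A small network of interconnected processing "nodes."
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 10),  # one output per label class
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Training": repeatedly compare the network's predictions against the
# human-supplied labels and nudge its weights to reduce the error.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```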

To make sophisticated deepfakes, creators use a "generative adversarial network" (GAN): a neural network that generates fake images is paired with a second, "discriminator" network that assesses how realistic each fake image is.[9] The iterative, adversarial process between the two neural networks produces convincing deepfakes with far fewer resources than a process staffed by human programmers and reviewers would require.
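A minimal sketch of this adversarial loop, again in PyTorch and purely illustrative (the network sizes and the random stand-in data are our own assumptions; real deepfake models are far larger), looks like this:

```python
# Illustrative GAN training loop: a generator fabricates "images" while
# a discriminator scores how realistic they look, and each improves
# against the other.
import torch
import torch.nn as nn

real_images = torch.randn(64, 784)  # stand-in for a batch of real images

# Generator: turns random noise into a fake "image."
generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784))
# Discriminator: outputs a realism score for each image.
discriminator = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()
real_label, fake_label = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(200):
    fakes = generator(torch.randn(64, 100))

    # Train the discriminator to tell real images from generated ones.
    d_loss = (loss_fn(discriminator(real_images), real_label)
              + loss_fn(discriminator(fakes.detach()), fake_label))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to produce images the discriminator calls real.
    g_loss = loss_fn(discriminator(fakes), real_label)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```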

B. How Are Deepfakes Used?

While deepfake technology has been used most notoriously to create fake pornography featuring celebrity faces, it has a variety of applications, both disturbing and positive.

Many commentators have raised the alarm about the potential of this technology to undermine truth in news and political debates. The fear is understandable; misuses can be pernicious. In the last few years, fake images have been employed in attempts to derail American political campaigns[10] and stoke international geopolitical conflict.[11] However, as a recent article in The Verge noted, the invention of deepfake technology has not led to the explosion of misinformation campaigns that some feared.[12] This may be in part because deepfakes are still easily detectable (as discussed below), or because much misinformation rests not on advances in technology but on fundamental psychological factors such as social affiliation, network effects, and confirmation bias (i.e., our tendency to accept what is said by the people with whom we affiliate, and our bias toward confirming our existing beliefs).[13] Whatever the reason, the uses so far have been limited. The most damaging use of deepfakes to date remains non-consensual pornography (i.e., grafting the faces of uninvolved women, usually celebrities, onto pornographic content), which has been weaponized in online harassment campaigns.[14]

Although the potential negative uses of deepfakes are disturbing, the technology also can be used for socially desirable purposes. This includes uses in expressive works, such as satirical or parodic depictions of public figures in fictional situations,[15] or depictions of historical figures in biopics or other educational, dramatic, or documentary content. Deepfake technology also could be used to create continuity in fictional works when actors die or age out of roles — building on precedent set by the producers of the film Furious 7, who used CGI to create composite shots of the film's star Paul Walker after his untimely death, and the forthcoming Star Wars: Episode IX, which will feature a posthumous appearance by Carrie Fisher.[16] As with any new creative capability, there are myriad uses we cannot yet imagine. Some have even proposed using deepfake technology to create therapeutic virtual-reality experiences, such as permitting a paraplegic who is unable to engage in conventional physical sex to experience virtual sex with a partner through consensual deepfake pornography.[17]

C. Detecting Deepfakes

The field of video and photography forensics has developed robust methods to detect conventional fake images, including looking for mismatched shadows and patterns, and evaluating metadata to find inconsistencies between an edited image and the time, location, or technical aspects of the original image's capture.[18] Advanced forensic examiners can even use the "fingerprint" created by imperfections in a digital camera's lens to authenticate the source of footage.[19]
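As a simple illustration of the metadata check, the sketch below uses Python with the Pillow imaging library (our choice for illustration; it is not a forensic tool) to dump an image's EXIF fields. The file name is hypothetical, and real forensic examination goes far beyond this.

```python
# A minimal sketch of EXIF metadata inspection with the Pillow library
# (pip install Pillow). It only surfaces fields whose values an examiner
# might compare against the image's claimed provenance.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path):
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to names
        print(f"{name}: {value}")

# A "DateTime" that postdates the claimed capture, or a "Software" field
# naming an editing program, can signal post-capture manipulation.
dump_exif("suspect_photo.jpg")  # hypothetical file name
```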

Like photos manipulated with Photoshop, current deepfake technology leaves recognizable artifacts on doctored video. Crude attempts are discernible to the naked eye, while machine-learning algorithms are themselves employed to detect sophisticated deepfakes, prompting an inevitable game of deep-learning cat-and-mouse.[20] Some of these detection algorithms are publicly available, though implementing them requires some technical ability.[21] Facebook announced in September 2018 that it is using its own proprietary technology, along with human review, to detect fake or doctored photos or videos.[22] The U.S. Defense Advanced Research Projects Agency (DARPA) is developing technology for "automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform."[23] This media forensics program, called "MediFor" for short, is billed as detecting manipulations that raise national-security concerns, such as those that might be used for political propaganda or misinformation campaigns.
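At inference time, such machine-learning detection reduces to scoring content with a trained classifier. The sketch below is purely illustrative: the "detector" is an untrained placeholder and the frames are random stand-ins; real detectors are trained on known fakes and look for subtle facial and compression artifacts.

```python
# Illustrative ML-based detection: score each video frame with a binary
# classifier and flag the video if the average "fake" probability is high.
import torch
import torch.nn as nn

detector = nn.Sequential(  # placeholder for a trained detector network
    nn.Flatten(), nn.Linear(3 * 64 * 64, 64), nn.ReLU(), nn.Linear(64, 1)
)

frames = torch.randn(30, 3, 64, 64)  # stand-in for decoded video frames
with torch.no_grad():
    fake_probs = torch.sigmoid(detector(frames)).squeeze(1)

print(f"mean fake probability: {fake_probs.mean().item():.2f}")
if fake_probs.mean().item() > 0.5:
    print("Video flagged as likely manipulated")
```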

II. LEGAL AND REGULATORY RESPONSE

A. Should We Outlaw Deepfakes?

As with any rapidly developing technology, efforts to police bad actors will inevitably struggle to keep pace with technological advancements. At the same time, there is some risk that mounting alarm about the most harmful types of deepfakes – what The Verge recently called "Deepfake panic"[24] – might lead to an overcorrection that stifles First Amendment-protected expression. Such concerns have already been raised, for example, by the entertainment industry, which organized to oppose a New York bill that would punish creators of deepfakes that incorporate people's images without their consent.[25] A Disney executive warned that, if passed, the New York proposal "would interfere with the right and ability of companies like ours to tell stories about real people and events. The public has an interest in those stories, and the First Amendment protects those who tell them."[26] A proposed federal ban – the "Malicious Deep Fake Prohibition Act of 2018"[27] – has also drawn fire for seeking to impose potentially unconstitutional restrictions.[28]

Ultimately, new legislation may be unnecessary in light of existing civil remedies and criminal statutes like the federal Computer Fraud and Abuse Act,[29] and myriad other criminal prohibitions including those against extortion, harassment, wire fraud, identity theft or false personation.[30] Moreover, the effect of any new law is questionable given that deepfake technology will remain accessible in other jurisdictions outside the United States.[31]

B. Legal Claims and Defenses For Deepfake Publishers and Creators

While the technology behind deepfakes may be new, the legal issues are not. A plaintiff who feels injured by a deepfake could seek redress through the common content-related tort and IP claims, including defamation, false light invasion of privacy, violation of the right of publicity, copyright infringement and false endorsement under the Lanham Act.

1. Defamation and False Light

In order to prevail on a defamation claim, the plaintiff would have to prove that the deepfake constituted a false and defamatory statement of fact about him or her; that it was published; and that the publisher acted with the requisite degree of fault. Public-figure plaintiffs, or any plaintiff seeking presumed or punitive damages, must meet the high bar of proving that the defendant acted with actual malice.[32] For the similar tort of false light invasion of privacy (in states where it is recognized), the plaintiff would need to plead that the deepfake placed him or her in a false light that is "highly offensive" to a reasonable person, and that the defendant acted with the requisite level of fault.

One could imagine, for example, a public figure suing over a deepfake video that puts words in their mouth that they never uttered, or places them in a situation that is embarrassing or at odds with their public reputation. Courts have dealt with low-tech versions of this same claim for years in the parody context. See, e.g., Hustler v. Falwell, 485 U.S. 46 (1988) (parody Campari advertisement in Hustler Magazine titled "Jerry Falwell talks about his first time," consisting of a fake interview in which Falwell describes his first sexual experience as taking place in an outhouse with his own mother, was non-actionable parody); Farah v. Esquire Magazine, 736 F.3d 528 (D.C. Cir. 2013) (parody article announcing that "birther" author and publisher were recalling their recently published book challenging President Obama's citizenship was non-actionable). In these cases, the uses were found to be protected parody because no reasonable reader would understand them as stating actual facts about their subjects.

Clearly, deepfake technology could be used in a parodic manner. But its very realism might complicate the defense. Context is key: if a video looks real to the untrained eye, is plausible in its content, and is presented as fact, it may be more difficult to argue that no reasonable person would understand it as conveying actual facts. On the other hand, a video that is created by a known source of humor or satire, or shows a public figure doing something absurd or outlandish, should not be protected any less than a more traditional parody.

Even if false, the deepfake must still be defamatory (or "highly offensive," for a false light claim). A deepfake video showing Mitch McConnell dancing like Bruno Mars might be false, but it is not defamatory. Mere allegations of "misleading editing" will not suffice if they do not create a defamatory statement of fact or implication, or a "highly offensive" and false impression. For example, in Virginia Citizens Def. League v. Couric, the plaintiffs sued Katie Couric and the makers of the documentary "Under the Gun" for editing the interview footage so that it appeared the plaintiffs sat in silence in response to Couric's question, "If there are no background checks for gun purchasers, how do you prevent felons or terrorists from purchasing a gun?"[33] In the actual interview, the plaintiffs did answer the question, albeit indirectly; the footage of the plaintiffs sitting silently, as shown in the film, came from B-roll as the filmmakers were calibrating the recording equipment. The district court dismissed the defamation claim and the Fourth Circuit affirmed, finding that it was not defamatory per se to show the plaintiffs being stumped by a specific policy question.[34]

If a deepfake involves a public figure, the actual malice doctrine will protect publishers who subjectively believed that the deepfake was real, and reported it as such. On the other hand, actual malice would not necessarily protect the creator of deepfake content, who may have the requisite subjective knowledge of falsity.

2. Right of Publicity

A person whose likeness is used without consent in a commercial manner can bring a right of publicity claim. The right of publicity is recognized by most states, either by statute or under common law, and varies by jurisdiction in its scope.[35]

The First Amendment limits application of a right of publicity claim. In New York, for example, courts have held that the right of publicity statute does not apply to "reports of newsworthy events or matters of public interest." The California Supreme Court has held that "the right of publicity cannot, consistent with the First Amendment, be a right to control the celebrity's image by censoring disagreeable portrayals."[36] Recently, the Ninth Circuit explained that in the right of publicity context, the First Amendment "safeguards the storytellers and artists who take the raw materials of life—including the stories of real individuals, ordinary or extraordinary—and transform them into art, be it articles, books, movies, or plays." Sarver v. Chartier, 813 F.3d 891, 905 (9th Cir. 2016).

Deepfakes have (thus far) not been widely used to generate financial gain.[37] But it is conceivable that we could start seeing them used more frequently in advertising and other commercial contexts. The technology should not, however, change the legal analysis for news reports, films, video games, and other expressive works.

3. Intentional Infliction of Emotional Distress

While intentional infliction of emotional distress ("IIED") claims are often pled as an afterthought and easily dismissed, IIED could be a viable claim in the deepfake porn context, as such content could rise to the level of "extreme and outrageous" conduct. If the plaintiff is a public figure, he or she would have to satisfy the actual malice standard. As the Court held in Falwell, the actual malice standard cannot be satisfied where the work in question is parody.[38] If the deepfake is not parody, however, an intentionally false depiction of a celebrity incorporating deepfake technology could meet this standard.

4. Copyright Infringement

If a deepfake uses copyrighted material, then the copyright owner may have an infringement claim against the deepfake creator. Copyright issues are most likely to arise with respect to deepfakes that use video from produced content such as television shows or movies. For individual victims of deepfakes, copyright claims may be possible if the victim owns the copyright in the photographs or videos being used. Even then, however, registration of the copyright is a prerequisite to commencing a claim for infringement.[39] For example, in one revenge porn case, a woman whose ex-boyfriend had published nude photos of her reluctantly had to register over 100 nude photographs of herself with the Copyright Office so that she could bring her copyright claim.[40] In that case, the victim was able to petition for a waiver from having her photographs stored in the Library of Congress, but her name and the titles of the images appear in the public catalog.[41]

However, even if a copyright infringement claim can be brought against the creator of a deepfake, it may be difficult to overcome a fair use defense.[42] The defendant would likely argue that the deepfake is "transformative," and thus shielded from liability under the fair use doctrine.[43] Many deepfakes may be considered parody, a category of transformative use that courts have found particularly compelling. For certain deepfakes, the creator may also be able to argue that the new work constitutes criticism or comment – two purposes explicitly listed in the fair use statute. One commentator has suggested that fair use is a substantial barrier to finding the creator of a deepfake liable for copyright infringement, "given that deepfakes are created from the cropping and algorithmic combination of still images" and thus likely to be found transformative.[44]

5. False Endorsement under the Lanham Act

A celebrity whose likeness is used in a deepfake may also bring a false endorsement claim pursuant to Section 43(a) of the Lanham Act, which prohibits use of a person's identity or likeness to falsely imply endorsement of an unrelated business, product or service by that person.[45]

In the context of deepfakes, a Lanham Act claim might arise where the victim is a celebrity and the deepfake is being used in connection with the sale of commercial goods. A less prominent person would be unlikely to prevail on a Lanham Act claim. For example, in Ji v. Bose Corp., the court held that an unknown model whose photograph was used in advertising for Bose products could not sustain a false endorsement claim under the Lanham Act without evidence that "Bose's target audience was familiar with her personality."[46] However, the plaintiff was permitted to maintain her claim for violation of her right of publicity.[47]

The First Amendment precludes application of the Lanham Act to creative works unless "the public interest in avoiding consumer confusion outweighs the public interest in free expression." Rogers v. Grimaldi, 875 F.2d 994, 999 (2d Cir. 1989) (use of "Fred and Ginger" as a film title did not state a Lanham Act or right of publicity claim). See also Twentieth Century Fox v. Empire Dist., Inc., 875 F.3d 1192 (9th Cir. 2017) (use of "Empire" title for television show, and its promotions, did not state a Lanham Act claim). As set forth in Rogers and its progeny, "the Lanham Act should not be applied to expressive works unless the use of the trademark or other identifying material has no artistic relevance to the underlying work whatsoever, or, if it has some artistic relevance, unless the trademark or other identifying material explicitly misleads as to the source or the content of the work." Twentieth Century Fox, 875 F.3d at 1196 (emphasis added) (internal quotation omitted); see Rogers, 875 F.2d at 999. "[T]he level of relevance merely must be above zero." E.S.S. Entm't 2000, Inc. v. Rock Star Videos, Inc., 547 F.3d 1095, 1100 (9th Cir. 2008). "To be explicitly misleading, the defendant's work must make some affirmative statement of the plaintiff's sponsorship or endorsement, beyond the mere use of plaintiff's name or other characteristic." Novalogic, Inc. v. Activision Blizzard, 41 F. Supp. 3d 885, 901 (C.D. Cal. 2013) (internal quotation omitted).

C. Potential Claims Against Platforms

The many challenges, discussed above, to holding creators liable for deepfakes may make the platforms that host and distribute deepfakes appealing defendants.

However, there may be even greater obstacles to suing platforms for deepfakes than there are for creators. The most significant barrier is Section 230 of the Communications Decency Act (the "CDA"), which immunizes platforms from most tort liability. While Section 230 expressly carves out intellectual property claims, platforms are likely to be protected from copyright claims under the DMCA safe harbor, and there have been no successful lawsuits alleging that a platform was secondarily liable for false endorsement under Section 43(a) of the Lanham Act.

1. Section 230

The CDA immunizes platforms from tort liability arising from material on their platforms. Section 230(c)(1) of the CDA states: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Courts have broadly interpreted this immunity to protect websites from any state-law claims based on content provided by third parties, so long as the sites themselves do not materially contribute to those aspects of the content that are offensive or illegal. That protection would likely preclude claims against social media platforms and other forums on which user-generated deepfakes might appear. However, there are exceptions to this immunity that leave platforms potentially liable for federal crimes and intellectual property claims.[48]

There is a possibility that platforms could be held liable for right of publicity claims notwithstanding the CDA.[49] The Ninth Circuit has rejected this idea, holding that "intellectual property" under Section 230 refers only to federal claims (like copyright infringement).[50] But other courts have disagreed, leaving the door open for a right of publicity claim to fit within Section 230's exception for IP claims.[51] Supporting that position is the Supreme Court's decision in Zacchini v. Scripps-Howard Broad. Co., which described the right of publicity as "closely analogous to the goals of patent and copyright law."[52]

2. Copyright – DMCA

As discussed above, copyright infringement claims for deepfakes are most likely to succeed where the deepfake incorporates entertainment content, such as television shows or movies, and where ownership of the relevant copyright is well-established. In such cases, the original content owner may serve notice under the Digital Millennium Copyright Act (the "DMCA").

The DMCA limits platforms' liability for material posted by users. This immunity is premised on the platform's compliance with the DMCA's notice-and-takedown procedures. Section 512(c) requires the platform to promptly take down allegedly offending content if the copyright owner provides notice that complies with the requirements of the statute.[53] Once such notice is given (or the platform is otherwise aware of facts or circumstances from which infringing activity is apparent), the platform must expeditiously remove or disable the content and inform the alleged infringer. If the original poster of the content provides a counter-notification, the platform must inform the party who alleged infringement, and if that party does not bring a claim for copyright infringement within 10 to 14 business days, the platform may restore the material in question. If the platform provider follows these procedures and certain other requirements of the statute (a designated copyright agent; a repeat-infringer policy; non-interference with "standard technical measures"), it can protect itself from copyright claims arising from user-created deepfakes.

3. False Endorsement Under the Lanham Act

If a plaintiff is able to bring a false endorsement claim under Section 43(a) of the Lanham Act against the creator of a deepfake, can he or she also sue the platform for secondary liability? It seems unlikely, and we are unaware of any case to date finding a platform secondarily liable for false endorsement.[54]

The bar for finding secondary liability for false endorsement is high. "A contributory infringer is one 'who, with knowledge of the infringing activity, induce[s], cause[s] or materially contribute[s] to the infringing conduct of another.'"[55] With respect to deepfakes, it seems unlikely that a platform would knowingly induce or contribute to the creation of a deepfake, but if that fact pattern presented itself, it is conceivable (albeit unlikely) that the platform could be held secondarily liable for false endorsement.

III. CONTENDING WITH DEEPFAKES IN THE NEWSROOM

Where does this leave journalists – and newsroom lawyers? As noted above, there is an ever-evolving set of tools to authenticate images and videos and flag deepfakes. Journalists and lawyers will need to educate themselves about the technical tools available to verify and authenticate content – including their strengths and weaknesses. But there will never be a solely technological fix; deepfake creators will inevitably update their methods to fix the "tells" that automated systems use to flag content as fake.[56]

The traditional tools of journalism – careful sourcing, editorial judgment, and a healthy dose of skepticism – are invaluable complements to technology. When reporting based on an image or video, journalists should consider – and try to contact – the source of the content, and interrogate the process by which it was obtained and originally created. Other tips include examining the metadata of the content, slowing down the footage to look for glitches that could indicate manipulation, and reverse image searching.[57] For example, reverse image searching of video frames allowed Reuters to discover that a video supposedly taken during the mass shooting in Christchurch, New Zealand was actually from a 2018 shooting in Florida.[58]
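The frame-extraction step behind reverse image searching a video can be done with a few lines of Python using the OpenCV library; the sketch below is illustrative, and the file names are hypothetical. The saved frames can then be uploaded by hand to a reverse image search engine.

```python
# Illustrative frame extraction for reverse image searching: save roughly
# one frame per second of a suspect video as a JPEG.
import cv2

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical file name
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS is unreadable
frame_index, saved = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:  # end of video (or unreadable file)
        break
    if frame_index % int(fps) == 0:  # roughly one frame per second
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
        saved += 1
    frame_index += 1

cap.release()
print(f"Saved {saved} frames for reverse image searching")
```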

In addition to the source of the video, it is important to seek comment from people depicted in it, or who may have knowledge of whether the content is authentic. When dealing with public figures, a denial is not generally sufficient to demonstrate actual malice. But it is helpful to consider the content of the denial – is it just a general claim of "fake news," or can the person point to specific instances of manipulation or inconsistency with the real event? Can others who were present confirm the events or statements depicted? Of course, people captured in unflattering footage have an incentive to cast doubt on its authenticity – and may invoke the technological capacity for deepfakes in a bid for plausible deniability. When the Washington Post published the infamous Access Hollywood tape during the 2016 campaign, then-candidate Trump acknowledged its authenticity and released a video apology. But after his inauguration, he started to suggest it was not actually his voice on the video.[59]

From a legal standpoint, the actual malice doctrine provides strong protection to a publisher that reports on what it genuinely – but mistakenly – believes to be an authentic video involving a public figure. But from a reputational standpoint, being fooled by a deepfake can damage public trust in the publisher's brand. Newsrooms are training reporters and editors to spot deepfakes; the Wall Street Journal formed an internal task force, the Media Forensics Committee, that is training journalists and developing best practices to weed out manipulated content.[60] As some commentators have noted, the deluge of manipulated information online has only sharpened the need for media organizations to serve as trusted intermediaries. By combining technological tools and traditional reporting skills, journalists can guide readers through an informational hall of mirrors – and build trusted brands and loyal readers along the way.[61]


Notes

[1] Jim Rosenfeld is a partner (New York), Brendan Charney is an associate (Los Angeles), Adam Rich is an associate (New York) and Alison Schary is an associate (Washington, D.C.) in the media group at Davis Wright Tremaine LLP. They litigate for and counsel clients in the media, technology and entertainment fields.

[2] Examples can be found at an online gallery prepared by the image authentication service Izitru at http://pth.izitru.com/1917_00_00.html

[3] https://www.washingtonpost.com/news/the-intersect/wp/2018/03/25/a-fake-photo-of-emma-gonzalez-went-viral-on-the-far-right-where-parkland-teens-are-villains/?noredirect=on&utm_term=.78c1189255b6

[4] https://www.washingtonpost.com/news/morning-mix/wp/2016/02/19/cruz-campaign-slammed-for-very-fake-looking-photoshop-image-of-obama-rubio-handshake/?noredirect=on&utm_term=.c33d8d906621

[5] For a comprehensive discussion of this topic, see Chesney & Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. (forthcoming 2019), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954.

[6] http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414

[7] A startup called Lyrebird is developing technology that uses artificial intelligence to mimic a subject's speech pattern; the company demonstrated its technology with a fake conversation between Presidents Obama and Trump purporting to praise the company's technology. See https://www.theverge.com/2017/4/24/15406882/ai-voice-synthesis-copy-human-speech-lyrebird

[8] https://thenewswheel.com/captcha-self-driving-cars/

[9] https://www.nytimes.com/interactive/2018/01/02/technology/ai-generated-photos.html

[10] During the 2004 election, opponents of Democratic candidate John Kerry attacked his patriotism and military service in Vietnam by releasing a phony photo falsely edited to depict Kerry at a rally with actress and anti-war activist Jane Fonda. https://www.theguardian.com/world/2004/aug/06/usa.uselections2004

[11] https://observers.france24.com/en/20170713-fake-images-causes-flare-violence-west-bengal

[12] https://www.theverge.com/2019/3/5/18251736/deepfake-propaganda-misinformation-troll-video-hoax

[13] Id.

[14] https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/?utm_term=.dbaf15ddb7de

[15] https://www.buzzfeednews.com/article/davidmack/obama-fake-news-jordan-peele-psa-video-buzzfeed

[16] https://www.polygon.com/2015/10/20/9577863/furious-7-used-350-cgi-shots-of-paul-walker; https://people.com/movies/carrie-fisher-to-appear-in-star-wars-episode-ix/.

[17] https://www.menshealth.com/sex-women/a19755663/deepfakes-porn-reddit-pornhub/

[18] See, e.g., https://www.popsci.com/use-photo-forensics-to-spot-faked-images#page-9

[19] http://ws.binghamton.edu/fridrich/Research/double.pdf

[20] https://www.theverge.com/2019/3/5/18251736/deepfake-propaganda-misinformation-troll-video-hoax

[21] https://github.com/mvaleriani/Shallow

[22] https://newsroom.fb.com/news/2018/09/expanding-fact-checking/

[23] https://www.darpa.mil/program/media-forensics

[24] https://www.theverge.com/2019/3/5/18251736/deepfake-propaganda-misinformation-troll-video-hoax

[25] https://www.hollywoodreporter.com/thr-esq/disney-new-yorks-proposal-curb-pornographic-deepfakes-1119170

[26] https://www.rightofpublicityroadmap.com/sites/default/files/pdfs/disney_opposition_letters_a8155b.pdf

[27] https://www.congress.gov/bill/115th-congress/senate-bill/3805/text

[28] https://www.eff.org/deeplinks/2018/02/we-dont-need-new-laws-faked-videos-we-already-have-them

[29] 18 U.S.C. § 1030(a)(2) (2012).

[30] See, e.g., https://reason.com/2019/01/31/should-congress-pass-a-deep-fakes-law; https://www.eff.org/deeplinks/2018/02/we-dont-need-new-laws-faked-videos-we-already-have-them; Chesney & Citron, supra note 5, at 42-45.

[31] https://iapp.org/news/a/privacy-law-and-resolving-deepfakes-online/

[32] New York Times v. Sullivan, 376 U.S. 254 (1964); Gertz v. Robert Welch, 418 U.S. 323 (1974).

[33] 910 F.3d 780 (4th Cir. 2018).

[34] Id.

[35] For example, some states limit the right to those who have commercially exploited their likeness, while others permit claims by any individual, famous or not. States also differ in whether the right applies posthumously, the type of uses that it covers, and defenses available.

[36] Comedy III Productions, Inc. v. Gary Saderup, Inc., 106 Cal. Rptr. 2d 126 (2001).

[37] See discussion in Chesney & Citron, supra note 5, at 35.

[38] Hustler, 485 U.S. at 56.

[39] 17 U.S.C. § 411.

[40] https://money.cnn.com/2015/04/26/technology/copyright-boobs-revenge-porn/index.html?iid=EL

[41] Id.

[42] 17 U.S.C. § 107. Section 107 states that:

"In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include—(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use upon the potential market for or value of the copyrighted work."

[43] See, e.g., Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 579 (1994).

[44] https://iapp.org/news/a/privacy-law-and-resolving-deepfakes-online/

[45] 15 U.S.C. § 1125(a).

[46] 538 F. Supp. 2d 349, 351 (D. Mass. 2008).

[47] Id.

[48] 47 U.S.C. § 230(e)(2).

[49] https://www.lawfareblog.com/combatting-deep-fakes-through-right-publicity

[50] Id. (citing Perfect 10, Inc. v. CCBill LLC, 488 F.3d 1102 (9th Cir. 2007)).

[51] Id. (citing Universal Comms. Sys., Inc. v. Lycos, Inc., 478 F.3d 413 (1st Cir. 2007); Atlantic Recording Corp. v. Project Playlist, Inc., 603 F.Supp.2d 690 (S.D.N.Y. 2009); Doe v. Friendfinder Network, Inc., 2008 WL 803947 (D.N.H. Mar. 27, 2008)).

[52] Id.

[53] These requirements include that the party giving notice provide: (1) identification of the copyrighted work claimed to have been infringed and information reasonably sufficient to permit the service provider to locate the material; (2) information reasonably sufficient to permit the service provider to contact the complaining party, such as an address, telephone number, and email address; (3) a statement that the complaining party has a good-faith belief that use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law; and (4) a statement that the information in the notification is accurate, and under penalty of perjury, that the complaining party is authorized to act on behalf of the owner of an exclusive right that is allegedly infringed.

[54] See, e.g., Lepore v. NL Brand Holdings LLC, 2017 WL 4712633 (S.D.N.Y. 2017) (finding no secondary liability for false endorsement and noting lack of case law to the contrary).

[55] Id. (quoting Warner Bros. Entm't Inc. v. Ideal World Direct, 516 F. Supp. 2d 261, 267-68 (S.D.N.Y. 2007)).

[56] https://www.wired.com/story/these-new-tricks-can-outsmart-deepfake-videosfor-now/

[57] https://www.poynter.org/fact-checking/2018/10-tips-for-verifying-viral-social-media-videos/

[58] https://digiday.com/media/reuters-created-a-deepfake-video-to-train-its-journalists-against-fake-news/

[59] https://www.nytimes.com/2017/11/28/us/politics/trump-access-hollywood-tape.html

[60] https://www.niemanlab.org/2018/11/how-the-wall-street-journal-is-preparing-its-journalists-to-detect-deepfakes/

[61] https://www.cjr.org/tow_center/reporting-machine-reality-deepfakes-diakopoulos-journalism.php
