Technology, Recent Event, 137 Harv. L. Rev. 1284

Voluntary Commitments from Leading Artificial Intelligence Companies on July 21, 2023

Tech Companies Agree to Develop Mechanisms for Identifying AI-Generated Works.

Convening at the White House last July, seven leading artificial intelligence (AI) companies made a series of voluntary commitments to “move toward safe, secure, and transparent development of AI technology.”1 Among the eight commitments was a promise to invest in developing mechanisms, like watermarking, to label AI-generated content.2 While the White House categorized this commitment as a consumer protection measure,3 AI companies could stand to benefit too. If AI companies commit to watermarking (or are required to watermark) AI-generated works, they may argue that they ought also to receive the copyrights to those outputs by default. Although it may not have been the White House’s explicit intention, this copyright framework could be a good thing. If such a scheme were implemented, the value of those copyrights could, in turn, incentivize companies to continue developing and maintaining effective watermarking tools. In this way, the White House commitment — if taken seriously — could accomplish a subtle trade: exchanging the economic value of copyrights for accurate and reliable AI identification.

The commitments are part of the Biden-Harris Administration’s initiative to “seize the tremendous promise and manage the risks” of AI,4 a novel technology with broad ramifications. The commitments center on three principles of safety, security, and trust,5 and reflect an ever-growing interest in the governance of artificial intelligence.6 In its statement announcing the commitments, the Biden-Harris Administration also highlighted its broader approach to safe and responsible AI development.7 The statement described a variety of meetings that the President and Vice President had convened with AI companies, researchers, and stakeholders; noted the publication of a framework for AI rights;8 and detailed a recent Executive Order9 that addresses algorithmic bias in technologies including AI.10 These initiatives suggest a swing toward a proactive regulatory response to AI, unlike the more reactive measures directed at other digital-era developments, like social media.11

However, in part due to the disconnect between fast-paced technical development and slower-paced government action,12 early attempts to regulate AI have thus far failed to bring down the hammer, at least in the United States.13 The new White House commitments have already drawn criticism to this effect.14 Commentators have critiqued the commitments as vague, “sensible-sounding pledge[s] with lots of wiggle room”15 — pledges, in other words, that don’t actually require meaningful action from the companies. The commitments are not “backed by the force of law” and have no accompanying enforcement mechanism.16 The lack of accountability metrics also effectively takes the pressure off companies to solve difficult technical challenges, like detecting AI-generated outputs after they’re released to the public.17 These critiques — and the fact that many of the commitments are only iterations of precautions already taken by AI companies18 — suggest that the White House commitments ultimately lack teeth.

Still, while the related White House press release offers few specifics on how the commitments might be achieved, the fifth commitment — to identify AI-generated content — is a notable exception: it names watermarking, in particular, as a potential identification mechanism.19 Unlike tools that allow viewers to discern whether something was AI-generated after it’s been created and shared,20 watermarking occurs at the point of generation — that is, the AI companies themselves, rather than downstream viewers, take responsibility for authentication. Watermarking is a concept rooted in copyright law and is particularly important in the digital sphere.21 It acts as a type of “copyright management information,” which can be used to identify a copyrighted work,22 and under the Digital Millennium Copyright Act,23 removing or altering a watermark without the copyright owner’s consent is prohibited.24
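
To make the point-of-generation idea concrete, consider the following minimal sketch, which assumes a deliberately simplistic least-significant-bit scheme on image pixels; every name here is illustrative, and production tools rely on far more robust and imperceptible techniques than this.

```python
import numpy as np

# Hypothetical identifying pattern; real schemes embed richer, harder-to-strip signals.
WATERMARK_BITS = np.unpackbits(np.frombuffer(b"AI-GEN", dtype=np.uint8))

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Overwrite the lowest bit of the first pixels with the identifying
    pattern at the moment the image is generated."""
    flat = image.flatten()  # flatten() returns a copy, so the input is untouched
    n = len(WATERMARK_BITS)
    flat[:n] = (flat[:n] & 0xFE) | WATERMARK_BITS
    return flat.reshape(image.shape)

def detect_watermark(image: np.ndarray) -> bool:
    """Check whether the identifying pattern is present in the lowest bits."""
    n = len(WATERMARK_BITS)
    return bool(np.array_equal(image.flatten()[:n] & 1, WATERMARK_BITS))

# The generator tags its own output before release; downstream viewers only verify.
generated = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
tagged = embed_watermark(generated)
assert detect_watermark(tagged)
```

The asymmetry in the sketch mirrors the point above: embedding requires access to the output at generation time, which only the AI company has, while anyone downstream can run detection.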

Given this connection between watermarking and copyright, the commitment to develop technical mechanisms to label AI-generated content may open the door for AI firms to secure economic value through copyrights, and that could be a meaningful, positive development. Legally, who owns AI-generated works is an unsettled question.25 But watermarks, as a form of copyright management information, are a typical sign of copyright ownership.26 The established connection between watermarking and copyright, backed by momentum from the White House commitment, could lay the foundation for a legal regime in which AI companies retain ownership of copyrights to AI-generated works, at least as a default matter. Clarifying ownership could be beneficial in several ways: it may provide a more administrable framework for judicial oversight, incentivize the implementation of effective identification mechanisms for AI-generated works, and lay the groundwork for more robust regulatory schemes in the future.

The question of who will own the copyrights to AI-generated content remains an open one for the courts, but commitments to add watermarks could shape that debate because watermarks are already understood as a type of copyright management information. An intuitive inference follows: if watermarks symbolize ownership, and if companies must authenticate and watermark content at the time it’s generated, then the companies have a claim to copyright ownership of that content. Indeed, if courts already associate watermarking with copyright ownership, it would be unusual to award that ownership to someone other than the entity responsible for creating the watermark, especially given the research and development costs of watermarking technology. To the extent AI-generated works are copyrightable,27 the White House commitment regarding watermarking may ultimately suggest ownership rests with the AI companies.

Even small suggestions like this could have a significant impact because the relationship between AI and copyright law is already live in the courts.28 Painting in broad strokes, copyright concerns about AI fall into one of two buckets: (1) whether AI companies infringe on existing copyrights by training their models on copyrighted material; or (2) whether the content generated by AI models is copyrightable and, if so, to whom those copyrights belong.29 The latter of these, the “output” question, is familiar. The Supreme Court took up a similar question in 1884 when it decided in Burrow-Giles Lithographic Co. v. Sarony30 that photographs were constitutionally eligible for copyright.31 At the time, people questioned whether photographs produced by photographers were similar to “writings” produced by “authors,” protectable under the Intellectual Property Clause.32 Ultimately, the Court adopted an expansive reading of both terms and determined that “[t]he only reason why photographs were not included in the [Copyright A]ct of 1802 is, probably, that they did not exist.”33 The parallels between photography and artificial intelligence are limited, but, at its core, the Court’s reasoning in Burrow-Giles suggests that copyright law is meant to adapt to new technologies and that the scope of copyright law can be expanded. The question today is how that should happen with respect to AI.

As a matter of policy, a straightforward copyright scheme that incentivizes the proper identification and moderation of AI-generated content could solve real issues for AI regulation. Concerns about unidentified AI-generated images, which can be “deepfakes” or otherwise manipulated, are particularly acute.34 At present, it’s not clear who owns AI-generated content,35 who is liable for AI-generated content,36 and what content exists in the public domain.37 With respect to something like a deepfake, then, it’s not clear who can be held responsible if, for example, someone’s likeness is used without their consent. Other complicated questions, like what happens if a model generates identical outputs for different users invoking different prompts,38 also remain unanswered. Not only do these open questions leave creators and developers in the dark with respect to their own protections,39 but they also contribute to systemic inefficiencies within the broader copyright regime, as the legitimate reuse of potentially copyrighted works is made difficult by challenges in tracing the provenance of AI-generated works.40

Defaulting copyrights to AI companies may be a desirable option, in large part because it’s an administrable standard for courts. There are several ideas about how and to whom the copyrights for AI-generated content might be allocated.41 AI users have made claims to copyright ownership, arguing that prompting an AI model is sufficiently meaningful creative input to satisfy the requirements for authorship defined by the Copyright Act.42 However, the Copyright Office’s Zarya of the Dawn decision highlights the problem with this approach: it effectively requires that creators litigate the extent of their creative efforts.43 Professor Lawrence Lessig has argued that such an approach is ultimately worse for AI users and creators because it only further complicates copyright law, such that “copyright itself [becomes] the right to hire a lawyer.”44 By contrast, a system that defaults copyrights to AI companies could establish an industry standard that avoids complicated line-drawing exercises about the value of the creative input in any given AI-generated work.45

Defaulting copyrights to AI companies could also incentivize the implementation of effective identification mechanisms for AI-generated works. With the economic value of copyrights on the table, AI companies would have an incentive to effectively identify the content generated on their platforms, even if they later chose to contract around the default allocation of copyrights.46 Front-end identification matters because future regulations may require a reliable inventory of AI-generated works — having accurate identification mechanisms in place would significantly ease the administration of future regulatory schemes. Such a default could likewise help ensure that companies remain economically invested in maintaining proper identification mechanisms, despite any potential legal risks associated with claiming user outputs. In this way, the White House commitment could serve as a precursor to a more robust and efficient copyright system for AI-generated content in the future.
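
As a rough illustration of what such a “reliable inventory” might look like, the toy registry below maps an embedded watermark identifier to a generation record naming the model and the default rights holder. Everything here is a hypothetical sketch, not a description of any existing or proposed system.

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

@dataclass
class GenerationRecord:
    watermark_id: str    # identifier embedded in the work itself
    model: str           # which model produced the work
    created_at: str      # when it was generated
    rights_holder: str   # default copyright owner under the proposed scheme

class ProvenanceRegistry:
    """Toy inventory of AI-generated works, keyed by watermark identifier."""

    def __init__(self) -> None:
        self._records: dict[str, GenerationRecord] = {}

    def register(self, content: bytes, model: str, rights_holder: str) -> str:
        # Derive a stable identifier from the content; a real system would
        # use whatever identifier the watermarking tool actually embeds.
        watermark_id = hashlib.sha256(content).hexdigest()[:16]
        self._records[watermark_id] = GenerationRecord(
            watermark_id=watermark_id,
            model=model,
            created_at=datetime.now(timezone.utc).isoformat(),
            rights_holder=rights_holder,
        )
        return watermark_id

    def trace(self, watermark_id: str) -> GenerationRecord | None:
        """Resolve a watermark found in the wild back to its origin."""
        return self._records.get(watermark_id)

# Registration happens at generation time; tracing can happen years later.
registry = ProvenanceRegistry()
wid = registry.register(b"<generated image bytes>", "example-model", "Example AI Co.")
print(registry.trace(wid))
```

In principle, records like these would turn the open questions above (who owns a given output, who is responsible for it, whether it is AI-generated at all) into lookups rather than disputed matters of provenance.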

However, it’s difficult to predict precisely how this incentive structure would intersect with other corporate priorities. Companies may decide that the value of copyrights doesn’t justify assuming potential liability for the range of outputs that users create on their platforms. That calculus would disincentivize companies from accurately identifying AI-generated content, and if this happened, the watermarking commitment would be, as the critics suggest, toothless. But companies are already pursuing identification mechanisms pursuant to the White House commitments.47 For example, Google recently rolled out a new tool called SynthID for watermarking content generated on its photorealistic text-to-image model, Imagen.48 Google’s early movement in this space is evidence that at least one major AI player has done the calculus and determined that investing in watermarking tools is worth potentially exposing its platform to new risks. Historically, it has paid off for tech companies to engage with regulators on the front end,49 and regulators may think about paying out — with copyrights as currency — once again. Failing to capitalize on momentum generated by the White House commitments could be a missed opportunity to promote a scheme that is administrable and mutually beneficial to users and companies alike.

That said, receiving all copyrights may be too great a “reward” for AI companies’ efforts to identify and label AI-generated content, especially when such efforts could simply be required by statute, with nothing exchanged in return. The underlying fear is that AI companies will be unduly compensated for the creative efforts of AI users and that creators and artists will suffer as a result.50 This concern is legitimate, especially because ensuring that creators are fairly compensated is one of the functions of copyright law generally.51 However, a straightforward default system does not necessarily disincentivize or devalue creation in copyright industries. To the contrary, a transparent copyright regime for AI-generated works can contribute to a well-functioning market for the copyrights themselves by ensuring that AI-generated works continue to be identified as AI-generated when they are reused and reproduced.52 AI companies can still contract away their copyrights, and the market for copyrights may in and of itself incentivize users to choose one platform over another, but establishing a transparent system that promotes effective identification of AI-generated content is the first step in creating a market that works for creators at all.53

Copyright law has always been about incentives.54 Exchanging the responsibility of AI identification for the value of AI copyrights would fit squarely within this system. The surest way to cement these incentives would be to amend U.S. copyright law to codify the watermarking requirement alongside a default of copyright ownership for AI companies. But by securing the initial commitment of AI companies to develop identification mechanisms, like watermarking, the White House may have tilted the debate over AI copyright ownership toward AI companies and opened the door for this mutually beneficial trade to materialize.

Footnotes
  1. ^ Press Release, The White House, FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI (July 21, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai [https://perma.cc/Q8QS-3AGS]. The seven companies were Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. Id.

  2. ^ Id.

  3. ^ See id. (describing the purpose of the identification commitment as “Earning the Public’s Trust”).

  4. ^ Id.

  5. ^ Id.

  6. ^ See, e.g., Tom Wheeler, The Three Challenges of AI Regulation, Brookings Inst. (June 15, 2023), https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation [https://perma.cc/23CV-ZJQM] (“The drum beat of artificial intelligence corporate chieftains calling for government regulation of their activities is mounting . . . .”).

  7. ^ See Press Release, The White House, supra note 1.

  8. ^ Blueprint for an AI Bill of Rights, White House, https://www.whitehouse.gov/ostp/ai-bill-of-rights [https://perma.cc/CC8B-436Z].

  9. ^ Exec. Order No. 14,091, 88 Fed. Reg. 10,825 (Feb. 16, 2023).

  10. ^ See Press Release, The White House, supra note 1.

  11. ^ See Julia Zorthian, OpenAI CEO Sam Altman Asks Congress to Regulate AI, TIME (May 16, 2023, 6:43 PM), https://time.com/6280372/sam-altman-chatgpt-regulate-ai [https://perma.cc/S6EP-4FLE] (“Congress failed to meet the moment on social media. . . . Now we have the obligation to do it on AI before the threats and the risks become real.” (quoting Senator Richard Blumenthal, Democrat of Connecticut)).

  12. ^ Wheeler, supra note 6 (“The challenge [is] how to protect the public interest in a race that promises to be the fastest ever run yet is happening without a referee.”).

  13. ^ See Faiza Patel & Ivey Dyson, The Perils and Promise of AI Regulation, Just Sec. (July 26, 2023), https://www.justsecurity.org/87344/the-perils-and-promise-of-ai-regulation [https://perma.cc/DQ3L-DYLY].

  14. ^ See, e.g., Press Release, Caitriona Fitzgerald, Deputy Dir., Elec. Priv. Info. Ctr., White House Announces New, Voluntary Commitments from Leading AI Companies to Manage AI Risks (July 24, 2023), https://epic.org/white-house-announces-new-voluntary-commitments-from-leading-a-i-companies-to-manage-a-i-risks [https://perma.cc/2GZ6-XCGP] (“[V]oluntary commitments are not enough when it comes to Big Tech.”).

  15. ^ See Kevin Roose, How Do the White House’s A.I. Commitments Stack Up?, N.Y. Times (July 22, 2023), https://www.nytimes.com/2023/07/22/technology/ai-regulation-white-house.html [https://perma.cc/UAL6-AWLG].

  16. ^ Id.

  17. ^ See id.; see also Stuart A. Thompson & Tiffany Hsu, How Easy Is It to Fool A.I.-Detection Tools?, N.Y. Times (June 28, 2023), https://www.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html [https://perma.cc/99MQ-H7SK] (“In general I don’t think [AI detection technologies are] great, and I’m not optimistic that they will be . . . .” (quoting Professor Chenhao Tan, Director of the University of Chicago Human+AI research lab)).

  18. ^ Roose, supra note 15.

  19. ^ Press Release, The White House, supra note 1.

  20. ^ See, e.g., Thompson & Hsu, supra note 17.

  21. ^ See Brian Leubitz, Note, Digital Millennium? Technological Protections for Copyright on the Internet, 11 Tex. Intell. Prop. L.J. 417, 435–40 (2003) (proposing digital watermarking technology as a scheme by which to tag ownership for copyright holders).

  22. ^ See 17 U.S.C. § 1202(c); Elga A. Goodman et al., 50B New Jersey Practice, Business Law Deskbook § 36:47 (2022–2023 ed.) (cataloging how courts have held that digitally embedded watermarks meet the definition of copyright management information in § 1202(c)).

  23. ^ Pub. L. No. 105-304, 112 Stat. 2860 (1998) (codified as amended in scattered sections of 17 and 28 U.S.C.).

  24. ^ 17 U.S.C. § 1202(b).

  25. ^ See Gil Appel et al., Generative AI Has an Intellectual Property Problem, Harv. Bus. Rev. (Apr. 7, 2023), https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem [https://perma.cc/S5D9-XDDV].

  26. ^ See 17 U.S.C. § 1202(b) (requiring consent of the copyright owner before a watermark is removed).

  27. ^ This is a significant assumption, and courts are only just beginning to address this question. Most recently, in Thaler v. Perlmutter, No. 22-1564, 2023 WL 5333236 (D.D.C. Aug. 18, 2023), the District Court for the District of Columbia rejected the copyright application for a work that was described as “autonomously created by a computer algorithm running on a machine.” Id. at *1. Critically, though, while the court rejected the plaintiff’s claim in this case, it acknowledged that AI is pushing the boundaries of copyright law and prompting questions about “how copyright might best be used to incentivize creative works involving AI.” Id. at *6. The court explicitly qualified its decision by noting that the Thaler case is “not nearly so complex” as to present the kind of “challenging questions” otherwise prompted by the changing relationship between AI and copyright law and explained that the plaintiff’s attempts to assert new facts beyond the administrative record were unavailing under the Administrative Procedure Act. Id. The fact that the court concluded this way, opining on the particular plaintiff’s procedural shortcomings, suggests that the possibility of copyrighting AI-generated content is still very much a live and unsettled issue.

  28. ^ See Appel et al., supra note 25; see also Tiana Loving, Current AI Copyright Cases — Part 1: The Unauthorized Use of Copyrighted Material as Training Data, Copyright All. (Mar. 30, 2023), https://copyrightalliance.org/current-ai-copyright-cases-part-1 [https://perma.cc/2UJ5-KRNK]; Tiana Loving, Current AI Copyright Cases — Part 2: Cases/Disputes Involving AI Copyright Authorship, Copyright All. (Apr. 6, 2023), https://copyrightalliance.org/current-ai-copyright-cases-part-2 [https://perma.cc/Z24C-8SEV].

  29. ^ Christopher T. Zirpoli, Cong. Rsch. Serv., LSB10922, Generative Artificial Intelligence and Copyright Law 3 (2023), https://crsreports.congress.gov/product/pdf/LSB/LSB10922 [https://perma.cc/5MJD-XALF].

  30. ^ 111 U.S. 53 (1884).

  31. ^ Id. at 58.

  32. ^ Id. at 56 (citing U.S. Const. art. I, § 8, cl. 8).

  33. ^ Id. at 58.

  34. ^ See Reuters, Deepfakes Are Biggest AI Concern, Says Microsoft President, The Guardian (Oct. 28, 2023, 7:15 AM), https://www.theguardian.com/technology/2023/may/25/deepfakes-ai-concern-microsoft-brad-smith [https://perma.cc/QX2F-3JJK]; Jeffrey Gottfried, About Three-Quarters of Americans Favor Steps to Restrict Altered Videos and Images, Pew Rsch. Ctr. (June 14, 2019), https://www.pewresearch.org/short-reads/2019/06/14/about-three-quarters-of-americans-favor-steps-to-restrict-altered-videos-and-images [https://perma.cc/Y8BF-3S7A].

  35. ^ See, e.g., Joe McKendrick, Who Ultimately Owns Content Generated by ChatGPT and Other AI Platforms?, Forbes (Dec. 21, 2022, 12:59 PM), https://www.forbes.com/sites/joemckendrick/2022/12/21/who-ultimately-owns-content-generated-by-chatgpt-and-other-ai-platforms [https://perma.cc/W9BA-TVG6].

  36. ^ See, e.g., Kristin Rheins, The Debate over Liability for AI-Generated Content, Progressive Pol’y Inst. (Aug. 8, 2023), https://www.progressivepolicy.org/blogs/the-debate-over-liability-for-ai-generated-content [https://perma.cc/X4WU-DU54].

  37. ^ See, e.g., Brent Moran & Brigitte Vézina, Artificial Intelligence and Creativity: Why We’re Against Copyright Protection for AI-Generated Output, Creative Commons (Aug. 10, 2020), https://creativecommons.org/2020/08/10/no-copyright-protection-for-ai-generated-output [https://perma.cc/2EPE-4HCS] (reporting that almost seventy percent of respondents to an “admittedly unscientific Twitter poll” believed that AI-generated content belongs in the public domain).

  38. ^ E.g., McKendrick, supra note 35. It’s worth noting that defaulting copyrights to AI companies could also provide a solution to this problem, at least as an initial matter. If the copyrights for all AI-generated outputs belong to the companies, rather than to individual users, it eliminates the possibility of one user suing another on a theory that the later-created output infringes on the earlier-created output.

  39. ^ See Copyright Office Holds Listening Session on Copyright Issues in AI-Generated Visual Works, Authors All., https://www.authorsalliance.org/2023/05/04/copyright-office-holds-listening-session-on-copyright-issues-in-ai-generated-visual-works [https://perma.cc/L9JN-YX6M].

  40. ^ See generally John Mark Ockerbloom, Copyright and Provenance: Some Practical Problems, Bull. Tech. Comm. on Data Eng’g, Dec. 2007, at 51 (explaining inefficiencies in provenance and suggesting improved copyright clearance protocols).

  41. ^ See generally Victor M. Palace, Note, What if Artificial Intelligence Wrote This? Artificial Intelligence and Copyright Law, 71 Fla. L. Rev. 217, 231–41 (2019) (identifying AI and AI users, programmers, and companies as potential copyright owners).

  42. ^ Letter from Robert J. Kasunic, Assoc. Reg. of Copyrights & Dir. of the Off. of Pol’y & Prac., U.S. Copyright Off., to Van Lindberg, Taylor English Duma LLP 2 (Feb. 21, 2023), https://www.copyright.gov/docs/zarya-of-the-dawn.pdf [https://perma.cc/RB83-W4FM] (Zarya of the Dawn copyright registration decision, Registration #VAu001480196).

  43. ^ See id. at 3 (citing Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 345 (1991)).

  44. ^ Lawrence Lessig, For AI Copyright (for AI Artists), Medium (May 30, 2023), https://lessig.medium.com/for-ai-copyright-for-ai-artists-ca6221932811 [https://perma.cc/CX9B-MKUE].

  45. ^ See id.

  46. ^ OpenAI, for example, initially established in a past iteration of the terms of service for its image generation program DALL·E that “OpenAI will not assert copyright over Content generated by the API for you or your end users.” Jessica Rizzo, Who Will Own the Art of the Future?, WIRED (July 27, 2022, 11:44 AM), https://www.wired.com/story/openai-dalle-copyright-intellectual-property-art [https://perma.cc/N4AY-TM2E].

  47. ^ See, e.g., Melissa Heikkilä, Google DeepMind Has Launched a Watermarking Tool for AI-Generated Images, MIT Tech. Rev. (Aug. 29, 2023), https://www.technologyreview.com/2023/08/29/1078620/google-deepmind-has-launched-a-watermarking-tool-for-ai-generated-images [https://perma.cc/463J-KFVV].

  48. ^ Sven Gowal & Pushmeet Kohli, Identifying AI-Generated Images with SynthID, Google DeepMind (Aug. 29, 2023), https://www.deepmind.com/blog/identifying-ai-generated-images-with-synthid [https://perma.cc/3T86-VAZX]. In a blog post describing the tool, the company emphasized the importance of “upholding trust between creators and users across society” and linked to the White House press release. Id.

  49. ^ Dan Hays, Shaping the Future of Tech Industry Regulation, PwC, https://www.pwc.com/us/en/industries/tmt/library/future-of-tech-regulation.html [https://perma.cc/Q86E-R9QG] (explaining the benefits for tech companies of proactive engagement with regulators).

  50. ^ See, e.g., Aron Brand, Is A.I. the Death of Art? Or the Future of Creativity?, Medium: MLearning.ai (Sept. 3, 2022), https://medium.com/mlearning-ai/is-a-i-the-death-of-art-or-the-future-of-creativity-78ed410673d3 [https://perma.cc/MJ94-B8K8].

  51. ^ See Marcel Boyer, Efficiency Considerations in Copyright Protection, 1 Rev. Econ. Rsch. on Copyright Issues, no. 2, 2004, at 11, 16 (“[T]he notion of ‘proper encouragement’ [in copyright law] must rest on a proper balance between the interests of society in fostering high quality creativity in the information and cultural industrial sector, sometimes referred to as the interests of the creators, and the interests of society in fostering the consumption and use of the goods and services produced by the information and cultural industrial sector, sometimes referred to as the interests of the public at large.”).

  52. ^ See id. at 24 (“A well functioning market for copyrights requires that those copyrights be clearly defined, affirmed and enforced.”).

  53. ^ See Lessig, supra note 44 (arguing that, for the benefit of artists, “the copyright system itself needs to enter the 21st century, with technologies that make identifying ownership simple”); see also Boyer, supra note 51, at 25 (“One should expect that a strong and transparent copyright framework would likewise foster cultural development and diversity as well as contributing to the social well being of all.”).

  54. ^ See Atilla Kasap, Note, Copyright and Creative Artificial Intelligence (AI) Systems: A Twenty-First Century Approach to Authorship of AI-Generated Works in the United States, 19 Wake Forest J. Bus. & Intell. Prop. L. 337, 359 (2019).
