- Core thesis: The AI authorship debate cannot be resolved by applying existing copyright frameworks because those frameworks were built on a fiction of individual creative origination that AI makes transparent — we need a new theory of what intellectual property is for
- Copyright law has always been a pragmatic bargain, not a recognition of natural rights — AI disrupts the bargain without touching the underlying philosophical foundations
- The strongest counterargument is that the distinction between human and AI creativity is real and morally relevant, and the law should preserve it
- Practical implication: tech workers and content creators should engage with IP policy not as rights holders defending territory but as participants in a system whose design has public interest implications
Section 1 — The Problem
The litigation that began in 2023 and has continued through 2026 has not produced the philosophical clarity it might have. Courts in different jurisdictions have reached inconsistent conclusions about whether AI training on copyrighted works constitutes infringement, whether AI-generated outputs qualify for copyright protection, and who owns such rights when they exist. The legislative response has been fragmentary: new disclosure requirements here, new exceptions there, but no coherent framework.
The legal confusion reflects genuine philosophical confusion about what intellectual property is for and what "authorship" actually means. These questions are not merely academic. They determine who gets paid for creative work, what incentives exist for producing it, how cultural knowledge is accumulated and transmitted, and what relationship individual human creativity bears to the collective cultural substrate from which it always draws.
AI forces these questions into the open because it makes explicit what was always true but easy to ignore: creative work does not emerge from autonomous individual genius. It emerges from extensive engagement with existing work, in cultural contexts that shape what counts as creative, in response to audiences whose expectations frame what is possible. The author was always already a conduit, not a pure originator. AI makes this undeniable.
Section 2 — The Argument
Copyright law in its current form reflects what scholars call the "Romantic author" ideology: the idea that creative works are original expressions of individual human minds, and that this origination grounds the creator's property right. This ideology has been extraordinarily effective as political rhetoric — it makes copyright feel like a natural right rather than a legal construct — but it has always been philosophically weak.
Consider what actually produces a creative work. The novelist who writes a successful book draws on the entire history of literature she has consumed, the cultural conversations she has participated in, the language that was collectively developed over centuries, the narrative structures and genre conventions that pre-exist her contribution, and the social experiences that provide her material. Her contribution is real — she synthesizes, selects, transforms, and adds. But the transformation is incremental, not originative. The romantic author who creates ex nihilo from pure individual genius does not exist and has never existed.
AI makes this structure visible by producing competent creative outputs that are transparently derivative of training data. The AI system drawing on billions of images to produce a new one is doing something structurally similar to what a human artist does — drawing on extensive exposure to existing work — but in a form that cannot be dressed up in the romantic author ideology. The question is whether the human artist's version is categorically different in morally relevant ways, or whether the difference is one of degree.
The AI authorship crisis reveals that intellectual property frameworks were built on an idealized theory of individual creative origination that has always been empirically false — and the appropriate response is not to extend or modify these frameworks but to redesign them around an honest theory of what intellectual property is for.
The utilitarian case for copyright — the dominant theory in US law, as expressed in the Constitution's Progress Clause (Article I, Section 8, Clause 8) — is that copyright incentivizes creative production by allowing creators to capture the value of their work. This is a pragmatic argument about incentive structures, not a natural rights claim. And it has always been empirically contested: most creative work throughout history has been produced in the absence of copyright protection, and the empirical evidence on how copyright duration and scope affect creative output is mixed at best.
When evaluated on utilitarian grounds, AI-generated content raises the question: do existing copyright protections incentivize the production of work that would not otherwise exist? For human creators, the answer is plausibly yes — though how much protection is needed is genuinely uncertain. For AI systems, the question is different: what incentive structure optimally promotes the creation and deployment of valuable AI creative systems, and does copyright protection for AI outputs serve that goal?
Section 3 — The Strongest Counterargument
The strongest defense of the distinction between human and AI creativity — and of copyright law's historical focus on human authors — does not rely on the romantic author ideology. It is a labor-based argument with deep roots in Lockean political philosophy: human creators invest effort, skill, and time in their work, and this investment generates a moral claim to benefit from its results. The AI system that generates a book in thirty seconds has not invested labor in the morally relevant sense; its computational costs belong to its operators, not to some form of creative agency.
This argument grounds copyright in a theory of desert that most people find intuitive: you deserve the fruits of your labor, and creative work is paradigmatically your labor. AI-generated work is not the AI's labor (it has no interests that deserve protection) and is not straightforwardly the operator's labor (they did not do the creative work). The labor theory suggests that AI-generated works fall naturally into the public domain.
Furthermore, a system that grants copyright protection to AI-generated works would systematically disadvantage human creators — who produce slowly, expensively, and with limited scale — in competition with AI systems that produce rapidly, cheaply, and at scale. The resulting marketplace would drive human creators out of professional creative work, not because AI is better but because it can undercut on cost while producing outputs that are "good enough" for most commercial purposes. The loss would be real: the diversity of perspective, the depth of human experience, the originality that comes from genuine engagement with life rather than with training data.
Section 4 — Synthesis
Both positions contain important truths. The romantic author critique is correct that human creativity is not ex nihilo origination, and that copyright frameworks built on that mythology have always been philosophically unstable. But the labor theory and the competitive dynamics argument correctly identify that there is something morally and practically important about the distinction between human and AI creative work — a distinction that a purely utilitarian framework struggles to capture.
The synthesis: redesign IP frameworks around an explicit theory of creative ecosystems rather than individual authorship. The goal of IP policy should be to sustain the conditions under which diverse, high-quality creative work continues to be produced — which means protecting human creators' ability to earn from their work, ensuring that AI training does not simply extract value from human creative labor without compensation, and avoiding the monopolization of AI creative capability by large incumbents. These goals are achievable with frameworks that do not depend on the romantic author ideology but that are explicit about the interests at stake.
Section 5 — Practical Implications
For tech workers and founders building in the content and AI space, several practical orientations follow from this analysis.
Stop pretending that training data issues are settled. The legal landscape is genuinely unsettled, and the philosophical questions underlying the legal disputes are genuinely contested. Building AI creative systems on the assumption that training on all publicly available data is clearly legal and morally unproblematic is both an epistemic error and a failure of practical risk management.
Engage with licensing frameworks seriously. The emerging ecosystem of licensed AI training data — opt-in agreements, revenue sharing arrangements, attribution systems — represents an attempt to resolve the training data compensation problem in a way that serves both AI developers and creators. Building on this infrastructure is both ethically more defensible and strategically more sustainable than relying on legal ambiguity.
Be transparent about AI contribution to your outputs. The norm of disclosure is becoming both legally required in some jurisdictions and socially expected in professional contexts. Getting ahead of this norm rather than behind it builds trust and avoids the reputational damage of being caught concealing AI generation.
Finally, participate in the policy debate with genuine intellectual engagement rather than pure advocacy for your immediate interests. The IP frameworks that will govern AI creative systems for the next generation are being shaped now, and the outcome matters enormously for both the creative industries and the AI sector. Contributing honestly to that debate, including acknowledging the genuine costs of current legal positions for human creators, is both more ethical and likely more effective than advocacy alone.
— iBuidl Research Team