Protecting Creative Work from AI: Essential 2026 Copyright & Safeguarding Strategies
Safeguarding Creative Work in the Era of Artificial Intelligence: A Comprehensive Guide to Digital Protection and Legal Strategy
The proliferation of artificial intelligence has fundamentally challenged traditional frameworks for protecting creative work, creating an unprecedented landscape where artists, writers, designers, musicians, and content creators must navigate complex legal territories, technological vulnerabilities, and evolving regulatory standards. As generative AI systems become increasingly sophisticated at replicating artistic styles, synthesizing content, and generating derivative works, the stakes for creators have never been higher—necessitating comprehensive protection strategies that combine legal frameworks, technological safeguards, contractual clarity, and proactive enforcement mechanisms. Understanding both the current threats to creative intellectual property and the emerging tools for protecting it has become essential knowledge for anyone engaged in creative production, whether at individual, small business, or enterprise scales.
The challenge is multifaceted: creators must simultaneously navigate a legal landscape where copyright protection for AI-generated works remains unclear and contested, implement technical solutions that prevent unauthorized use of their work for AI training, establish contracts that clarify ownership and rights when AI tools are involved in creative processes, and respond to evolving regulatory requirements across different jurisdictions that increasingly mandate transparency about AI-generated content. The Copyright Office's January 2025 report affirmed that works created entirely by AI cannot be copyrighted because copyright law requires human authorship—a principle that, while protecting human creators from AI competing for IP protection, also creates ambiguity when humans and AI collaborate in creative work. Simultaneously, AI companies have scraped billions of copyrighted works from the internet without permission or compensation to train their models, generating legal disputes that remain largely unresolved.
Understanding Copyright in the AI Era: Legal Foundations and Current Ambiguities
The Human Authorship Requirement and Its Implications
The cornerstone of copyright protection in the contemporary AI landscape remains the human authorship requirement—the principle that copyright law protects only works created by human beings, not machines. The U.S. Copyright Office has consistently rejected applications for copyright protection on works created entirely by AI, most prominently in its refusal to register Stephen Thaler's AI-generated artwork, a refusal a federal court upheld in Thaler v. Perlmutter (2023), affirming the fundamental principle that legal authorship and copyright eligibility require demonstrable human creative input. This stance reflects a philosophical position: copyright exists to incentivize and reward human creativity; machines, lacking intentionality and agency, cannot serve as copyright authors regardless of how impressive their outputs appear.
However, the Copyright Office's 2025 Report introduced crucial nuance to this principle by establishing that hybrid works combining human and AI contributions may receive copyright protection for the human-created elements, contingent on those contributions being substantial, independently copyrightable, and demonstrably distinct from AI-generated portions. The critical distinction involves the concept of "sufficient human control over the expressive elements" of works—a high bar that simple prompts alone cannot meet, regardless of their sophistication. As the Copyright Office clarified, merely inputting a prompt into an AI system and allowing it to generate content does not constitute sufficient creative control for copyright eligibility; instead, copyright protection requires that humans make substantive creative decisions about the work's form, meaning, or expression.
The implications for creators are significant but nuanced. A designer who uses AI image generation tools to brainstorm initial concepts, then substantially modifies, combines, and creatively arranges those outputs into a cohesive work may secure copyright protection for their contributions, though not for the unmodified AI-generated portions. A writer who prompts an AI system to generate draft content, then rewrites entire sections, restructures arguments, and substantially modifies language to reflect their unique voice may claim copyright in the finished product. Conversely, a person who inputs a detailed prompt into an AI art generator and publishes the unmodified output cannot copyright that work, because the Copyright Office will recognize that the AI system, not the human, made the expressive creative choices.
The Derivative Works Problem and Training Data Issues
A parallel crisis for creators centers on the unauthorized use of copyrighted works for AI training without compensation or permission. Generative AI models are trained on vast datasets scraped from the internet containing billions of copyrighted works—photographs, paintings, written text, music, software code—used without the permission of copyright holders or artists. OpenAI, the company behind ChatGPT and DALL-E, has acknowledged that training cutting-edge AI models using only non-copyrighted material is essentially impossible, given copyright's near-universal coverage of human-created expression. Rather than obtaining permissions, the company instead offered an opt-out process: creators could technically prevent their works from being used in future DALL-E training, but only after the models incorporating their work already exist and generate revenue.
This opt-out approach exemplifies a fundamental imbalance in creator protections. The burden falls entirely on copyright holders to identify that their work has been scraped, locate the mechanism for opting out, and take affirmative action to prevent future inclusion—an administratively impossible task for individual artists, particularly when content is redistributed widely across platforms. Once a model is trained on copyrighted works, those works cannot be removed from the training data; retraining requires substantial computational and financial resources that companies have shown little willingness to undertake. Moreover, opt-out schemes rely on metadata that can be easily stripped or bypassed by actors willing to ignore legal norms. The European Union's AI Act and proposed UK reforms have adopted opt-out approaches, but critics argue these represent empty compromises favoring tech companies over creators.
The derivative works problem is further complicated by current legal ambiguity around whether training AI models on copyrighted works constitutes copyright infringement. Some legal scholars and courts have suggested that the "fair use" doctrine might protect this training activity as transformative use that doesn't compete economically with original works. Others argue that wholesale reproduction of copyrighted material for model training, regardless of how the resulting model transforms outputs, exceeds fair use boundaries. Multiple lawsuits remain pending in U.S. courts seeking to establish whether AI companies committed copyright infringement through their training practices. Meanwhile, creators whose work appears in training datasets have limited recourse until courts clarify the legal landscape—which may take years given the complexity and novelty of these issues.
Geographic Variations and Emerging Precedent
The legal landscape varies substantially across jurisdictions, with implications for international creators. In China, courts have found that copyright protection can apply to AI-generated images if humans demonstrate creative effort and exert sufficient control over the generation process. The Beijing Internet Court's September 2025 ruling established that while copyright can potentially protect AI-generated images, creators must document their creative process, retain records of AI prompts and modifications, and demonstrate that their contributions constitute genuine originality rather than simple output selection. This evidence requirement is particularly challenging because it assumes creators maintain detailed generation logs—a practice many AI platforms and users do not follow.
The disparity between jurisdictions creates strategic complexity for international creators. A work that qualifies for copyright protection in one jurisdiction might lack protection in another; conversely, a work created with AI assistance that would not qualify for copyright protection in the United States might receive protection in China or other jurisdictions adopting different standards. This geographic arbitrage creates opportunities for sophisticated players to navigate jurisdictions strategically while disadvantaging individual creators who cannot afford multi-jurisdictional legal strategy.
Technical Protections: Watermarking, Metadata, and Content Authenticity
Digital Watermarking: Current Capabilities and Limitations
Digital watermarking has emerged as the primary technical tool for asserting ownership of creative works and combating unauthorized AI training and AI-generated deepfakes. Invisible watermarks embed cryptographic patterns, identifiers, or metadata directly into digital content—images, audio, video, text—that remain detectable even after the content undergoes typical transformations like compression, cropping, or minor editing. OpenAI's watermarking system for DALL-E 3 images achieves approximately 98% detection accuracy even after standard image edits, providing technically viable proof of AI generation and source attribution.
Google's SynthID system represents a sophisticated evolution of watermarking, embedding invisible identifiers into both AI-generated images and text without perceptibly altering the content's appearance or quality. These embedded watermarks survive minor manipulations and processing while adding little overhead in file size or computational cost. The advantage of invisible watermarking over visible watermarks is substantial for creators: visible watermarks like logos protect against casual misuse but significantly reduce commercial viability by detracting from aesthetic quality and often appearing unprofessional. Invisible watermarks achieve protection without compromising content value.
However, watermarking technology faces critical vulnerabilities that creators must understand. Adversarial attacks and sophisticated editing tools can remove or degrade watermarks without noticeably degrading visible content quality. Diffusion-based image editors, for instance, can intelligently regenerate portions of watermarked images, stripping away embedded markers while preserving semantic content and aesthetic appeal. Machine learning researchers are exploring adversarial watermarking approaches where embedded marks are designed to survive attempted removal, but this remains an active area of research where detection and evasion capabilities race each other. Additionally, watermarks prove less effective for text-based content, where the combination of transformative AI models, paraphrasing, and minor rewording can obscure original sources entirely without requiring explicit watermark removal.
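To make the embed-and-detect principle concrete, here is a toy least-significant-bit (LSB) watermark in Python. This is a deliberately simplified sketch: production systems like SynthID use learned, perceptually tuned embeddings designed to survive compression and editing, whereas an LSB mark like this one is destroyed by almost any re-encoding.

```python
# Toy LSB watermarking on raw pixel bytes: hides one watermark bit in the
# least-significant bit of each cover byte. Illustrative only; not robust.

def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Hide `mark` in the LSBs of `pixels` (one bit per cover byte)."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover data too small for watermark")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite the LSB only
    return out

def extract_watermark(pixels: bytearray, length: int) -> bytes:
    """Recover `length` bytes of watermark from the LSBs."""
    mark = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        mark.append(byte)
    return bytes(mark)

cover = bytearray(range(256)) * 4              # fake 1024-byte "image"
marked = embed_watermark(cover, b"OWNER:me")
assert extract_watermark(marked, 8) == b"OWNER:me"
```

Because the change to each byte is at most one bit, the marked data is visually indistinguishable from the original, which is exactly the property that makes such marks both attractive to creators and easy for adversarial editing tools to strip.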
The EU AI Act, which entered into force in August 2024 with its obligations phasing in through 2026, mandates watermarking or other machine-readable marking for AI-generated content distributed within the EU. China's Cyberspace Administration has proposed similarly stringent requirements for platform-enforced watermarking with both visible and hidden components. These regulatory mandates represent a significant development: they shift watermarking from optional protection mechanism to compliance requirement, creating incentives for widespread adoption and standardization. However, enforcement of watermarking mandates depends on detecting violations and holding platforms accountable—capabilities that regulatory agencies are still developing.
Metadata Architecture and Provenance Tracking
Metadata—structured information about content creation, ownership, rights, and usage—provides an infrastructure layer for creator protection that extends beyond individual pieces of content to enable systemic rights management. Traditional metadata in creative works includes standard elements: creator name, creation date, copyright statement, and contact information. This foundational metadata creates persistent connections between content and ownership that survive file redistribution and persist across platforms.
The Creation Rights protocol exemplifies an emerging approach to dynamic metadata architecture designed specifically for the AI era. Rather than treating metadata as static information recorded once at creation, Creation Rights embeds active, machine-readable, and continuously updatable licensing and attribution logic directly into file structures. This enables AI systems, content platforms, and digital marketplaces to query real-time licensing status before ingesting, training on, or publishing creative works. If a creator updates licensing terms, revokes permissions, or reports unauthorized use, that information propagates through the metadata layer, enabling platform systems to respond automatically without requiring manual intervention.
The architecture combines multiple protective layers: an identity layer cryptographically linking content to verified creator information and legal entities; a legal layer encoding licensing terms, usage restrictions, and remix allowances; an attribution layer specifying how credit must be provided; and an enforcement layer that enables automated takedowns, DMCA notices, and dispute resolution. Most significantly, Creation Rights infrastructure enables AI companies to source legally cleared training data with verified lineage, preventing inadvertent copyright violations and establishing clear provenance trails for model training. Developers building applications on top of Creation Rights can implement compliance-aware systems that automatically respect embedded licensing constraints.
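A layered rights record of this kind can be sketched as a signed, machine-readable document. The field names and the HMAC signing scheme below are illustrative assumptions, not the actual Creation Rights format; they simply show how identity, legal, and attribution layers can be bound to a content hash so platforms can verify both the terms and the content they apply to.

```python
# Hypothetical machine-readable rights record in the spirit of layered
# protocols like Creation Rights. Field names and signature scheme are
# assumptions for illustration, not the real protocol.
import hashlib
import hmac
import json

CREATOR_KEY = b"creator-secret-key"   # stand-in for a real signing key

def make_rights_record(content: bytes, license_terms: str) -> dict:
    record = {
        "identity": {"creator": "Jane Doe", "contact": "jane@example.com"},
        "legal": {"license": license_terms, "ai_training": "prohibited"},
        "attribution": {"credit_line": "\u00a9 Jane Doe"},
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(CREATOR_KEY, payload, "sha256").hexdigest()
    return record

def verify_rights_record(content: bytes, record: dict) -> bool:
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        record["signature"],
        hmac.new(CREATOR_KEY, payload, "sha256").hexdigest())
    content_ok = record["content_sha256"] == hashlib.sha256(content).hexdigest()
    return sig_ok and content_ok

work = b"original artwork bytes"
rec = make_rights_record(work, "CC BY-NC 4.0")
assert verify_rights_record(work, rec)
assert not verify_rights_record(b"tampered bytes", rec)
```

A compliance-aware ingestion pipeline could call the verification step before training on or republishing a file, refusing any content whose record fails the check or whose legal layer prohibits the intended use.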
However, metadata-based protections face fundamental vulnerabilities. Metadata can be easily stripped from digital files during copying, reprocessing, or redistribution—a problem particularly acute given that most users lack awareness of metadata and do not intentionally preserve it. Text-based content remains particularly vulnerable since metadata cannot be meaningfully embedded in plain text files. Even with encryption and cryptographic verification, determined bad actors can bypass metadata constraints for personal use, though such constraints still raise meaningful barriers to large-scale and commercial exploitation.
Blockchain-Based Copyright Registration and Enforcement
Blockchain technology offers a decentralized, immutable infrastructure for registering creative works and tracking ownership over time, complementing traditional copyright registration with transparent, cryptographically secure records. An immutable database on blockchain provides tamper-proof evidence of creation dates, ownership claims, and licensing arrangements that survives platform failures, data center outages, or disputes between parties. Smart contracts can automate royalty distribution whenever licensed content is used or commercially exploited, eliminating intermediaries and ensuring direct creator compensation.
The benefits include decentralized control where creators maintain ownership records independent of centralized platforms; transparent copyright transfer systems enabling verifiable chains of ownership across multiple transactions; and automated anti-piracy solutions where blockchain records enable rapid detection of unauthorized copies and can trigger automatic enforcement actions. For creators building historical archives or concerned about long-term sustainability, blockchain registration provides protection against data loss, platform obsolescence, or temporal gaps in documentation.
The practical adoption of blockchain copyright protection remains limited, however, due to several factors: lack of mainstream awareness and adoption, uncertainty about legal recognition of blockchain-registered rights in courts, environmental costs of energy-intensive blockchain networks, and the reality that most piracy and unauthorized use occurs through straightforward copying rather than sophisticated blockchain exploitation. While blockchain provides technical infrastructure for immutable records, it does not prevent copying, does not automatically enforce copyrights, and depends on legal systems recognizing blockchain records as valid evidence—a status that remains contested in many jurisdictions.
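The core registration idea, stripped of any particular blockchain, is a hash chain: each registration entry commits to the previous one, so retroactively altering or back-dating a record invalidates every later hash. The sketch below is a local, single-party toy that shows only this tamper-evidence property; a real system would distribute the ledger and anchor it to an external timestamp authority or public chain.

```python
# Minimal hash-chain sketch of copyright registration: each entry commits
# to the previous entry's hash, making back-dating detectable. Local toy
# only; real systems distribute the ledger across many parties.
import hashlib
import json
import time

def add_entry(chain: list, work_hash: str, owner: str) -> None:
    prev = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {
        "work_sha256": work_hash,
        "owner": owner,
        "timestamp": time.time(),
        "prev_hash": prev,
    }
    # Hash the entry body (entry_hash is added after this digest is taken).
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

def chain_is_valid(chain: list) -> bool:
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["entry_hash"] != digest:
            return False
        prev = entry["entry_hash"]
    return True

ledger = []
add_entry(ledger, hashlib.sha256(b"song master bytes").hexdigest(), "Jane Doe")
add_entry(ledger, hashlib.sha256(b"album art bytes").hexdigest(), "Jane Doe")
assert chain_is_valid(ledger)

ledger[0]["timestamp"] = 0          # attempt to back-date the first record
assert not chain_is_valid(ledger)
```

Note what the structure does and does not buy: it proves that a given hash existed in a given order, but it cannot stop anyone from copying the underlying work, which is exactly the enforcement gap described above.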
Contractual Safeguards: Clarifying Rights When AI Is Involved
AI Development and Employment Agreements
Organizations and individuals engaging AI developers, contractors, or employees to create or deploy AI systems must establish explicit contractual clarity around intellectual property ownership, because default copyright law provides insufficient guidance when AI systems produce outputs. Employment agreements should include IP assignment clauses explicitly stating that any works, software, or other intellectual property created by employees in the course of their employment—including those created with AI assistance—automatically vest in the employer upon creation. For contractors and external developers, separate independent contractor agreements should specify that any AI-generated outputs or improvements to AI systems vest in the hiring party, with clear carve-outs preserving contractor ownership of pre-existing IP and of general methodologies not specific to the engagement.
The risk of ambiguity is substantial. If an employee creates marketing content with AI assistance or an AI system generates code that becomes part of company products, disputes about ownership could jeopardize commercialization or create liability if competing claims emerge. Courts are reluctant to guess parties' intentions, so clear contractual language explicitly addressing AI-generated outputs is essential—not as aspirational protection but as pragmatic necessity.
Licensing Agreements for AI Platforms and Services
When organizations or creators use third-party AI platforms to generate content, the platform's terms of service and licensing agreements determine whether users own resulting outputs, whether the platform retains rights to use outputs for future model training, and what restrictions apply to commercial use. Many AI platforms make ambiguous or contradictory promises: they may claim that users "own" outputs while simultaneously reserving rights to use outputs and user inputs for model improvement.
OpenAI's terms of service exemplify this ambiguity. While users may use DALL-E outputs commercially, OpenAI retains the right to use prompts and outputs for research, development, and improvement of its services. This means that while a user has no exclusive rights to prevent others from generating similar images using the same prompts, OpenAI has retained rights to analyze the user's prompts and potentially feed that information back into model training. For creators concerned about style mimicry or competitors generating similar work, this creates substantial competitive risk.
Forward-thinking agreements should explicitly confirm that:
The customer/user owns resulting AI outputs and any resulting IP rights (or receives an exclusive, perpetual license if the vendor retains underlying rights)
The vendor is prohibited from using customer outputs or customer-provided data for training or improving models
The vendor passes through any upstream licensing restrictions from third-party models incorporated into the platform, ensuring customers understand residual obligations and restrictions
The agreement clearly distinguishes between "Provider AI" (vendor-owned systems), "Third-Party AI" (externally licensed models), and "Customer AI" (customer-provided systems) with distinct responsibility allocations
For creators using AI tools, careful review of terms of service is essential before publishing or commercially exploiting outputs. Many creators have discovered too late that platforms they used for content generation retain broad rights to commercialize their work or use it for training, fundamentally undermining the creator's competitive position.
Contractual Language for Attribution and Disclosure
Contracts involving AI-assisted creation should mandate clear disclosure of AI involvement and specify attribution requirements, particularly as regulations increasingly require labeling AI-generated content. Employment and contractor agreements should specify whether AI-assisted work must be labeled as such and whether the company assumes liability for any copyright infringement claims arising from AI-generated content.
Licensing agreements should clarify attribution requirements: whether the AI platform must be credited, whether users must label content as AI-generated, and what happens if downstream users modify or redistribute AI-generated content without maintaining attribution. As regulatory requirements for AI labeling intensify—particularly as the EU AI Act's transparency obligations take effect—contractual agreement about who bears responsibility for labeling compliance becomes essential.
Right of Publicity: Protecting Identity Against Deepfakes and Synthetic Media
Legal Framework and Deepfake Protections
The right of publicity—an individual's exclusive right to control commercial use of their name, image, likeness, and persona—provides legal recourse when deepfakes are created or used without consent for commercial exploitation. Landmark cases establish that individuals can prevent commercial use of their identity even in transformative contexts: Midler v. Ford held that sound-alike singers cannot commercially mimic a well-known artist's voice; Carson v. Here's Johnny Portable Toilets established that using a famous person's catchphrase commercially violates publicity rights.
Deepfakes created to exploit a person's identity for commercial purposes—such as using AI to generate video of a celebrity endorsing products they never endorsed—typically constitute violations of publicity rights and create grounds for legal action demanding injunctive relief (stopping the deepfake's distribution) and damages. However, deepfakes created for expressive purposes—parody, satire, artistic commentary, or criticism—receive protection under the First Amendment even when they use someone's likeness without consent, creating a complexity where freedom of expression protections sometimes shield deepfake creators from publicity liability.
The U.S. Copyright Office's July 2024 report on digital replicas concluded that existing publicity, privacy, and trademark law are insufficient to address the full range of deepfake harms and called for new federal legislation specifically addressing deepfake creation, use, and distribution. Currently, protections vary significantly by state—some states have enacted legislation specifically criminalizing deepfakes used for non-consensual intimate imagery, while others lack specialized protections. The inconsistency creates gaps where a deepfake that is illegal in one state may be unregulated in another, and federal protections for individuals whose identities are exploited nationally remain lacking.
Proactive Protection Strategies
Creators and public figures can strengthen protections against unauthorized deepfakes through several proactive approaches:
Contractual protection through talent agreements, performance contracts, and licensing arrangements can specify restrictions on AI use of a performer's image and voice, establish damages for unauthorized deepfakes, and require disclosure of whether content is synthetic. These contracts typically include explicit prohibitions on deepfakes, AI training on the performer's likeness, and requirements that derivative or transformed uses receive prior written consent.
Trademark registration of distinctive personal characteristics—logos associated with public figures, signature catchphrases, distinctive visual presentations—can provide some protection against commercial deepfake exploitation through false endorsement claims under the Lanham Act, which prohibits deceptive uses of marks that might confuse consumers.
Legal action and enforcement through right of publicity claims, false endorsement claims, or defamation suits provides remedies after deepfakes are created, though litigation is expensive, slow, and often follows substantial damage to reputation or commercial interests. As an emerging area, courts are still developing frameworks for evaluating deepfake cases, creating litigation risk and uncertainty.
Biometric and identity-based metadata systems like those proposed in the Creation Rights protocol can register individuals' NIL (Name, Image, Likeness) information and enable platforms to automatically detect unauthorized use of registered identities, triggering alerts and enforcement mechanisms. As these systems mature, they may provide more scalable protection than relying on individual litigation.
Documenting Creative Process and Establishing Ownership
Recording Evidence of Human Creativity
Given that copyright eligibility requires demonstrating human creative input, creators must maintain detailed documentation of their creative process, particularly when using AI tools. This documentation should include:
Creation records and process documentation: Timestamps showing when content was created, records of multiple iterations and revisions, evidence demonstrating creative choices made during development, documentation of how AI outputs were selected, modified, or combined. For visual creators, maintaining multiple draft versions showing evolution from initial concept through final output provides evidence of substantial creative contribution.
Input documentation: For creators using AI tools, preserving records of prompts, parameters, and inputs provided to AI systems demonstrates the creative direction exercised over AI outputs. If a visual artist experiments with dozens of prompts, carefully curates the results, and substantially modifies selected outputs, this documentation proves creative control—the opposite of simply running one prompt and publishing the result unmodified.
Modification and refinement records: Evidence showing where human creators edited, refined, combined multiple AI outputs, or applied artistic judgment after AI generation demonstrates that expressive creative choices originated with humans rather than machines. Before-and-after comparisons showing how unmodified AI output differs from the finished work effectively establish substantive human contribution.
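The record-keeping described above can be made cheap and tamper-evident with an append-only process log: each saved draft gets a content hash, a timestamp, and a short note about the human decision involved. The helper below is a minimal sketch of that practice; the file name and note wording are illustrative.

```python
# Minimal append-only process log: one JSON line per saved draft, with a
# content hash, UTC timestamp, and a note describing the human decision.
import hashlib
import json
import os
import tempfile
import time

def log_draft(log_path: str, draft: bytes, note: str) -> dict:
    entry = {
        "sha256": hashlib.sha256(draft).hexdigest(),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "note": note,  # e.g. "re-composited AI output #3 with hand-drawn layer"
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")   # append-only JSON lines
    return entry

log_path = os.path.join(tempfile.mkdtemp(), "process_log.jsonl")
e1 = log_draft(log_path, b"draft v1 bytes", "initial AI-assisted concept")
e2 = log_draft(log_path, b"draft v2 bytes", "rewrote section 2 by hand")
assert e1["sha256"] != e2["sha256"]     # distinct drafts, distinct hashes
```

Kept alongside the prompts and intermediate files themselves, a log like this supplies exactly the contemporaneous iteration evidence that post-hoc descriptions cannot.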
The Beijing Internet Court's ruling demonstrates the importance of this documentation: the court rejected copyright claims when a creator could not produce actual generation process records, only post-hoc descriptions of what they claimed to have created. This suggests that creators relying on oral descriptions or memories of their creative process will struggle to establish copyright eligibility in disputed claims.
Copyright Registration and Legal Infrastructure
In the United States, registering copyright with the U.S. Copyright Office creates a public record establishing ownership at a specific date, provides eligibility for statutory damages and attorney fees in successful infringement suits, and creates a presumption of copyright validity that shifts the burden of proof to defendants. While copyright technically exists upon creation without registration, registration provides crucial legal advantages that become essential in enforcement contexts.
Creators should register works before publication or shortly thereafter, and should prepare clear statements describing their human contributions when registering AI-assisted works. The Copyright Office has indicated that applications for hybrid works will be evaluated case-by-case, making detailed descriptions of human involvement essential for approval.
Responding to Infringement and Enforcing Rights
Identification and Documentation of Unauthorized Use
Creators discovering unauthorized use of their work must first clearly document the infringement, preserving evidence of when and where unauthorized use occurred, how their work was used, whether compensation was involved, and the extent of distribution. Screenshots, downloads, archive.org captures, and dated records create contemporaneous evidence superior to later recollections. For AI-related infringements specifically, documenting how AI systems are using or generating content similar to original work strengthens claims.
Cease-and-Desist Notices and Takedown Requests
Upon identifying infringement, creators typically send cease-and-desist letters demanding that infringing parties stop unauthorized use, provide accounting of past damages, and remove infringing content from distribution channels. These letters formally establish awareness of infringement, demonstrate that the creator takes their rights seriously, and often resolve disputes without litigation when recipients recognize legal jeopardy.
For online content, DMCA takedown notices (invoking the Digital Millennium Copyright Act) require platforms to remove infringing content upon notice, without requiring the creator to prove infringement in court—though the process is often slow and platforms sometimes ignore valid notices. The Creation Rights protocol includes automated takedown and dispute systems enabling creators to generate formal DMCA notices programmatically and escalate persistent violations to administrative or legal proceedings.
Litigation and Damages
When negotiation or administrative process fails, creators may pursue copyright infringement litigation, seeking injunctive relief (court orders stopping infringement) and monetary damages. Damages can include: actual damages (profits lost by the creator due to infringement); infringer's profits (revenue the infringer earned through unauthorized use); or statutory damages ($750 to $30,000 per infringement, or up to $150,000 for willful infringement) for registered works. Litigation is expensive and slow, typically requiring 18 to 36 months before resolution, making it practical primarily for creators with substantial damages or precedent-setting circumstances.
Building a Comprehensive Protection Strategy
Layered Approach to Creator Security
Effective creator protection requires layered approaches combining legal rights, technical safeguards, contractual clarity, and proactive enforcement. No single mechanism provides complete protection, but combined strategically they create multiple barriers to unauthorized use:
Legal protections establish the foundational right to exclude others from unauthorized use and provide remedies when infringement occurs. Copyright registration, publication of clear ownership statements, and maintenance of documentation proving originality all strengthen legal position.
Technical safeguards including watermarking, metadata embedding, and potentially blockchain registration create detection mechanisms that identify unauthorized use and deter casual infringement. While sophisticated bad actors can circumvent technical protections, they raise the cost and complexity of unauthorized use sufficiently to discourage opportunistic infringement.
Contractual mechanisms clarify ownership when multiple parties contribute to creative work, establish responsibilities for AI platform use, and specify remedies for breach. Well-drafted contracts eliminate ambiguity that parties might exploit in disputes.
Regulatory compliance with emerging mandates for AI labeling, watermarking, and transparency—particularly the EU AI Act's phased requirements—positions creators advantageously in jurisdictions where non-compliance creates liability for platforms and developers.
Enforcement commitment through active monitoring, rapid response to infringement, and willingness to pursue damages signals that the creator takes their rights seriously, deterring future infringement and creating precedent for damages that increases the cost of non-compliance.
Geographic Adaptation and Multi-Jurisdiction Strategy
Creators operating internationally or concerned about global distribution must adapt protection strategies to different jurisdictions' legal standards, regulatory requirements, and enforcement capabilities. European creators benefit from the EU AI Act's watermarking mandates and stricter AI governance; U.S. creators must navigate an uncertain copyright landscape where key questions about AI training and authorship remain legally unsettled; Chinese creators face more stringent AI content regulation but clearer standards for copyright eligibility in AI-generated works.
Sophisticated creators and organizations maintain multi-jurisdictional protection strategies: registering copyrights in major markets where their work is distributed; selecting watermarking and metadata standards aligned with regulatory requirements in those markets; including contractual provisions addressing compliance with relevant jurisdictions' laws; and understanding where to pursue enforcement when infringement occurs.
Conclusion: Proactive Protection in an Uncertain Landscape
The convergence of advancing AI capabilities, unclear copyright standards, rapid regulatory evolution, and technological opportunities for protection creates unprecedented challenges and possibilities for creators. Rather than viewing this landscape as hopeless or paralyzingly complex, sophisticated creators recognize the strategic advantages of early action: documenting creative processes, registering copyrights, implementing technical safeguards, carefully negotiating contracts, and monitoring for infringement position creators to maintain control and value in their work despite AI disruption.
The legal and technical infrastructure for creator protection is actively evolving. Regulatory mandates for AI transparency and watermarking, emerging metadata protocols like Creation Rights, blockchain-based registration systems, and increasing enforcement by creators through litigation are collectively building a more creator-friendly ecosystem. Creators who understand the current landscape—its weaknesses and strengths, legal ambiguities and emerging clarity, technical limitations and opportunities—can build layered protection strategies that secure their creative work and establish sustainable paths to monetization even as AI reshapes what's possible in creative industries.