This article appeared in the April 10, 2026, print edition of the Phoenix Business Journal as part of the Journal's "Legal Pulse" column.
Artificial intelligence (AI) is embedded in the daily work of companies of every size, industry, and location. Your employees, vendors, partners, prospective acquirers, suppliers, and customers are all using AI, knowingly or unknowingly, in their job functions and core business models. Those habits collide with two issues that many traditional confidentiality agreements do not anticipate: (1) inadvertent breach of confidentiality obligations through use of public AI tools, and (2) ownership of AI outputs. Even recent confidentiality agreement templates generally do not contemplate AI, and because courts have not yet provided clear guidance on the implications of AI usage, businesses have little common or statutory law to rely on. Drafting thoughtful agreements is key to protecting your valuable IP assets and avoiding claims that you failed to protect the IP assets of others.
This article briefly summarizes the issues, the current legal landscape, and pragmatic moves you can make now to protect your valuable intellectual property and insulate your company from claims that you breached confidentiality obligations.
Reducing the Risk of Unintentional Disclosures
The default settings of the public-facing, free versions of ChatGPT, Gemini, Claude, and similar platforms allow those platforms to use your inputs to train their models. Users must take overt steps to instruct these tools not to use their inputs for training and development. Entering confidential information into such a tool means the information may be used to generate additional ideas, concepts, plans, strategies, or code, and that unrelated and unaffiliated third parties may view it; either result may constitute an unauthorized disclosure, even when the information was received under a confidentiality agreement.
Enterprise platforms (e.g., Microsoft 365 Copilot with Enterprise Data Protection, Gemini for Google Cloud, Amazon Bedrock, and Claude Enterprise) generally operate under data-processor terms and do not use prompts or content to train foundation models, and they therefore materially reduce the risk of unintentional breaches inherent in public or "free" tools. Most enterprise AI offerings include contractual guarantees that: (1) customer data is not used for training; and (2) data is isolated within the customer's tenant environment.
Confidential information should not be entered into non-enterprise or public AI systems. Employees using AI tools in their day-to-day operations may be tempted to use AI to evaluate, organize, and apply confidential information that a third party has shared. For example, an employee pastes a customer price list into ChatGPT to evaluate whether the pricing is consistent with market trends. Even if the prompt is later deleted, the disclosure may still violate the confidentiality agreement because the platform retains safety-monitoring logs.
Without proper education and guidance, your employees and the employees of your business partners may not appreciate the difference between public and enterprise tools, and may unknowingly make choices that put your company at risk of violating its confidentiality obligations.
Ownership of AI Output Produced by AI Tools
While there is an understandable focus on protecting the confidentiality of disclosed information, confidentiality agreements must now also address conflicting ownership claims that arise when a recipient of confidential information uses AI tools to create summaries, analyses, reports, recommendations, or code based, in whole or in part, on another party's confidential information.
Traditional "work product" language in vendor agreements may push AI outputs into the recipient's ownership. The issue is complicated because AI-generated content does not reliably qualify for protection under U.S. copyright law, which requires a human author. Outputs produced solely by AI may therefore not be protected as derivative works and may not be captured by "derivative works" language in a confidentiality agreement. Because derivative-work protections cannot be reliably assumed, careful lawyers should rely on contract language, not copyright law, to establish ownership. If ownership provisions are unclear, the receiving party may later assert that AI-generated outputs constitute its own work product, especially where its service agreements contain broad intellectual-property ownership clauses or ambiguous "work made for hire" language.
Vendors, consultants, developers, and even prospective acquirers may input your confidential information into an AI platform to produce summaries, analyses, drafts, recommendations, or code. The output of such an exercise may be valuable, novel, or an extension of your business plans and trade secrets. If your confidentiality agreement is silent, the recipient might assert that these AI-assisted outputs reflect its own independent work product or are owned under its general service-provider terms, and traditional "derivative works" clauses may not reach them because AI outputs may not qualify as derivative works under U.S. copyright law.
Implementation Steps
To avoid inadvertent breaches and protect your company’s intellectual property, confidentiality agreements should:
- Identify clear rules for employees and contractors (no uploads of confidential information to public AI).
- Authorize only enterprise or private deployments of AI tools with contractual no‑training guarantees.
- Prohibit the use of public chatbots with confidential information provided pursuant to a confidentiality agreement and further require recipients to obtain separate written authorization before inputting confidential information into a private or enterprise AI tool. This prevents both unauthorized disclosures and future ownership disputes.
- Specify that the disclosing party owns all outputs, whether created by humans or AI, that are based on, derived from, or generated using the disclosing party's confidential information.
- State that all return and destruction obligations apply to AI-generated analyses, drafts, summaries, recommendations, and code.
The evolution of AI tools requires confidentiality agreements to include explicit contractual language governing ownership, permitted uses, and destruction of AI‑generated outputs. Clear drafting prevents disputes over ownership, scope of use, and the treatment of materials generated using a company’s confidential information.
If you have questions about confidentiality agreements, please contact Joshua at josh.becker@gknet.com or (602) 530-8465.
About the Author
Joshua Becker is a forward-thinking adviser who anticipates how to best position his clients relative to regulatory changes, evolving market conditions, and the competitive landscape. He brings 20 years of experience in franchising and intellectual property law to clients of all sizes across a wide range of industries, including established and fast-growth franchisors, technology development companies, service providers, and distributors.