In early March, I gave a presentation on AI legal developments. One of the attendees astutely pointed out that the current legal framework seems to focus on B2C use cases. I agreed. The focus is consumer protection. About 10 days later, I spoke at a livestreamed event on AI contracting. Preparing for it gave me an opportunity to reflect on what is truly different about AI contracting. Ultimately, what I found helpful was to view it as part of a continuum, tracing the developments in technology contracting over the past 25 years before turning to the specific question.
In 2000, I worked on a set of transactions, including a traditional on-premise software distribution license agreement in the stand-alone field of use. A related transaction involved an exclusive license to the software in the application service provider (ASP) and service bureau fields of use. ASP consisted mainly of accessing software through a web interface. Back in the day, this was viewed as quite revolutionary. Service bureaus had been around since at least the 1970s. The term ASP never really caught on. In the mid-2000s, on-demand software services began to gain momentum. On-demand quickly morphed into software-as-a-service (SaaS) and other computing-as-a-service offerings by the late 2000s. By 2010, “cloud services” had become the dominant term for accessing distributed, on-demand computing services. In time, the forms of cloud services agreements began to coalesce. Put otherwise, the forms of agreement on vendor paper began to look a lot alike. Likewise, the forms on customer paper began to share many similarities. This increased the efficiency of contracting, and today negotiations arguably focus on a handful of well-understood contractual fault lines. We have seen a similar convergence in data processing agreements, a far cry from the world in 2018, when the General Data Protection Regulation became applicable.
For some technology transaction attorneys, the transition from on-premise software to cloud services was a significant one. In 2025, we may be at another turning point: the advent of AI contracting. To be clear, by “AI contracting” I do not mean an instantiation of agentic AI in which two AI agents, one for the service provider and another for the customer, negotiate an agreement based on parameters provided by their principals and then present a near-final draft for human review, confirmation, and execution. We are not there yet, but perhaps that future is not as far away as we think. Rather, AI contracting involves an agreement pursuant to which a vendor provides functions or features in a solution that are powered by AI and that involve models taking inputs provided by the customer, or by anyone on the customer’s behalf, and transforming those inputs into outputs, which may vary over time. One flavor of this could be an AI SaaS agreement.
For experienced technology transaction attorneys, there may be a healthy dose of skepticism about whether there are any meaningful differences between drafting and negotiating a contract for AI products or services and a traditional SaaS agreement. That skepticism is not unreasonable. At their core, both types of agreements involve the processing of data, which of course may include personal data. With both types of agreements, I would start with the product or services offering description to understand the features of the offering. For example, the offering could be a customer relationship management SaaS solution with embedded chatbot functionality. I would probably want to dig deeper into the chatbot use cases to understand whether it was for internal use only (such that, e.g., a customer service agent could query it to learn more about the account in question) or whether this functionality would be exposed to external users (including consumers), and then for what purposes. For either use case, I would expect documentation from the software developer describing the operation of the chatbot feature. I would want to understand likely inputs and expected outputs. I would want to understand what the chatbot had been trained on and the plan, if any, for additional training. In addition, I would want to understand where, geographically, the chatbot functionality would be deployed and used.
If the chatbot is answering typical customer support questions about the status of a non-discretionary customer service account, it is unlikely to constitute a “high-risk artificial intelligence system,” which, at least under the Colorado AI Act (SB 24-205), is “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.” In turn, under that Act, a “consequential decision” is one that “has a material legal or similarly significant effect on the provision or denial to any consumer of, or the costs or terms of: education enrollment or an education opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, health-care services, housing, insurance, or a legal service.”
Based on that analysis, I would not expect documentation from the vendor addressing the detailed requirements of the Colorado AI Act. For another type of AI solution, I might. This brings me to perhaps the most challenging aspect of AI contracting, namely the panoply of laws and regulations that have, or soon will have, bearing on AI products and services. At present, 35 bills with a nexus to AI have been introduced in California. For example, California AB 410 would make it “unlawful for any person to use a bot to communicate or interact with another person in California” unless the person discloses that it is a bot. The current law, on the books since 2019, prohibits the use of a bot “with the intent to mislead the other person about its artificial identity for the purposes of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.” Knowing which law might apply (and when) and analyzing the offering and its use(s) through that lens present obvious challenges. Practically, then, as the deployer of the chatbot functionality, the customer will be responsible for providing the appropriate disclosure, but the vendor will need to make apparent that this functionality exists and in what form.
Other aspects of AI contracting are notable. For example, there may be AI domain adaptation requirements, meaning that a model trained on one data distribution may not perform as well on a different domain or distribution; the short sketch below illustrates the underlying problem. Other issues in AI contracting involve model customization, ownership and use of customized models, ownership and licensing of system outputs, testing against real-world scenarios, AI explainability obligations, and transitioning services to another vendor with AI capabilities. For a given implementation, some of these may be more salient than others, and doing that analysis up front and appropriately will be our next task in the AI contracting journey.
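To make the domain-shift point concrete, here is a minimal, hypothetical sketch in Python (assuming the scikit-learn library is available). It fits a toy text classifier on invented banking-support messages, then scores it on held-out banking messages and on insurance messages whose vocabulary the model never saw; the out-of-domain score will typically be worse, which is the gap that contractual domain adaptation, testing, and documentation requirements are meant to address. All messages, labels, and results are invented for illustration and are not drawn from any real product.

    # Hypothetical illustration of domain shift; all data below is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Source domain: the distribution the vendor trained on
    # (toy banking-support messages).
    train_texts = [
        "my debit card was declined at the store",
        "please increase the credit limit on my card",
        "the atm charged me a duplicate fee",
        "set up a recurring transfer to my savings account",
        "dispute this unauthorized card transaction",
        "enroll me in paperless bank statements",
    ]
    train_labels = ["complaint", "request", "complaint",
                    "request", "complaint", "request"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(train_texts, train_labels)

    # Held-out messages from the same (banking) domain.
    in_domain_texts = [
        "the bank charged me an overdraft fee twice",
        "please mail me a replacement debit card",
    ]
    in_domain_labels = ["complaint", "request"]

    # Messages from a different (insurance) domain: the vocabulary is
    # largely unseen, so the model has little signal to rely on and its
    # predictions degrade toward chance.
    out_of_domain_texts = [
        "my claim adjuster denied the windshield repair",
        "add my spouse to the auto policy",
    ]
    out_of_domain_labels = ["complaint", "request"]

    print("in-domain accuracy:",
          model.score(in_domain_texts, in_domain_labels))
    print("out-of-domain accuracy:",
          model.score(out_of_domain_texts, out_of_domain_labels))

The sketch is deliberately trivial, but the contracting implication is not: if the customer’s actual domain differs from the training domain, the agreement may need to address retraining, fine-tuning, acceptance testing against the customer’s own data, or performance warranties scoped to the deployed domain.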