Reining in AI? The E.U. Act

             Artificial intelligence (“AI”) is all the craze. From pet shop owners to Fortune 500 companies, AI is being used to farm data and analyze the facial gestures of consumers. But concerns are growing. The U.S. Surgeon General, for instance, has suggested that a warning label be affixed to social media – which often uses AI. Against this backdrop, the E.U. recently passed an Act regulating AI. The Act was adopted on March 21, 2024.

            Among other things, the Act demarcates between AI and traditional software. It also sets forth jurisdictional contours for regulating AI “output” in the E.U. But can the line between AI and yesterday’s software really be clearly drawn? What are some of the difficult jurisdictional questions regarding AI? What about fair use of data? These matters are relevant for entrepreneurs and companies seeking to comply with the Act.

            Highlights of the Act and its differences from U.S. law were discussed during a recent presentation for the California Lawyers Association. This entry is a synopsis of that presentation and some of the legal issues it raised. Below is a hypo, followed by a discussion of some of those topics.

            I.          Hypo

           Where’s Waldo LLC (“Waldo”) is a California-based digital advertising company. Waldo helps companies develop their digital marketing strategies and content. The company licenses its technologies to various customers. It just got funded, and you are its Chief Operating Officer. Wonderful news! The company’s software – Find Waldo – and associated ChatGPT-based AI – Waldo AI – are marketing tools. The software manages private consumer data and learns from it to generate new categories of data that can be farmed for marketing purposes. Nowhere do you require licensees to encrypt or otherwise protect this data when using the software. Waldo AI transforms this data into enticing written marketing content for your business and assists customers in finding products they need. It also scrapes other websites to write stories and deliver subconscious messaging to consumers – including birdie sounds. To date, its conversion rate has been very successful.

          But Waldo just received a notice from the E.U. AI Office. Not so wonderful. In the Office’s view, you’ve been doing business in France since 2019. Turns out, your licensee, O’Neill Marketing (“O’Neill”), has been sublicensing Waldo’s technologies to marketing companies in France. According to the license between Waldo and O’Neill: the licensee cannot sublicense or use the technologies outside the U.S.; California law and exclusive jurisdiction apply; and Waldo disclaims all warranties of non-infringement, merchantability, and fitness for a particular purpose. Unbeknownst to Waldo, O’Neill has been using the software to manage some data of Parisian residents without their consent, ChatGPT to write stories, and an AI component to subliminally message consumers with pleasant pastry-eating sounds to get them to buy things they wouldn’t otherwise buy – like California-made gluten-free French bread.

            II.        Issues

                      A.            Jurisdiction

          Article 2 of the E.U. Act is clear. It gives the E.U. the right to regulate AI “[p]lac[ed] on the market . . . in the Union.” The Act further gives the E.U. jurisdiction over providers and users of AI systems that are located in a third country, where the “[o]utput produced by the AI system is used in the Union.”

         Some issues are raised by this language and the agreement above. Does it matter that the license restricts deployment? Is Waldo a “distributor” under the Act? And does foreseeability of the distribution of the product matter? In the example above, O’Neill was not authorized by the license to use Find Waldo or Waldo AI in the E.U.

          Finally, the parties’ agreement provides that California law and jurisdiction will apply. It remains an open question whether this provision will be enforced in contravention of E.U. law.

                      B.        Software versus AI

          Recital 12 of the E.U. Act distinguishes between AI and “traditional software.” Sometimes this distinction is easy to draw; in other cases, it is not so clear. The Royal Institution defines regular computing as “predefined instructions being pre-programmed to allow for the processing of data and productive of desired outcomes,” whereas with AI, “[m]achines learn, adapt and make decisions based on data without explicit programming.” While the difference between traditional software and generative AI is clearer, it is less so when comparing traditional software with standard AI. Plus, some software can self-learn, as with Find Waldo. As such, how Recital 12 will apply to Find Waldo is unclear.

                     C.        Manipulative techniques

         Recital 29 of the Act prohibits subconscious messaging or influencing. It discusses how such methods can have “materially distorting” effects on the mental functioning of users. In the hypo, Waldo AI uses bird sounds to influence purchasing decisions. This is likely prohibited. But questions remain about more subtle AI-driven messaging, like images or text that aren’t so obvious. This also implicates the disclaimer of warranties between the parties: it is an open question whether the disclaimer would protect Waldo from the E.U. AI Office when there is misuse by O’Neill in France.

                     D.        High risk

         High-risk uses of AI get different treatment. This is covered in Recital 75 of the Act. “High risk” is elsewhere defined to include systems used in, among other things, toys, aviation, cars, the management of critical infrastructure, education or vocational training, and law enforcement. With Waldo AI, there likely isn’t a “high risk” use. But the question is whether uses that influence consumer behavior to purchase potentially harmful products can be considered “high risk.”

                     E.         Ownership

         Can O’Neill own the works it creates using your ChatGPT-based AI? This is still an outstanding issue. In the U.S., AI generally cannot be the author of a work – including ad text. The same is generally true in the E.U. Whether the scraped data constitutes fair use has not yet been decided. In the E.U., there is generally no fair use doctrine. So, when doing business there with your AI products, these nuances should be considered.

        Ownership of AI creations is tricky. In a recent decision from the United States District Court for the District of Columbia, the court found that works created wholly by AI cannot be copyrighted. Others in the legal community disagree – they argue for a more permissive approach to AI ownership in the U.S., as argued in this amicus brief from Thaler v. Vidal, et al., a case pending in front of the U.S. Supreme Court. That being said, those fancy designs and graphs created by Waldo AI likely aren’t subject to copyright in the District of Columbia. Whether and how to get around this in California is another story. The moral is that you’d need to think about this before scaling Waldo AI in the U.S.

        In the E.U., the degree of human input into Waldo AI’s designs is relevant. While AI cannot be listed as the author of your Waldo AI design in the E.U., whether a human could be listed is another question. In the E.U., the “degree of human intervention” and the “autonomy of AI” should be taken into account. Whether the limitation-of-warranty provision would protect Waldo here is also an open question.

            III.       Conclusion

        The Act aims to rein in the uses of AI. Whether and how it will affect U.S. businesses seeking to scale into the E.U. remains to be seen. But, given the foregoing language, due consideration should be paid to the differences between the jurisdictions.

        A modified version of this article is forthcoming in California Lawyers Association, New Matter.
