Patchwork AI Regulations

Where’s Waldo LLC (“Waldo”), your U.K.-based digital advertising company, just got funded. Wonderful news! But it also just received a notice from the California Attorney General. Not wonderful. Why? In the AG’s view, Waldo has been doing business in California since 2019. Along the way, Waldo’s AI has been scraping and storing digital personality profiles of 100,000 California residents. The software then creates visual representations of this data for licensing to brands like Nike. Nobody told you that the California Consumer Privacy Act (“CCPA”) regulates the collection of such information. Until now. Read on to learn about some of the laws in the U.S. and beyond that regulate AI.


There are various AI products that scrape consumer data – including from websites. Browse AI is among them. Whether and how these products are lawful depends on the jurisdiction you are in.

For example, California has two privacy laws. The first, the California Consumer Privacy Act (“CCPA”), was passed in 2018. The CCPA only covers for-profit businesses that (a) do business in California, (b) collect personal information of consumers, and (c) satisfy one of the following criteria: (1) have annual gross revenues over $25,000,000.00, (2) annually receive, sell, or share personal information of 50,000 or more California residents or devices, or (3) derive 50% or more of their annual revenue from selling consumers’ personal information. The CCPA does not generally apply to non-profits – unless the non-profit is controlled by a covered business. More recently, California passed a second privacy law, the California Privacy Rights Act (“CPRA,” a/k/a Proposition 24).
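The CCPA’s three-part applicability test above can be sketched as a simple check. This is an illustration only, using the thresholds from the CCPA as originally enacted in 2018; the function name and parameters are hypothetical, the statutory thresholds are periodically adjusted, and actual coverage turns on facts best analyzed with counsel.

```python
def ccpa_may_apply(
    does_business_in_ca: bool,
    collects_personal_info: bool,
    annual_gross_revenue: float,
    ca_consumers_or_devices: int,
    pct_revenue_from_selling_pi: float,
) -> bool:
    """Rough sketch of the CCPA applicability test (2018 thresholds).

    A for-profit business is covered if it does business in California,
    collects consumers' personal information, and meets at least one of
    the three statutory thresholds.
    """
    if not (does_business_in_ca and collects_personal_info):
        return False
    return (
        annual_gross_revenue > 25_000_000       # threshold (1): revenue
        or ca_consumers_or_devices >= 50_000    # threshold (2): scale
        or pct_revenue_from_selling_pi >= 50.0  # threshold (3): data sales
    )

# Waldo: does business in CA and profiles 100,000 California residents,
# so threshold (2) is met even with modest revenue and no data sales.
print(ccpa_may_apply(True, True, 5_000_000, 100_000, 0.0))  # True
```

On these hypothetical facts, Waldo trips threshold (2) alone – which is why the volume of California profiles matters as much as revenue.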

There are various issues to address when doing business in California with AI. In Waldo’s case, one is going to be whether Waldo is doing business in California at all. Various factors bear on this determination, and it is best to consult with qualified counsel. The same is true of “personal information.” Some information is not personal. For example, information scraped from governmental sources is generally not “personal information” in California. If Waldo took publicly listed California law licenses and used that information for a mailing spree, such information wouldn’t be “personal.”

The same is true of Waldo’s dealings in other jurisdictions. In the E.U., there is the GDPR; in the U.K., the Data Protection Act. How and in what ways these jurisdictions regulate Waldo AI’s use of data is something to think about.


Ownership of AI creations is tricky. For example, in a recent decision from the United States District Court for the District of Columbia, the court found that works wholly created by AI cannot be copyrighted. Others in the legal community disagree – arguing for a more permissive approach to AI ownership in the U.S., as in a case in front of the U.S. Supreme Court. That being said, ostensibly, those fancy designs and graphs created by Waldo AI aren’t likely subject to copyright in the District of Columbia. Whether and how to get around this in California is another story. The moral is that you’d need to consult with qualified counsel before scaling Waldo AI in the U.S.

In the E.U., the degree of human input into Waldo AI’s designs is relevant. While AI cannot be listed as the author of your Waldo AI design in the E.U., whether a human could be listed is another question. There, the “degree of human intervention” and the “autonomy of AI” should be taken into account.


In the example above, assume that Waldo AI creates data designs that Nike then uses to age discriminate. Can your company be liable for its AI being used in this way? A recent U.S. Supreme Court case, Gonzalez v. Google, addresses the liability of AI owners for offline illegal conduct. In Gonzalez, plaintiffs whose family members were killed in various terrorist attacks – including in Paris – sued Google. The allegation: that social media websites like YouTube, which is owned by Google, enabled the crimes to occur by permitting algorithmic connections between the perpetrators. A big part of the answer will be whether and how your technology enables such discrimination – the kind of issue raised in the case. So the answer will depend on a number of factors. But the point of this article is not to answer the question. Rather, it’s to raise it for your consideration when scaling in different countries.


When launching your own or another company’s AI product, due care needs to be taken. Not only are there software considerations in satisfying your target market – here, Nike’s advertising needs – but there is also a patchwork of laws and regulations concerning, among other things, privacy, ownership, and liability that needs to be tackled.

A version of this entry appeared on the AI Accelerator Institute's website.
