Stanford CIS

DALL-E Does Palsgraf

By Bryant Walker Smith

A new article, written in 2022 and published in 2023 -- with pictures!

The article asks a leading AI image-generation tool to illustrate the facts of a leading law school case. It introduces machine learning generally, summarizes the seminal case of Palsgraf v. Long Island Railroad, presents images that the tool created based on the facts as the majority and the dissent recount them, and then translates this exercise into lessons for how lawyers and the law should think about AI.

A few of its takeaways:

1. Humans, societies, and legal processes are also nondeterministic systems!

2. Our status quo is not perfect. It is filled with frequently unacknowledged distortions, ambiguities, and uncertainties -- with which AI tools can force a reckoning.

3. AI tools are like funhouse mirrors: They can "exacerbate, mitigate, reinforce, or challenge" existing problems such as invidious bias and invidious discrimination.

4. Discussions of AI often overlook the dangers of overreliance, which are common in many forms of automation.

5. AI tools will become not just the authors of but also the intended audience for many communications.

6. In the future, "debates will be much less about whether systems should be human or machine and much more about whether these systems should be centralized or decentralized: Should there be a single DALL-E or a million?"

More here.