The Promise and Peril of Personalization

Authored by Brett Frischmann and Deven Desai

Google, Amazon, and many other digital tech companies celebrate their ability to deliver personalized services. Netflix aims to provide personalized entertainment. Advertising companies suck up data so they can deliver personalized ads. Financial services, insurance, and health care companies seek to use data to personalize their services. Faith in personalization is so strong that some legal scholars now advocate for personalized law. And why not?

Personalization makes everyone feel good, like you’re being catered to. Heck, who really wants a standardized product sold for a mass market? We’ll tell you who.

Governments, social engineers, educators, administrators, advertisers. They want YOU to be standardized. That’s the paradox. In this new world, everything seems personalized to you, and at the same time a new, standardized thing is being made. Wake up. You’re the product.

You’re participating in what economists call a “multi-sided” market. Stop and think about what’s being made and what’s being sold to each side. In these markets, there are buyers, sellers, and market-making intermediaries that sell something to everyone. One of the things the intermediary sells is you.

I. Understanding Personalization

Like most things, personalization can mean different things, take many forms, and be good or bad for different folks. It’s helpful to run through a few different examples, starting simple and gradually increasing the complexity.

Let’s start with a classic example—the tailor. Imagine getting a custom, tailored dress or suit. Someone takes your measurements, perhaps more than 20 of them. They pay attention to how your body is different from everyone else’s. They know and appreciate that you are unique. Next, they send the data off to folks who cut and sew your outfit. Then they call you back to the store to make sure the fit is correct and adjust it if need be. In the end, you have not just a new dress or suit. You have an outfit that moves with you and makes you look good, and you probably feel great each time you notice how well it fits. Your outfit is personalized.

To generalize, this example involves A sharing personal information with B so that B can use the information to customize a product or service to satisfy A’s needs (preferences). Another decent example along these lines is the conventional doctor-patient relationship. Patients provide doctors with personal information that enables the doctor to tailor diagnosis, advice, treatment, and so on. For both the tailor-customer and doctor-patient examples, personal data is an input used to improve an output (dress, suit, medical treatment) such that the improvement directly serves the interests of the person whose information is being used.

Now, let’s consider a different form of personalization: price discrimination. Price discrimination occurs when B sells the same product to different people at different prices. B uses personal information about customers to personalize prices, and this business practice allows B to extract more money from consumers. Economists debate the net welfare effects of price discrimination, since some consumers may be better off and others worse off. But that is not the point for this discussion.

The point is that B uses personal information to customize prices only to the extent that doing so furthers B’s interests, namely getting A to pay the most A is willing to pay. In general, B uses personal information about A to customize something, but primarily to further B’s own interests. In short, the benefits of personalization flow to B.
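To make this second model concrete, here is a minimal, purely illustrative sketch of personalized pricing; the customers, willingness-to-pay estimates, and costs are hypothetical stand-ins, not data from any real firm.

```python
# Minimal illustrative sketch of the price-discrimination model: B uses
# personal data to estimate each customer's willingness to pay (WTP) and
# charges just under it. All customers, WTP figures, and prices are
# hypothetical; this is not a model of any real firm.

UNIT_COST = 40.0       # B's cost of supplying one unit
UNIFORM_PRICE = 60.0   # the single price B would post without personal data

# Hypothetical WTP estimates that B infers from personal data about each A.
ESTIMATED_WTP = {"A1": 55.0, "A2": 80.0, "A3": 120.0}

def personalized_price(wtp, margin=0.95):
    """Charge slightly below the estimated willingness to pay, never below cost."""
    return max(UNIT_COST, margin * wtp)

def revenue(prices):
    """Total revenue from customers whose WTP meets or exceeds the price asked."""
    return sum(price for name, price in prices.items()
               if ESTIMATED_WTP[name] >= price)

uniform = revenue({name: UNIFORM_PRICE for name in ESTIMATED_WTP})
personalized = revenue({name: personalized_price(wtp)
                        for name, wtp in ESTIMATED_WTP.items()})

print("Revenue with one uniform price:   ", uniform)       # 120.0 (A1 is priced out)
print("Revenue with personalized prices: ", personalized)  # 242.25 (everyone buys, near WTP)
```

In this toy version, personalization roughly doubles B’s revenue. It also illustrates why economists find the welfare question genuinely mixed: A1, who was priced out of the uniform market, now gets the product, while A2 and A3 pay substantially more than they otherwise would.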

Now imagine personalization designed to serve a social goal, for example, encouraging donations after a disaster. People may have different motivations to act. For example, one person may be moved to donate because of public health concerns while another person may be moved by the thought of people starving. B may send messages customized to each person’s different motivations. In this example, and others like nudging to promote voting or environmental protection, personalization primarily benefits third parties—neither A nor B, but instead C.

These models show that personalization is just a tool, a means, and that we need to pay attention to what is being personalized and to what and whose ends.

II. Personalization of and for Whom?

In many cases, it is very difficult to judge personalization because it can fit any of the models we’ve discussed. For example, let’s look at nudging, an ascendant form of social engineering. Nudge designers (called “choice architects”) aim to improve decision-making in contexts where humans tend to act irrationally or contrary to their own welfare. Leveraging insights from behavioral science, choice architects use low-cost interventions to help people make better choices in important policy areas like personal finance and health care.

Not all nudges work the same way. Some involve personalization; others do not.

A standard example of a non-personalized nudge involves retirement planning. An employer could (i) leave it to employees to set up their 401(k) plans and decide how much to save or (ii) set up the plans by default so that a predetermined amount is saved automatically and allow employees to make adjustments. Saving by default is an architected choice that relies on two facts: first, people often fail to set up a retirement plan, which is a social problem, and second, people tend to stick with default rules. Thus, by choosing option (ii), the choice architect nudges people to start with the better position for them and society.

Personalized nudging, however, might work differently. Imagine choice architects gain access to treasure troves of personal data to customize the nudges they design. This could mean one of two things. Corresponding to the first model, it could mean that choice architect B uses data about A’s personal preferences and values to shape the nudge’s objective. For example, B might personalize the default saving rule for various employees, perhaps by better matching the initial savings amount to individuals’ personal profiles (e.g., based on age, personal discount rates, health, etc.) or even by customizing default investment allocations (e.g., based on predicted risk tolerances). The idea is simple. Like our custom tailor, the choice architect B could help A achieve outcomes that B knows A wants. Such intimate knowledge of A’s mind can be quite powerful, but frankly (and thankfully, in our minds), it’s incredibly difficult to obtain. But this is not, to our knowledge, what personalized nudging usually means.

To the contrary, the underlying objective of personalized nudging is still to induce rational behavior. What constitutes rational behavior is not itself personalized. It remains a general, seemingly objective standard. Personalization helps B identify and overcome the specific impediments and obstacles to rationality faced by A. B can custom fit the nudge—the stimuli that shape the choices perceived by A—to engineer a rational response.

Personalization empowers choice architects, not the human beings subject to nudging. Personalized choice architecture means personalizing the stimuli to achieve the same end: rational responses. In contrast with the price-discriminating suppliers in our second simple model, choice architects do not pursue their own individual interests; instead, they continue to pursue an idealized conception of rational actors and what they, in theory, should want. In a sense, this blends the second and third models because choice architects pursue their vision of a social good.

Another complex example is personalized education, which has been a hot trend for many years. The idea of tailoring educational services to the needs of different populations and individual students seems a laudable goal. It can help overcome significant distributional inequities. Yet, as Neal Stephenson’s 1995 science fiction classic, The Diamond Age, shows, personalized education can have more than one goal. In the novel, a special book, the Young Lady’s Illustrated Primer, provides a personalized education. Bethanie Maples describes the Primer as “an artificial intelligence agent for human learning/cog dev. Kind of the silver bullet for education.” We’re nowhere near that ideal. But we’re headed in that direction, and it’s worth reconsidering whether that’s the path we should be on.

Even Stephenson’s fictional Primer had another, hidden agenda. As Professor Meryl Alper sums up, the Primer’s overall design was to teach the student the designer’s view of how the world should be. Each student’s personalized education was geared to encourage a little rebellion against society, but only as a means of getting them to return to their tribe (Stephenson, 1995, p. 365). The customization served the designer’s ends, not necessarily the students’.

As the tech sector infiltrates education, it promises data-driven, tech-enabled personalization tools. We must ask, however, what is being personalized and to what and whose ends. The market dynamics are quite difficult to unpack. It often seems that, in addition to or despite the interests of schoolchildren, technology companies leverage personalization in their own interests—whether by collecting data, testing proprietary algorithms, or simply extending their brands to the next generation of impressionable consumers. When you dig into personalized education, you realize that it’s not so easy to determine what’s being personalized or for whom.

Personalization tools could help teachers handle the range of capacities in their classes better. As with personalized nudging, this could mean setting personalized goals tailored to each student’s current capacities and potential outcomes. For example, students with special needs often require highly personalized lesson plans tailored to their specific goals. Personalized lessons might allow schools to engage each student intellectually and yet keep them with their age and emotional cohort.

Yet for many public-school students, personalization seems more focused on personalizing the stimuli (lessons, educational materials, practice problems, etc.) necessary for students to satisfy standardized tests and other outcome measures. Students vary along many dimensions. For example, some 8-year-olds may be better at math than reading, and vice versa. Current schooling tends to lump these kids together and expect them all to learn and do well at the same pace—or at least, to reach the same benchmarks by the end of each school year.

Education technology companies are often Janus-faced. They offer free or heavily discounted tech to schools, and they do so with what seem to be public-minded, even altruistic intentions. That is one face. Sometimes the other face is their profit-maximizing side: they’re collecting data, training algorithms, and building brands and future consumers. Sometimes the other face is not profit-maximizing but ideological, ranging from faith in data-driven solutions to fetishized computer interfaces. For example, when the non-profit Gates Foundation promotes its ed-tech agenda, there’s a deep-seated ideological commitment, a determinist faith in technology—comparable to the choice architect’s commitment to the ideal of rationality. Simply put, there’s almost always another face, hidden but relevant.

Much of the digital networked economy involves the same type of behavior. Consider the following model of a commercial website:

A shares personal information while interacting with a website owned by B. B collects the information, and so does C, an advertiser with whom B has a side-agreement. B uses A’s personal information to customize the content on the website to satisfy A’s personal interests, and C delivers customized advertisements to A. For example, A’s geolocation data allows B to customize its news feed and C to deliver local ads.

So far, so good. This model seems to follow from our first model, because B and C are using A’s personal information to personalize goods and services to satisfy A’s personal interests. This is usually how digital tech companies and the advertising industry explain their businesses.

But might the scenario better fit the second model? Might B and/or C pursue their own interests, seeking to extract benefit irrespective of A’s interests? If we focus on the side-agreement between B and C as well as other potential uses of the data collected by C, we can find support for this view. In many real-world cases, B has many side-agreements with Cs, Ds, Es, etc., and these companies may have their own series of side-agreements. Read Facebook’s Data Policy and see if you can identify the various companies, and then see if you can track down their affiliates and partners. If you thought Cambridge Analytica stole personal data from Facebook users, you’re wrong. Cambridge Analytica obtained data pursuant to an agreement it had with Facebook. Also, if you thought Cambridge Analytica was an exceptional case, you’re wrong. It was normal, just one of many similarly situated companies. Finally, if you think what we’re describing is just a Facebook problem, you’re wrong again. Check out any major commercial website, and there’s a very good chance you’ll find similar privacy policies and notice of side-agreements.

Usually, these side-agreements are effectively hidden, meaning that A may not know they exist, much less what they say about how A’s personal data will be collected, managed, exchanged, secured or used. (Did you actually find any side-agreements that you could read at Facebook or elsewhere?) Bear in mind that the agreements may be effectively hidden in broad daylight. For example, Kinsa, a company that makes digital thermometers and works closely with advertisers, tells you that it will share data, “When we are contractually obligated to disclose it.” The possibility of side-agreements is transparent, but so what? Even if disclosed, it may be impossible for A to gain meaningful knowledge of and about side-agreements due to their volume, density, and complexity. As Frischmann and Selinger analyze in Re-Engineering Humanity, this is the same problem we face regularly with terms of service contracts and privacy policies for websites and apps generally.

The multi-sided nature of these markets complicates the analysis in two obvious ways. First, there are simply more parties to keep track of and correspondingly more, sometimes competing, ends. Second, and more perversely, people and their information are often products.

III. How Personalization Can Lead to Homogeneity[i]

“People are products” is now cliché. That tells you something is wrong. It implies normalization of some rather heinous ideas. To buy and sell people rings of slavery. It reduces people to things, objects, resources, or mere means. As Desai wrote, “Treating a person like a resource is [a fundamental] error.” Somehow, magically to our minds, these negative associations fall away when the medium of exchange is digital data and human attention.

We need to examine the role of personalization in programming people as products. So, let’s consider how personalization of inputs—stimuli, choice architecture, message, inducement—enables standardization of outputs—response, behavior, beliefs, and perhaps even people.

A few familiar examples show how personalization can lead to homogenous behavior. Suppose we’d like to induce a group of people to behave identically. We might personalize the inducements. For example, if we’re hoping to induce people to contribute $100 to a disaster relief fund, we might personalize the messages we send them. The same applies if we’re hoping to nudge people to visit the doctor for an annual check-up, or to get them to click on an advertisement. Effective personalized ads produce a rather robotic response—clicks. Simply put, personalized stimuli can be an effective way to produce homogenous responses.
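To make that concrete, here is a minimal, purely illustrative sketch of the disaster-relief example; the people, motivations, and deterministic response rule are our own hypothetical stand-ins, not real data.

```python
# Minimal illustrative sketch of personalized inputs producing a homogenous
# output: each person gets a different message, but the system is judged by
# whether everyone performs the same action (a $100 donation). The people,
# motivations, and response rule below are hypothetical.

MESSAGES = {
    "public_health": "Your gift funds clean water and vaccines in the disaster zone.",
    "hunger": "Your gift puts meals in front of families who have lost everything.",
    "social_proof": "Most of your neighbors have already given to the relief fund.",
}

# Hypothetical profiles assembled from collected personal data.
PEOPLE = [
    {"name": "A1", "dominant_motivation": "public_health"},
    {"name": "A2", "dominant_motivation": "hunger"},
    {"name": "A3", "dominant_motivation": "social_proof"},
]

def personalize_message(person):
    """Select the stimulus predicted to work best for this particular person."""
    return MESSAGES[person["dominant_motivation"]]

def run_campaign(people):
    """Send personalized inputs; record the single standardized output sought."""
    results = []
    for person in people:
        message = personalize_message(person)
        # In this toy model a well-matched message always elicits the same
        # response; in reality it would only raise the probability of it.
        results.append({"name": person["name"], "message": message, "donation": 100})
    return results

for result in run_campaign(PEOPLE):
    print(result["name"], "-> donates $", result["donation"], "after:", result["message"])
```

The inputs differ person by person; the measure of success is a single, uniform behavior.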

This personalized-input-to-homogenous-output (“PIHO”) dynamic is quite common in the digital networked environment. What type of homogenous output would digital tech companies like to produce? Often, companies describe their objective as “engagement,” and that sounds quite nice, as if users are participating actively in very important activities. But what they mean is much narrower. Engagement usually refers to a narrow set of practices that generate data and revenues for the company, directly or via its network of side-agreements with advertisers, data brokers, app developers, AI trainers, governments, and so on.

For example, Facebook offers highly personalized services on a platform optimized to produce and reinforce a set of simple responses—scrolling the feed, clicking an ad, posting content, liking or sharing a post. These actions generate data, ad revenue, and sustained attention. It’s not that people always perform the same action; that degree of homogeneity and social control is not necessary, either for Facebook’s interests or for our concerns. Rather, for many people much of the time, patterns of behavior conform to “engagement” scripts engineered by Facebook.

A very similar story can be told for many other platforms, services, and apps. Of course, the business models, strategies and even the meaning of “engagement” vary, but PIHO remains a consistent logic. It’s a potent blend of Frederick Taylor’s “scientific management of human beings,” B.F. Skinner’s operant conditioning, and modern behavioral engineering methods, such as nudging.

PIHO requires personal data and sufficient means for translating such data into effective inducement and reinforcement. In other words, whatever intelligence about a person is gleaned from collected data must be actionable and must affect the person’s behavior.

Some types of data are more useful than others for inducing and reinforcing the desired response/behavior, and the relative utility of different data types may vary across people and contexts. Not surprisingly, many digital tech companies collect as much data as possible, either directly from consumers or indirectly from data brokers or partners with whom they have side-agreements. They run experiments on consumers. They use various data processing techniques to identify patterns, test hypotheses, and learn about how people behave, who they interact with, what they say and do, and what works best in shaping their behavior to fit the companies’ interests.

For example, companies map users’ social graphs to learn about their relationships, including the strength of different influences on individuals. Just as a choice architect who wants to nudge people to vote or file tax returns might let them know how many neighbors have done so, a social media company can leverage social-graph insights to induce people to log in, create posts, read posts, share posts, and more. The stimulus might be simple: a personalized email telling a user that a friend tagged them in a post. The goal is to get the user to the site (or the app), so that the user comments, or posts, or tags. The hope is that the user sees ads and clicks, reads and shares posts, plays games. In short, the ideal is that the user does anything that reinforces the habit of staying logged in and using the service. If that happens, the company has succeeded. It has induced and reinforced engagement.

When digital tech companies deliver personalized services and content, there is always a feedback loop. They’re constantly collecting data and learning about you to fuel the service. But that’s just the first loop to be aware of. Additional feedback loops cross sectors and span the networked environment; digital tech companies often have side-agreements with each other. Ever notice those Facebook, Twitter, and other buttons on websites you visit? Ever use your social media credentials to log in to other sites? (If you really want to see feedback loops and how data flows, check out your advertising preferences on Facebook.)

What’s tricky about PIHO on digital platforms is that the personalized stimuli do, to some degree, satisfy the interests of users. In other words, personalization benefits users directly because they’re getting news, status updates, videos, and other stimuli customized to their preferences. Inducing “engagement” requires that users get something they want, after all. But that doesn’t mean the benefits of personalization flow exclusively or even mostly to users. Personalization makes it cheaper and easier both to serve users and to script their behavior.

It can even go deeper than that. Feedback effects and repeated and sustained engagement can, but don’t necessarily, shape beliefs, expectations, and preferences. When coupled with design and engineering practices informed heavily by behavioral data (and not necessarily personalized), addiction, dependence, and a host of other concerns arise. In the moment, people may be satiated, but that doesn’t mean they like who or what they’ve become if/when they reflect on their behaviors.

Engagement could mean something more, something great for humanity, and digital networked technologies could pursue such engagement, but that’s not really what we get in our modern digital networked world. Digital tech companies could, in theory, personalize goods and services in a manner geared toward your interests. Instead, they mostly pursue their own interests and cater to those on the other sides of the market—that is, those who pay the bills: advertisers, partners collecting data and training AI, governments, etc. It’s just capitalism at work, one might say, and that wouldn’t be wrong. But it doesn’t justify the practice. Nor does it excuse the potentially dehumanizing consequences of treating people as products to be standardized. Digital tech companies adopt and, worse, perpetuate an impoverished view of humans as predictable and passive consumers who (can be trained to) behave according to standardized scripts. In that world, we are nothing more than programmable cogs.

We need to change the social script that not only permits but also enables and encourages such practices.

IV. What to Do About Personalization

We advocate two major steps to help address the problems of personalization.

First, abandon blind faith in personalization—especially faith that it works and that it works for you. Understand what’s being personalized and for whom. Ask critical questions, and of course, resist being reduced to a product.

Second, change the balance of power. This is, of course, a huge task that potentially implicates a wide range of policy interventions, from competition law to privacy. For now, let’s focus on contracts. Specifically, we suggest reforming contract law in those multi-sided markets where consumers are treated as products.

Faith in personalization is at an all-time high. Technology companies, advertisers, academics, everyday retailers like Wal-Mart and Target, app developers, venture capitalists, and countless start-ups all seem to have drunk the personal-data-spiked punch. They’re all collecting personal data. Why? They think they can put it to good use. Faith in personalization has gone so far that prominent legal academics advocate for personalized law.

In Personalizing Negligence Law, for example, legal scholars Omri Ben-Shahar and Ariel Porat ask, “If Big Data is reliably predictive in high-stakes industries like financial services, insurance, and increasingly in medicine, why not utilize this predictive power in law?” The question is glib. The authors don’t engage the substantial ongoing debate about whether Big Data is reliably predictive, nor do they distinguish Big Data from personalization. They simply assume personalization works well. Yet in many situations it doesn’t, for reasons ranging from biases in data collection to poorly designed software. Some folks take comfort in the prospect that better data-gathering methods and continual updating of software will reduce the problems. That may be an appropriate attitude in the context of online shopping, although even that is debatable; it is not appropriate for law. The stakes are dramatically different. We need to understand and address the problems directly, not extend personalization to our justice system based on faith in the tech and those developing and selling it.

More fundamentally, even if personalization works, we should critically question its extension into law (and other areas of our lives). It’s unlikely personalized law fits our first model: Customizing law to everyone’s personal ends often would undermine the very purpose of laws designed to apply generally and equally to all.

It is much more likely that personalized law will turn out to be a means of rent extraction by those doing the personalization and/or of social control via the PIHO dynamic. For some advocates, the idea that personalized stimuli can produce homogenous behavior may be appealing because it would produce uniform and efficient compliance. For us, however, it is dystopian. Laws evolve over time in part because people have some freedom to challenge the status quo and take risks that unsettle those in power. Engineering rigid determinism into law risks eliminating that freedom. And so, just as we should resist reducing human beings to resources, we must resist reducing justice to a computational problem.

We concede that this is a debate worth having. But that will only happen if we abandon blind faith in personalization.

Reasoned debate is one thing; changing the balance of power is another. We advocate severing the contractual strings that make us puppets in the multi-sided markets where personalization thrives. Part of Re-Engineering Humanity focuses extensively on the string between users and the market-making intermediary (e.g., the consumer-facing website or app); the authors explain how the click-to-contract interface we’re all familiar with engineers homogenous behavior. Most people, most of the time, follow the script and comply by clicking. Cutting that string would require contract law reform, e.g., refusing to enforce automatic contracts (between A and B).

But what about the networks of hidden side-agreements? Hidden strings aren’t easily cut, and so a first step is bringing such agreements to light for examination. While transparency may be necessary, it will not be sufficient. As we previously suggested, the volume, density, and complexity of those agreements would still be overwhelming for most of us. Yet even if we could overcome that obstacle, a more formidable one remains—the absence of third-party beneficiary rights.

Suppose B (e.g., Facebook) has a side-agreement with C (e.g., a data analytics firm) and that side-agreement requires C to use state-of-the-art security. B’s customers—the As (e.g., Facebook users)—are third-party beneficiaries of that security. If C breaches its contract with B and fails to use adequate security, B can sue C for breach of contract, but the As cannot, even though they may suffer the harms (or fail to receive the benefits of the promised security).

There are three reasons for this. First, courts generally disfavor recognizing third-party beneficiary rights in contract law. Second, we suspect most B-C side-agreements expressly disavow creating any third-party beneficiary rights. Third, most A-B agreements expressly disavow creating any third-party beneficiary rights. For example, Facebook’s Terms of Service state, “These Terms do not confer any third-party beneficiary rights.” Google’s Terms of Service state, “These terms control the relationship between Google and you. They do not create any third party beneficiary rights.” And so, you have no rights when it comes to deals between personalization companies and the companies with which they have side-agreements.

Reforming the law to expressly recognize third-party beneficiary rights in these multi-sided market contexts and disallow contractual waiver of such rights would be an important step in shifting the balance of power. While this might seem radical, it is not. The European Union has adopted a similar approach to privacy regulation based on the priority of human dignity as a fundamental value.

 

Brett Frischmann is the Charles Widger Endowed University Professor in Law, Business and Economics, Villanova University. His latest book is Re-Engineering Humanity (Cambridge University Press 2018).

Deven Desai is an associate professor at the Scheller College of Business at the Georgia Institute of Technology. Among other professional experience, he has worked for Google, Inc. as a member of the policy team. He has received research support in the form of unrestricted gifts made to the Georgia Tech Research Institute by Facebook and Google.



[i] Part III of this essay was first published by Scientific American as How Personalization Leads to Homogeneity: Tech companies are perpetuating a bleak view of humans as programmable cogs, at https://blogs.scientificamerican.com/observations/how-personalization-leads-to-homogeneity/.

 
