“Tool Without a Handle: Mutual Transparency in Social Media”
“I wish that for just one time
You could stand inside my shoes
And just for that one moment
I could be you”
Bob Dylan – “Positively 4th Street”
----------------------------------
This blog post addresses some commonly discussed issues with social media platforms: both the personal (who sees content I share and why?) [1] and the professional (how should industry and policymakers respond to questions related to algorithmic selection of news feeds and online content?).
As with other posts, I write from the perspective of information technology as “tools.” It has been an implicit theme of this blog since the beginning that Internet tools work best when seen as such: when designed to further personal agency, accountability, and intention. Even if it is not descriptively complete, it is normatively useful to consider Internet technologies as “tools you use,” not “a place you go.” Social media platforms should, in this light, be oriented more towards enabling sharing by users that is directed and mutual, and less towards promoting content presumed to be popular.[2]
Several observers of issues related to “fake news” have pointed to the Facebook News Feed – and its preference for showing content that confirms users’ biases – as detrimental to civic discourse and global community,[3] something Facebook has acknowledged and is seeking to address.[4] Considering these “news feeds” as tools can help address common concerns with social media and similar platforms, such as their use for propaganda. Users and manipulators are hacking social media platforms for their own ends – in some cases malevolent, in others merely political, but distorting nonetheless.[5] The proper response is a form of “hacking back”[6] or, less colloquially stated, to redesign the tools to cultivate virtuous uses and to reduce the ability and incentives to use them to skew information.
This is an important concern from the perspective of service to users as well. I’ll wager every user of a social media platform has been puzzled as to why one shared item garners a broad response while another, which seems equally likely to generate interest, garners little or none. Social media platforms do offer some tools to allow customization of content displayed on a user’s own home page – for example, Facebook allows users to choose between “Top Stories” (content with which others interact most) and “Most Recent” (self-explanatory), as well as offering a “Ticker” feature to see what your friends are posting, item by item, and other configuration preferences.[7]
But these are primarily tools for controlling what one receives, as opposed to tools for controlling where one’s content appears and/or ensuring that what one creates actually reaches its audience. It is possible I’ve missed an innovation, but I’ve observed a relative dearth of tools for controlling what is seen prominently by others (as opposed to simply managing the scope of the audience, e.g., “friends,” “friends of friends,” or “public”). Settings exist for how to view and adjust preferences for what you see, but there are few comparable settings to influence who sees your posts within the organic News Feed.[8]
This may well be for legitimate business reasons – this approach makes more attractive those features through which Facebook may promote or advertise certain content in return for direct compensation.[9] It may also be that, either intuitively or through actual testing, platform providers have concluded such tools could too easily be misused, given the state of competition for attention. Nonetheless, if the mission of the company is to enable “sharing” – as opposed to simply passive viewing and reacting – then it’s suboptimal for users to have so little ability to control what they share and where it is seen.
Better tools for controlling where content appears could also help address concerns with the impact of social media user activity on mental health. A new report by the UK’s Royal Society for Public Health (RSPH), an independent charity, attempts to survey user reactions to consumption of social media information, including emotional issues related to personal identity, expectations, feelings of inadequacy, and “fear of missing out.”[10] These issues have much more to do with human psychology than with the design or operation of Internet tools, but to the extent design choices can contribute to better mental health, they should.
And the place to start is to recognize that human flourishing is better served by fostering a sense that one’s own creations matter.[11] It is more important to emotional health that people create and be heard than that they see or experience the latest, greatest stimuli (especially stimuli pre-oriented to a person’s presumed interests and biases). “I did that” and “you get me” are more rewarding emotionally, over the long term, than “I was entertained.”[12] By the same token, cultivating a sense of personal control is often identified as a key component in achieving better equity and opportunity for persons and groups who remain disadvantaged by social conventions or stereotypes.[13]
Hence, some modest proposals for social media platforms:
1) Allow account holders to send out invitations to view their own entire news feed, as if the viewer were that account holder. A recipient would need to opt in by accepting the invitation;
2) Allow account holders to subscribe to the news feeds of others who have opted to be open to general subscriptions (e.g., a Twitter follower could see not only the tweets of the person followed, but all of the tweets displayed to that person, except tweets from accounts that screen or approve their followers);
3) Allow account holders to signal to the algorithm who they think would like to see the post – an indirect way to promote a given post. That is, rather than tagging individuals directly (“this post involves you”), users could speak to the algorithm (“I think these people would like this post”) with the intent of nudging and training it based on user input. (A minimal sketch of what these mechanisms might look like follows below.)
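To make the first and third proposals concrete, here is a minimal sketch in Python of how a platform might represent an opt-in feed invitation and a non-binding audience suggestion. The class names, fields, and signal encoding are my own illustrative assumptions, not any existing platform’s API.

```python
# Illustrative sketch only; these classes and fields are hypothetical,
# not any real platform's data model.
from dataclasses import dataclass, field
from enum import Enum


class InviteStatus(Enum):
    PENDING = "pending"
    ACCEPTED = "accepted"
    DECLINED = "declined"
    RESCINDED = "rescinded"


@dataclass
class FeedInvitation:
    """Proposal 1: an opt-in invitation to view another account's news feed."""
    inviter_id: str
    recipient_id: str
    status: InviteStatus = InviteStatus.PENDING

    def accept(self) -> None:
        # Only a pending invitation can be accepted; the recipient must opt in.
        if self.status is InviteStatus.PENDING:
            self.status = InviteStatus.ACCEPTED

    def rescind(self) -> None:
        # The inviter can withdraw access at any time.
        self.status = InviteStatus.RESCINDED


@dataclass
class AudienceSuggestion:
    """Proposal 3: a non-binding hint to the ranking algorithm, not a direct tag."""
    post_id: str
    author_id: str
    suggested_viewer_ids: list[str] = field(default_factory=list)

    def as_ranking_signal(self) -> dict[str, float]:
        # One possible encoding: a small positive boost per suggested viewer,
        # which the ranker may weigh against all of its other signals.
        return {viewer_id: 1.0 for viewer_id in self.suggested_viewer_ids}


# Example: Alice invites Bob to see her feed as she sees it, and hints that
# Carol and Dan might like a particular post.
invite = FeedInvitation(inviter_id="alice", recipient_id="bob")
invite.accept()
hint = AudienceSuggestion(post_id="p1", author_id="alice",
                          suggested_viewer_ids=["carol", "dan"])
print(invite.status, hint.as_ranking_signal())
```

The point of the sketch is that both mechanisms are consensual and advisory: the invitation requires acceptance, and the audience suggestion is merely one more input the ranking algorithm may weigh, not an override of it.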
These innovations would give users more options and control over what is shared without diminishing control over what is received. Locating more control in the hands of those who share content is a preferable approach to addressing concerns with “bubbles” and offensive content than locating more control in the hands of the platform host. Indeed, some have called for treating social media platforms as publishers responsible for the content they host.[14] I believe quite the opposite; classification of social media companies as publishers would severely constrain the ability of users to share and direct views and information, as legislators in the US[15] and the EU[16] have long recognized.
Some thoughts about how these innovations would work:
1) It would be reasonable for social media platforms to first pilot the ideas to see how they work in practice;
2) None of this would involve any change in privacy settings or privacy controls: if a follower of a news feed were not a “friend,” then posts in the feed available only to “friends” should not be displayed (see the sketch after this list);
3) Whatever algorithm is used by the platform does not have to change artificially (though it is contemplated that it will change organically over time as it better learns from user interactions with the service);
4) Account holders could, at any time, rescind invitations or block users from seeing their news feed;
5) If one user extends an invitation to “see what I see,” the algorithm could factor in whether the invitation is accepted or declined in making future decisions as to what to display to the recipient.
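As a rough illustration of points 2 and 5 above, the sketch below shows how a mirrored feed could keep honoring each post’s own audience setting, and how an accepted or declined invitation could be recorded as one more input to the ranking algorithm. The helper functions and data structures are hypothetical, offered only to show that nothing about the existing privacy model needs to change.

```python
# Hypothetical helpers; nothing here reflects any real platform's internals.
from dataclasses import dataclass


@dataclass
class Post:
    author_id: str
    audience: str   # e.g. "public", "friends", "only_me"
    text: str


def visible_to_viewer(post: Post, viewer_id: str,
                      friends_of: dict[str, set[str]]) -> bool:
    """Point 2: a mirrored feed must still honor each post's own audience."""
    if post.audience == "public":
        return True
    if post.audience == "friends":
        return viewer_id in friends_of.get(post.author_id, set())
    return viewer_id == post.author_id  # "only_me" and anything unrecognized


def mirror_feed(host_feed: list[Post], viewer_id: str,
                friends_of: dict[str, set[str]]) -> list[Post]:
    """Show the host's feed to an invited viewer, minus posts they may not see."""
    return [p for p in host_feed if visible_to_viewer(p, viewer_id, friends_of)]


def record_invite_outcome(signals: dict[tuple[str, str], float],
                          inviter_id: str, recipient_id: str,
                          accepted: bool) -> None:
    """Point 5: an accepted or declined invitation becomes one more ranking signal."""
    signals[(inviter_id, recipient_id)] = 1.0 if accepted else -1.0


# Example: Bob, a friend of Alice but not of Carol, views Alice's mirrored feed.
friends = {"alice": {"bob"}, "carol": {"alice"}}
feed = [Post("alice", "friends", "a friends-only post"),
        Post("carol", "friends", "visible to Alice, but not to Bob"),
        Post("carol", "public", "a public post")]
print([p.text for p in mirror_feed(feed, "bob", friends)])

signals: dict[tuple[str, str], float] = {}
record_invite_outcome(signals, "alice", "bob", accepted=True)
```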
These innovations could create several benefits for platform users and providers alike. In addition to the benefits noted above with respect to user sharing and emotional health, those exposed to misinformation in their own news feed may, by viewing other news feeds, be exposed to better information in ways that are less likely to be rejected – e.g., where a trusted friend’s news feed shows credible sources with different views, the urge to reject those views may not be as strong.[17]
There would likely also be benefits for the platform providers themselves, in the form of increased site visit time involved in checking more than one news feed, reduced pressure to expose details of proprietary and competitively sensitive algorithms, and the ability to retain a legal and commercial identity as a platform, rather than as a publisher.
Finally, thinking about these innovations matters because questions about algorithms will not stop with those that power news feeds or similar features of social media platforms. A wide variety of practical and ethical questions exist regarding artificial intelligence (“AI”) systems, including those known as “pervasive autonomous systems.”[18] Wide use of algorithms in such systems feels inevitable and accordingly the question becomes how to best ensure appropriate transparency for their decision-making processes and outcomes.
As technology ethicist Shannon Vallor has observed, it’s insufficient to think of technologies themselves as susceptible to a right/wrong analysis, or to frame questions as simply how we can avoid the threat of technology to humanity.[19] Rather, there are right and wrong ways of using technologies, and that distinction should inform designers and operators as to features that encourage the right uses. Here, where it’s infeasible or unfair to insist on unlimited transparency for proprietary technologies (which could undermine both competition and service security), the next best option is fuller, consensual transparency as to the various outcomes or outputs from such technologies.
Consider an AI tool designed to generate answers to common commercial requests in areas historically susceptible to unlawful discrimination, e.g., shopping for homes or employment searches. Whether the AI tool runs afoul of the law by engaging in discrimination – or even in behavior undesirably bordering on discrimination – would require empirical research (if the conditions were met to allow for such),[20] as has been done for online advertising delivery.[21] And, accordingly, a wide variety of scholars, regulators, and technical experts have for some time urged greater analysis of algorithmic technologies.[22]
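For illustration, the toy sketch below shows the shape such an outcome audit might take: query the tool repeatedly on behalf of different demographic groups, compare the rates of favorable answers, and flag large disparities for human review. The data, group labels, and the 0.8 ratio threshold are invented for the example; a real audit would require careful sampling, statistical testing, and domain expertise.

```python
# Toy outcome audit: compare how often a tool returns a favorable answer
# across groups. Purely illustrative; thresholds and data are invented.
from collections import defaultdict


def favorable_rate_by_group(results: list[tuple[str, bool]]) -> dict[str, float]:
    """results: (group_label, got_favorable_outcome) pairs from repeated queries."""
    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [favorable, total]
    for group, favorable in results:
        counts[group][0] += int(favorable)
        counts[group][1] += 1
    return {group: fav / total for group, (fav, total) in counts.items()}


def flag_disparity(rates: dict[str, float], ratio_threshold: float = 0.8) -> bool:
    """Flag for human review if any group's rate falls well below the best group's."""
    best = max(rates.values())
    return any(rate / best < ratio_threshold for rate in rates.values())


if __name__ == "__main__":
    sampled = ([("group_a", True)] * 80 + [("group_a", False)] * 20
               + [("group_b", True)] * 55 + [("group_b", False)] * 45)
    rates = favorable_rate_by_group(sampled)
    print(rates, flag_disparity(rates))
```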
By the same token, other methods can also help generate useful data, as well as a greater sense of trust in AI tools. Teaching AI to, in effect, explain itself is one worthy – though possibly daunting – approach.[23] Allowing users to consensually share multiple outputs from AI-enabled tools, not just those tailored to their own interests, is another, hopefully simpler, answer to these questions.
[1]See, e.g., “News Feed: My posts aren't appearing on my friends' news feeds,” https://www.facebook.com/help/community/question/?id=642287352507890; “My posts aren't appearing in News Feed.” https://www.facebook.com/help/146396262099283; “Who can see a story in their News Feed about something I share?,” https://www.facebook.com/help/225435534134033?helpref=related; “Types of Tweets and where they appear,” https://support.twitter.com/articles/119138; “Why Aren’t My Tweets Getting Liked or Retweeted?,” http://follows.com/blog/2016/03/tweets-arent-liked-retweeted.
[2]In addition to addressing common concerns with social media, this approach better aligns social media platforms with the “conduit” immunity they enjoy as an “interactive computer service,” under 47 USC § 230 – where these companies are rightly not treated as the publishers or speakers of content they host.
[3]See, Farhad Manjoo, “Can Facebook Fix Its Own Worst Bug?,” New York Times Magazine (25 April 2017), https://www.nytimes.com/2017/04/25/magazine/can-facebook-fix-its-own-worst-bug.html?
[4]https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634; see also https://www.theverge.com/2017/5/24/15685930/facebook-trending-topics-news-update-mobile (“Facebook announced an update to its Trending Topics section that will make the list of news stories easier to parse on mobile with a more diverse list of sources”); https://www.bloomberg.com/news/features/2017-05-25/how-facebook-can-fight-the-hate (Facebook distributing grants and advertising credits to anti-extremist organizations, with the goal of helping activists produce counternarrative and antihate campaigns).
[5]See, e.g., Sue Halpern, “How He Used Facebook to Win,” New York Review of Books, 8 June 2017 issue; online at http://bit.ly/2rMkhPz
[6]See danah boyd, “Hacking the Attention Economy,” Data & Society blog, https://points.datasociety.net/hacking-the-attention-economy-9fa1daca7a37
[7]“News Feed Settings,” https://www.facebook.com/help/964154640320617/?helpref=hc_fnav; see also https://support.twitter.com/articles/164083#settings (instructions on configuring Twitter timeline settings).
[8] One can, of course, tag a specific person in a post to indicate you want them to see or interact with it, but that feature logically seems appropriate to limit to cases where a post is of particular interest to that person, involves a photo of that person, etc. It would reasonably be seen as a breach of etiquette to post an item and tag several dozen friends simply to ensure it organically appears in their news feed.
[9]Thus, for example, only advertiser “Pages” can “boost” content; that feature is not available from personal profile pages. https://www.facebook.com/business/help/347839548598012?helpref=related
[10]https://www.rsph.org.uk/our-work/policy/social-media-and-young-people-s-mental-health-and-wellbeing.html
[11]See, e.g., http://ideas.ted.com/why-were-so-attached-to-our-own-creations-even-when-theyre-ugly/ (“we are strongly motivated by the need for recognition, a sense of accomplishment, and feeling of creation”).
[12]E.g., Abraham Maslow put “self-actualization” at the pinnacle of his analysis of human motivations. https://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs
[13]See “The Empowerment that Comes with Taking Responsibility and Taking Control,” Independent Women’s Forum blog, http://www.iwf.org/blog/2803771/The-Empowerment-that-Comes-with-Taking-Responsibility-and-Taking-Control#sthash.VDQtmsYK.dpuf
[14]“Stephen Fry: Facebook and other platforms should be classed as publishers,” The Guardian, 28 May 2017, http://bit.ly/2rd5dK9.
[15]47 USC § 230. A federal court recently applied Section 230 to dismiss a lawsuit against Facebook charging that it had supported terrorist organizations by allowing such groups to use its platform. The court agreed that Section 230 applied, and barred lawsuits against an interactive computer platform for harmful content posted by others, or for decisions as to which content to remove. http://bit.ly/2qFagCu
[16]Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 (“E-Commerce Directive”), Articles 12-15 http://bit.ly/2qsXioH. There is, notably, controversy as to what extent the E-Commerce Directive’s protections against liability for conduits and hosts fully apply to Facebook, especially in particular cases involving content moderation by Facebook. See http://eulawanalysis.blogspot.com/2017/01/when-is-facebook-liable-for-illegal.html
[17]See “Rumors and Health Care Reform: Experiments in Political Misinformation,” British Journal of Political Science, Volume 47, Issue 2, April 2017, pp. 241-262; online at: http://bit.ly/2rM2b0c
[18]See, e.g., https://medium.com/berkman-klein-center/some-starting-questions-around-pervasive-autonomous-systems-277b32aaa015
[19]Shannon Vallor, Technology and the Virtues (Oxford University Press, 2016), at 31.
[20]See Urs Gasser, “Autonomous Systems — Is it time for empirical research?,” in Medium, May 17, 2017 https://medium.com/mit-media-lab/ai-ethics-and-governance-is-it-time-for-empirical-research-7d566316ebf8
[21]See, e.g., Latanya Sweeney, “Discrimination in Online Ad Delivery,” January 28, 2013, online at: https://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf
[22]See, e.g., Pew Research, “Code-Dependent: Pros and Cons of the Algorithm Age,” online at: http://www.pewinternet.org/2017/02/08/code-dependent-pros-and-cons-of-the-algorithm-age/; Frank Pasquale, The Black Box Society (Harvard University Press, 2015), http://bit.ly/1Pv3XDD; “Scalable Approaches to Transparency and Accountability in Decision-making Algorithms,” Commissioner Julie Brill, Federal Trade Commission (February 28, 2015); online at: https://www.ftc.gov/public-statements/2015/02/scalable-approaches-transparency-accountability-decisionmaking-algorithms.
Questions have also been raised in the competition law context with respect to pricing algorithms. See, e.g., Terrell McSweeney, “Algorithms and Coordinated Effects,” Remarks at University of Oxford Center for Competition Law and Policy, May 22, 2017; online at https://www.ftc.gov/public-statements/2017/05/algorithms-coordinated-effects.
[23]See Slate, “Artificial Intelligence Owes You an Explanation,” Future Tense May 8, 2017, http://slate.me/2pWzyvn