Chapter 16. We’ve Been Bought by the Digital Revolution

  • Matt Popke
  • Sarah Wambold

Authors’ note: In November 2018, a group of approximately 25 people, including the authors, came together for a roundtable discussion called “Computer Lib/Nightmare Machine: Technology’s Impact on Cultural Communities” as part of the Museum Computer Network annual conference. This essay builds on that conversation.

The pervasive network of information that we connect to every day, often from portable devices that we carry with us every waking minute of our lives, is watching us.

We freely post moments from our days on social media, often giving very little thought to the distance that information travels, or the transfer of ownership that takes place when we hit “Share.” We wear robots on our wrists to count our steps or monitor our heart rates, without thinking about the trail of habits we unwittingly expose.

We tell ourselves these tools help us to build communities or help us achieve our goals. We willingly trade our data for the promise of convenience and connection.

Museums make similar tradeoffs. Institutions invest money, time, and effort in, and consign their content to, tools that promise organization, distribution, and reach. When the financial stakes are high, they do their due diligence: gathering stakeholders, assessing platforms and frameworks, modeling workflows, researching peers. When the tools are free, there is less scrutiny.

In “Are We Giving Up Too Much?” Koven Smith calls for a moral and ethical reckoning with the digital systems museums use, citing the consequences of corporate values at odds with museum missions.1 He asserts that, until recently, most museums rarely examined a company’s ethics or paused to understand the nuanced outcomes these complex systems trigger.

While the conversation about technological ethics may often be overlooked in the museum sector, it has been going on for decades beyond our walls. When Whitfield Diffie testified before the U.S. House of Representatives regarding proposed cryptography legislation in 1993, he warned against creating a ubiquitous network of surveillance that would give the U.S. government the power to listen in on all private communications within and across its borders.

According to Diffie’s testimony, “No right of private conversation was enumerated in the constitution. I don’t suppose it occurred to anyone at the time that it could be prevented. Now, however, we are on the verge of a world in which electronic communication is both so good and so inexpensive that intimate business and personal relationships will flourish between parties who can at most occasionally afford the luxury of traveling to visit each other. If we do not accept the right of these people to protect the privacy of their communication, we take a long step in the direction of a world in which privacy will belong only to the rich.”2

In 1976 Diffie, along with Martin Hellman, developed the Diffie-Hellman key exchange algorithm that secures millions of internet services today. In the 1990s Diffie joined the activist movement to prevent the Clipper Chip surveillance initiative from becoming the law of the land. The idea of granting our government the power to spy on our private lives was considered unconstitutional and a dangerous overreach.
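
The exchange itself is simple enough to sketch in a few lines. The toy numbers below are illustrative assumptions only; real systems use primes thousands of bits long and vetted cryptographic libraries, never hand-rolled code.

```python
# A toy sketch of the Diffie-Hellman key exchange (1976).
# The numbers here are far too small for real use.
import secrets

p = 4294967291   # a small public prime modulus (toy value)
g = 5            # a public generator (toy value)

# Each party keeps a private exponent and publishes only g^x mod p.
alice_private = secrets.randbelow(p - 2) + 1
bob_private = secrets.randbelow(p - 2) + 1
alice_public = pow(g, alice_private, p)
bob_public = pow(g, bob_private, p)

# Each side combines the other's public value with its own private exponent
# and arrives at the same shared secret, which was never transmitted.
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)
assert alice_shared == bob_shared
```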

But we live in a world now where private companies can surveil us almost as deeply as was proposed by any legislation. Every website that we connect to keeps a record, at least temporarily, of every connection. This, in itself, is not so bad. Most web servers cannot relate those connection records to individual users or to a larger pattern of behavior. But there is a growing number of servers that can.
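
For concreteness, here is roughly what a single connection record looks like, using the widely deployed “combined” log format as an example. The sample line and the parsing sketch are illustrative assumptions, not any particular institution’s data.

```python
# A sketch of what a typical web server records for every request, using the
# common "combined" log format. The sample line is invented for illustration.
import re

SAMPLE_LINE = (
    '203.0.113.7 - - [10/Feb/2019:14:01:22 -0700] '
    '"GET /exhibitions/current HTTP/1.1" 200 5123 '
    '"https://museum.example/visit" "Mozilla/5.0 (X11; Linux x86_64)"'
)

COMBINED = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

print(COMBINED.match(SAMPLE_LINE).groupdict())
# One line like this is just an address, a timestamp, and a URL. It becomes
# surveillance only when a single party can join such records across many
# sites and tie them to an identity.
```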

Facebook, like other social networks, knows something about the identity of every one of its users, and it records all of their activity on its network. The theory is that the more Facebook knows about your habits and behaviors, the better it can predict which kinds of advertisements you are most likely to respond to favorably, a targeting capability it then sells to companies and political organizations.

This quest to better understand its users extends beyond Facebook’s own services. In February 2009 Facebook introduced the “Like” button, which it later offered to other websites as an embeddable widget. Including the Like button on a web page makes it possible for a Facebook user to share that page on their social feed without needing to visit Facebook directly. The idea behind the Like button is to make sharing easier and increase the likelihood of web page content being shared on Facebook. But for Facebook users who have not opted out, merely loading the Like button in a web browser, without ever interacting with it, is enough to tell Facebook what pages they are looking at.3 Moreover, Facebook tracks the behavior of people who do not have Facebook accounts, both on the Facebook site and through social engagement widgets such as the Like button.4
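
The mechanism is worth spelling out. When a page embeds a third-party widget, the visitor’s browser fetches that widget from the third party’s servers, and that request carries both the address of the embedding page and any identifying cookie the third party set on an earlier visit. The sketch below is illustrative only: the domain, widget URL, and cookie name are hypothetical stand-ins, not Facebook’s actual endpoints.

```python
# Illustrative sketch of the request a browser makes when a page embeds a
# third-party widget such as a "Like" button. All names below are
# hypothetical; the point is what travels with the request.
widget_request = {
    "url": (
        "https://social.example/widgets/like"
        "?href=https://museum.example/exhibitions/current"
    ),
    "headers": {
        # The page the widget is embedded in:
        "Referer": "https://museum.example/exhibitions/current",
        # An identifier the widget provider set on some earlier visit:
        "Cookie": "visitor_id=abc123",
    },
}

# The widget provider can now record that visitor abc123 viewed this page,
# whether or not the button was ever clicked.
print(widget_request["headers"]["Referer"])
```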

Facebook offered website owners a poisoned apple that was difficult to resist. Website publishers could have easy access to millions (now billions) of Facebook’s users through “likes,” and all it cost them was giving Facebook a record of every visit to their website. The Like button was widely adopted and then widely copied by other social networks with little thought to the potential consequences.

No one knows who will ultimately benefit from the data that is collected because the advertising can be sold to anyone: companies, political campaigns, government agencies. Should museums be comfortable participating in this vast privately owned network of surveillance? Should museums trust Facebook and other companies to use the data they collect responsibly? What have they done to earn that trust?

The End of Optimism and the Cost of Free

It’s easy to single out Facebook. The last several years have seen a cascade of revelations about the company’s practices that have made headlines around the world. But the problem is not limited to one company. Providing free services to users who are targeted with ads based on the surveillance of their activities has become a common business model for many of the most successful companies in the tech sector.

In an article for New York Magazine titled “An Apology for the Internet—From the Architects Who Built It,” journalist Noah Kulwin writes: “To keep the internet free—while becoming richer, faster, than anyone in history—the technological elite needed something to attract billions of users to the ads they were selling. And that something, it turns out, was outrage. As Jaron Lanier, a pioneer in virtual reality, points out, anger is the emotion most effective at driving ‘engagement’—which also makes it, in a market for attention, the most profitable one.”5

Lanier continues, “What started out as advertising morphed into continuous behavior modification on a mass basis, with everyone under surveillance by their devices and receiving calculated stimulus to modify them.”6

New research on cultural participation and the digital divide by Sabina Mihelj, Adrian Leguina, and John Downey argues that even as more communities come online, the gap in cultural participation is not closing: “Even if digital media become equally accessible to all socio-demographic groups, this does not mean that people from traditionally underrepresented groups will start using them to access publicly funded cultural content, even if such content is made freely available online.”7

Their research looks at how the first-level digital divide (lack of access to the internet) compounds with the second-level digital divide (gaps in skills and knowledge), while acknowledging the impact of market-driven incentives at the foundations of online tools. “A large majority of search engines and recommendation systems that operate in this environment, and which shape citizens’ digital cultural diets, are driven by commercial considerations rather than public interests. As such, they operate on the principle of market segmentation, seeking to tailor recommendations to specific niche markets rather than aiming for universal access.”8

Kulwin points out that, in an online context, the effects of market segmentation are antithetical to the pursuit of creating community.

“The advertising model of the internet was different from anything that came before. Whatever you might say about broadcast advertising, it drew you into a kind of community, even if it was a community of consumers. The culture of the social-media era, by contrast, doesn’t draw you anywhere. It meets you exactly where you are, with your preferences and prejudices—at least as best as an algorithm can intuit them. ‘Microtargeting’ is nothing more than a fancy term for social atomization—a business logic that promises community while promoting its opposite.”9

A Call for Humanity

What should we, as museum technologists, do? There is value in our current “best practices.” We accept that marketing and promotion are necessary tasks for our organizations, and we acknowledge that these activities need to move online as much as anything else. Being able to evaluate those efforts helps us steer them in the most effective directions.

It is difficult to tell stakeholders in non-technical departments that we want to take valuable and useful tools away from them. That task is harder when the tools are ones that we may have approved of or even advocated for in the past. But that should not stop us from reevaluating our current practices.

The first step could be one of outreach and education. We need to speak openly with our museum colleagues about the ethical issues we face as institutions in order to define a common set of values that can guide our decisions.

Marina Gorbis, executive director of the Institute for the Future, has said: “We need technologists who understand history, who understand economics, who are in conversations with philosophers. We need to have this conversation because our technologists are no longer just developing apps, they’re developing political and economic systems.”10

Gorbis’ call for technologists to listen to and learn from experts in the humanities is one that museums can respond to. Museums are fortunate when compared to tech startups in that they already have humanities professionals on staff. Museums are uniquely prepared to establish an open dialogue between experts that the rest of the technology sector could learn from.

Once we have defined a set of shared institutional values, we, as museum technologists, can use those values to evaluate the tools available to us and determine how well they fit our institutional principles. When we find that tools we are already using do not fit, we can look for alternatives. Ethical alternatives to some of our tools likely do not yet exist, but they never will if we do not start asking for them. We may have to make tradeoffs in the short term while we work toward a more comprehensive end goal.

We may also be surprised by the relatively low sophistication of the free, surveillance-supported tools we use. Many of us have relied on third-party tools for so long that we have never investigated what level of effort they actually represent. We may be pleasantly surprised to discover that much of what we use these tools for can be easily replaced.
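
As one example of how modest a replacement can be, a basic page-view report, often the main thing a third-party analytics script is asked to provide, can be produced from the server’s own access log without cookies or stored IP addresses. The sketch below is a minimal illustration; the file name access.log and the combined log format shown earlier are assumptions.

```python
# A minimal sketch of first-party analytics: page views per path, counted
# from the server's own access log, with no cookies and no stored IPs.
import re
from collections import Counter

GET_LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "GET (?P<path>\S+) [^"]*" 2\d\d')

views = Counter()
with open("access.log", encoding="utf-8") as log:
    for line in log:
        match = GET_LINE.match(line)
        if match:
            views[match.group("path")] += 1

# Print the ten most-viewed pages.
for path, count in views.most_common(10):
    print(f"{count:6d}  {path}")
```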

Museums are also well positioned to include the voices of our constituent communities in this dialogue. And we should make efforts to include those members of the community whose voices have not typically been heard. The negative effects of networked technology have an outsized impact on communities of color, those with disabilities, and those with low incomes.11

As we expand our efforts to include these underserved communities in our programming we must take care to not contribute to systems that exploit them. The ethical concerns we face in our technology departments—of which participation in surveillance is only one—should be considered a component of the larger systemic issues that our institutions are beginning to address as we seek to become more inclusive spaces.

As institutions of public trust, museums need to take a stance on the issue of privacy and surveillance. If we don’t engage with the issue now, we may find ourselves presenting the concept of privacy in an exhibition, as a cultural artifact of the distant past.

Notes


  1. Koven Smith, “Are We Giving Up Too Much?,” MUSEUM, no. 1 (January 2019): 12-15.
  2. Whitfield Diffie, “The Impact of a Secret Cryptographic Standard on Encryption, Privacy, Law Enforcement and Technology,” Transcript of Testimony before Congress, May 11, 1993, https://www.epic.org/crypto/clipper/diffie_testimony.html (accessed February 10, 2019).
  3. Amir Efrati, “‘Like’ Button Follows Web Users,” The Wall Street Journal, May 18, 2011, https://www.wsj.com/articles/SB10001424052748704281504576329441432995616 (accessed February 10, 2019).
  4. Kurt Wagner, “This is how Facebook collects data on you even if you don’t have an account,” recode, April 20, 2018, https://www.recode.net/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (accessed February 10, 2019).
  5. Noah Kulwin, “An Apology for the Internet—From the Architects Who Built It,” New York Magazine, April 13, 2018, http://nymag.com/intelligencer/2018/04/an-apology-for-the-internet-from-the-people-who-built-it.html (accessed February 10, 2019).
  6. Ibid.
  7. Sabina Mihelj, Adrian Leguina, and John Downey, “Culture Is Digital: Cultural Participation, Diversity and the Digital Divide,” New Media & Society, January 20, 2019, doi:10.1177/1461444818822816.
  8. Ibid.
  9. Noah Kulwin, “An Apology for the Internet—From the Architects Who Built It.”
  10. Heather Kelly, “AI Is Hurting People of Color and the Poor. Experts Want to Fix That,” CNN Money, July 23, 2018, https://money.cnn.com/2018/07/23/technology/ai-bias-future/index.html (accessed February 10, 2019).
  11. Ibid.

Bibliography