How do you make large scale harm visible on the individual level?

Teams that build security and privacy tools like Brave Browser, Tor Browser, Signal, Telegram, and others focus on usability and feature parity of these tools in an effort to more effectively acquire users from Google Chrome, iMessage, Google Hangouts, WhatsApp, and others. 

Do people fail to adopt these more secure and private tools because they aren’t as usable as what they’re already using, or because it requires too much effort to switch?

I mean, of course it’s both. You need to make the effort to switch, and in order to switch you need viable alternatives to switch to. And that’s where the usability and feature parity of Brave Browser and Signal compared with Google Chrome and WhatsApp come in. 

But if we’re living in a world where feature parity and usability are a foregone conclusion, and we are, then what? What needs to happen to drive a large-scale shift away from data-consuming and privacy-invading tools and toward those that don’t collect data and aggressively encrypt our messages? 

To me, that’s where it becomes clear that the amorphous effects of widespread data collection—though well-chronicled in blog posts, books, and shows like The Social Dilemma—don’t often lead to real change unless a personal threat is felt. 

Marginalized and surveilled communities adopt tools like Signal or FireChat to protect their privacy and security, because their privacy and security are actively under threat. For others, privacy and security are still under threat, but indirectly. Lacking a single clear event (or series of events) tied to direct personal harm, people don’t often abandon a platform. 

If I don’t see how using Google Chrome, YouTube, Facebook, Instagram, Twitter, and other sites and tools causes direct harm to me, I have little incentive to make a change, despite the evidence of aggregate harm to society—amplified societal divisions, active disinformation campaigns, and more. 

Essays that expose the “dark side” of social media and algorithms attempt to identify distinct personal harms caused by these systems. James Bridle’s essay on YouTube, Something is wrong on the internet (2017); Adrian Chen’s essay on what social media content moderators experience, The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed (2014); and Casey Newton’s on the same topic, The secret lives of Facebook moderators in America (2019), gain widespread attention for the problems they expose, but don’t necessarily lead people to abandon the platforms, nor lead the platforms themselves to take action. 

These theorists and journalists are making a serious attempt to make large-scale harm caused by these platforms visible on an individual level, but nothing is changing. Is it the fault of the individual, or the platform?

Spoilers, it’s always “both”. And here we can draw an analogy to climate change too. As with climate change, the effects resulting from these platforms and companies are so amorphous, it’s possible to point to alternate explanations—for a time. Dramatically worsening wildfires in the Western United States are a result of poor fire policy; worsening tropical storms are a result of weaker wind patterns (or stronger ones? I don’t study wind). 

One could argue that perhaps climate change is the result of mechanization and industrialization in general, and that it would be happening without the companies currently contributing to it. Perhaps the dark side of the internet is just the dark side of reality, and nothing worse than what would exist without these platforms and companies contributing. 

The truth is, it’s both. We live in a “yes, and” world. Climate change is causing, contributing to, and intensifying the effects of wildfires and the strength and frequency of tropical storms and hurricanes. Platform algorithms are causing, contributing to, and intensifying the effects of misinformation campaigns and violence on social media and the internet. 

And much like the companies that contributed to climate change knew what was happening (as The Guardian reported in “Shell and Exxon’s secret 1980s climate change warnings”), Facebook, Google, and others know that their algorithms are actively contributing to societal harm—but the companies aren’t doing enough about it. 

So what’s next? 

  • Do we continue to attempt to make the individual feel the pain of the community in an effort to cause individual change? 
  • Do we use laws and policy to constrain the use of algorithms for specific purposes, in an effort to regulate the effects away?
  • Do we build alternate tools with the same functionality and take users away from the harm-causing tools? 
  • Do we use our power as laborers to strike against the harm caused by the tools that we build? 

With climate change (and with data security and privacy, too), we’re already taking all of these approaches. What else might be out there? What else can we do to drive change? 

Prescriptive Design and the Decline of Manuals

Instruction manuals, and instructions in general, are incredibly important. I could be biased, since part of my job involves writing instructions for systems, but really, they’re important!

As this look into the historical importance of manuals makes clear, manuals (and instructions) make professions, tools, and devices accessible to anyone who can read them (which, admittedly, can be a hurdle of its own):

“With no established guild system in place for many of these new professions (printer, navigator, and so on), readers could, with the help of a manual, circumvent years of apprenticeship and change the course of their lives, at least in theory.”

However, as the economy and labor system shifted, manuals did too:

“in the 1980s, the manual began to change. Instead of growing, it began to shrink and even disappear. Instead of mastery, it promised competence.”

And nowadays, manuals are very rarely separate from the devices or systems they seek to explain:

“the help we once sought from a manual is now mostly embedded into the apps we use every day. It could also be crowdsourced, with users contributing Q&As or uploading how-to videos to YouTube, or it could [be] programmed into a weak artificial intelligence such as Siri or Cortana.”


Algorithms, Confidence, and Infrastructure

Every so often the Oxford English Dictionary adds new words. It adds them to its online dictionary far more frequently than to its physical tome, given that a physical dictionary is quite a bit more difficult to update. It released a list of new words yesterday, and while a few are entirely new words (bikeable), others are new definitions of familiar words. The “tumblr definition” of ship is recognized (and boy, is the tumblr community excited about it), as is a definition of thing that accounts for the phrase “is that a thing?”

[Image: a list of web domains that begin with the word important, including their IP addresses]

Daniel Temkin put together an Internet Directory, a scrolling, searchable list of all registered .com domains. 

Ted Striphas was interviewed about the effects of algorithms (such as the ones that determine the order of Google search results, or what shows up in your Facebook news feed) on culture. As he puts it, “The issue may come down to how comfortable people are with these systems drilling down into our daily lives, and even becoming extensions of our bodies.”


The Evolution of Music Listening

Pitchfork recently published a great longform essay on music streaming. It covered the history and present state of music streaming, and brought up a lot of great points. These are my reactions.

The piece discussed how “the ‘omnivore’ is the new model for the music connoisseur, and one’s diversity of listening across the high/low spectrum is now seen as the social signal of refined taste.” It would be interesting to study how this omnivorousness splits across genres, age groups, and affinities. I find myself falling into omnivore status: I’m never able to properly define my music taste by genre, and my musical affinities shift daily, weekly, and monthly, though with common themes.

Also discussed is the cost of music, whether licensing, royalties, or record label advances. Dealing with the cost of music is a difficult matter. I wonder whether I would have been such a voracious consumer of music if I hadn’t grown up with so many free options: the library, the radio, and later, music blogs. Now that I’m older, I make the effort to purchase music when I feel the artist deserves it, but as I distance myself (incidentally, really) from storing music on my computer, that effort becomes less important to expend.
