How do you make large scale harm visible on the individual level?

Teams that build security and privacy tools like Brave Browser, Tor Browser, Signal, and Telegram focus on usability and feature parity in an effort to win users over from Google Chrome, iMessage, Google Hangouts, WhatsApp, and others.

Do people fail to adopt these more secure and private tools because they aren’t as usable as what they’re already using, or because it requires too much effort to switch?

I mean, of course it’s both. You need to make the effort to switch, and in order to switch you need viable alternatives to switch to. And that’s where the usability and feature parity of Brave Browser and Signal compared with Google Chrome and WhatsApp come in. 

But if we’re living in a world where feature parity and usability are a foregone conclusion, and we are, then what? What needs to happen to drive a large-scale shift away from data-consuming and privacy-invading tools and toward those that don’t collect data and aggressively encrypt our messages? 

To me, that’s where it becomes clear that the amorphous effects of widespread data collection—though well chronicled in blog posts, books, and shows like The Social Dilemma—don’t often lead to real change unless a personal threat is felt.

Marginalized and surveilled communities adopt tools like Signal or FireChat because their privacy and security are actively under threat. For others, their privacy and security are still under threat, but only indirectly. Lacking a single clear event (or a series of them) tied to direct personal harm, people don’t often abandon a platform.

If I don’t see how using Google Chrome, YouTube, Facebook, Instagram, Twitter, and other sites and tools causes direct harm to me, I have little incentive to make a change, despite the evidence of aggregate harm to society—amplified societal divisions, active disinformation campaigns, and more.

Essays that expose the “dark side” of social media and algorithms attempt to identify distinct personal harms caused by these systems. James Bridle’s essay on YouTube, Something is wrong on the internet (2017), Adrian Chen’s essay about what social media content moderators experience, The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed (2014), and Casey Newton’s essay about the same, The secret lives of Facebook moderators in America (2019), gain widespread attention for the problems they expose, but don’t necessarily lead people to abandon the platforms, nor lead the platforms themselves to take action.

These theorists and journalists are making a serious attempt to make large-scale harm caused by these platforms visible on an individual level, but nothing is changing. Is it the fault of the individual, or the platform?

Spoilers: it’s always “both”. Here we can draw an analogy to climate change. As with climate change, the effects resulting from these platforms and companies are so amorphous that it’s possible to point to alternate explanations—for a time. Dramatically worsening wildfires in the Western United States are a result of poor fire policy; worsening tropical storms are a result of weaker wind patterns (or stronger ones? I don’t study wind).

One could argue that perhaps climate change is the result of mechanization and industrialization in general, and it would be happening without the companies currently contributing to it. Perhaps the dark side of the internet is just the dark side of reality, and nothing worse than would exist without these platforms and companies contributing. 

The truth is, it’s both. We live in a “yes, and” world. Climate change is causing, contributing to, and intensifying the effects of wildfires and the strength and frequency of tropical storms and hurricanes. Platform algorithms are causing, contributing to, and intensifying the effects of misinformation campaigns and violence on social media and the internet. 

And much like the companies that contributed to climate change knew what was happening (as reported in The Guardian: Shell and Exxon’s secret 1980s climate change warnings), Facebook, Google, and others know that their algorithms are actively contributing to societal harm—but the companies aren’t doing enough about it.

So what’s next? 

  • Do we continue to attempt to make the individual feel the pain of the community in an effort to cause individual change? 
  • Do we use laws and policy to constrain the use of algorithms for specific purposes, in an effort to regulate the effects away?
  • Do we build alternate tools with the same functionality and take users away from the harm-causing tools? 
  • Do we use our power as laborers to strike against the harm caused by the tools that we build? 

With climate change (and with data security and privacy too), we’re already taking all of these approaches. What else might be out there? What else can we do to drive change?

Engaging with San Francisco history as a newcomer

I moved to San Francisco from the Midwest a few years ago, and I’ve been missing a strong sense of history ever since. I’ve been to a few events in an attempt to learn more about my new home, such as a Dolores Park history day, or a history-relevant event in the Mission as part of Litcrawl, but I struggled to absorb a history of the city that went beyond “gold rush, earthquake, tech boom, bust, boom”.

But last week I was at the library and saw an event called Vanished Waters, held in conjunction with Take Part SF and the display of the Bay Model throughout SF public libraries. It was about Mission Bay history, so I skipped my regular workout to attend. It was well worth it!

Vanished Waters is also a book, which gave the talk its loose structure. The talk was given by Chris Carlsson, an expert on San Francisco history and an engaging speaker. He co-founded Shaping SF, helping to maintain a digital archive of the city’s past.

Among my favorite facts from the talk: in 1852, Market Street ended where 3rd Street is today, in an 80-foot-tall sand dune.

SoMa was really hilly and marshy, but then some dude with a steam shovel was like “sup, let me move that sand for you” and also “sup, let me help you fill in this lot that you bought that is literally just water”. That’s my paraphrasing; the actual details can be read on Found SF.

The whole idea of filling in the San Francisco Bay is hard to imagine now because the Bay isn’t polluted, but if it were a stinky, putrid mess full of garbage, it’s easier to imagine that seeming like a good idea. (That plan to fill in the Bay is why the SF Bay Model was built.)

My favorite part of the talk, though, was when Carlsson discussed using this history to inform our present and future decisions. He pointed out that there is a lot of rhetoric in San Francisco about how to build more housing to manage the growth of the city, and about what kinds of development are best suited to accommodating all of the people who move here.

However, there’s not much rhetoric (if any) about staging a managed retreat from climate change. San Francisco is a coastal city, built on top of marshland, sand dunes, or literal landfill. What happens when sea levels begin to rise, or more volatile weather patterns bring bigger storms and potential flooding?

I realized after this talk that some city dwellers love to judge residents of southeastern coastal cities who build or rebuild homes in the path of hurricanes and immediate climate change threats, and yet New York City and San Francisco are both at high risk from sea level rise.

That’s not to mention the earthquake risk in San Francisco. Our current development plans are not necessarily smart, as this article in the New York Times points out.

It’s fascinating to explore what the city used to look like less than 200 years ago, and imagine what it might look like in 2057 in the face of climate change. I lost nearly an hour clicking around the maps on David Rumsey’s website (recommended by Carlsson).

Here’s a map of the 1857 coast, overlaid on modern San Francisco.

This 1869 map of land lots makes it clear just how much of the land sold during that period wasn’t actually land. This essay in Collector’s Weekly covers how that land speculation happened and how it shapes modern real estate in the city.

This exploration all happened because of the talk and the display of the 1938 3D model of San Francisco in the San Francisco Public Libraries. If you want to help find the model a permanent home for display in the city, sign this petition. Just imagine turning this map overlay of what San Francisco looked like in 1938 into a tactile experience.

The model is still on display in SF Public Library branches throughout the city, and you can stay engaged in city history through the San Francisco Department of Memory, the California Historical Society, Shaping SF, and Found SF.

Noise, Medicine, and Music

Here’s what was important this week…

More than you probably ever wanted to know about refrigerators and refrigeration:

“Refrigeration is the invisible backbone on which the world’s food supply depends — and given our climate-changed forecast of more extreme weather events, it may yet prove to be its Achilles’ heel.”

Oh how I wish this had come true:

 “All mechanical fridges work by controlling the vaporisation and condensation of a liquid called a refrigerant. Most fridges today do this control with a special electric-power pump called a compressor, but there’s also the technique of absorption, which is kicked off by a gas-fuelled flame. The fridge’s hum wasn’t inevitable.” 

I have somewhat of an aversion to background humming noises, like those of a refrigerator, a central air system, fluorescent lights, or a washing machine.
