How do you make large scale harm visible on the individual level?

Teams that build security- and privacy-focused tools like Brave Browser, Tor Browser, Signal, and Telegram focus on usability and feature parity in an effort to more effectively win users away from Google Chrome, iMessage, Google Hangouts, WhatsApp, and other incumbents.

Do people fail to adopt these more secure and private tools because they aren’t as usable as what they’re already using, or because it requires too much effort to switch?

I mean, of course it’s both. You need to make the effort to switch, and in order to switch you need viable alternatives to switch to. And that’s where the usability and feature parity of Brave Browser and Signal compared with Google Chrome and WhatsApp come in. 

But if we’re living in a world where feature parity and usability are a foregone conclusion, and we are, then what? What needs to happen to drive a large-scale shift away from data-hungry, privacy-invading tools and toward tools that collect no data and aggressively encrypt our messages?

To me, that’s where it becomes clear that the amorphous effects of widespread data collection, though well-chronicled in blog posts, books, and shows like The Social Dilemma, don’t often lead to real change unless a personal threat is felt.

Marginalized and surveilled communities adopt tools like Signal or FireChat to protect their privacy and security, because their privacy and security are actively under threat. For everyone else, privacy and security are still under threat, but indirectly. Lacking a clear event (or series of events) tied to direct personal harm, people don’t often abandon a platform.

If I don’t see how using Google Chrome, YouTube, Facebook, Instagram, Twitter, and other sites and tools causes direct harm to me, I have little incentive to make a change, despite the evidence of aggregate harm to society: amplified societal divisions, active disinformation campaigns, and more.

Essays that expose the “dark side” of social media and algorithms attempt to identify distinct personal harms caused by these systems. James Bridle’s essay on YouTube, Something is wrong on the internet (2017); Adrian Chen’s essay about what social media content moderators experience, The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed (2014); and Casey Newton’s essay about the same, The secret lives of Facebook moderators in America (2019), gain widespread attention for the problems they expose, but don’t necessarily lead people to abandon the platforms, nor lead the platforms themselves to take action.

These theorists and journalists are seriously attempting to make the large-scale harm caused by these platforms visible on the individual level, but nothing is changing. Is it the fault of the individual, or the platform?

Spoilers: it’s always “both”. And here we can draw an analogy to climate change. As with climate change, the effects of these platforms and companies are so amorphous that it’s possible, for a time, to point to alternate explanations: dramatically worsening wildfires in the Western United States are blamed on poor fire policy, and worsening tropical storms on weaker wind patterns (or stronger ones? I don’t study wind).

One could argue that perhaps climate change is the result of mechanization and industrialization in general, and that it would be happening without the companies currently contributing to it. Perhaps the dark side of the internet is just the dark side of reality, no worse than what would exist without these platforms and companies.

The truth is, it’s both. We live in a “yes, and” world. Climate change is causing, contributing to, and intensifying the effects of wildfires and the strength and frequency of tropical storms and hurricanes. Platform algorithms are causing, contributing to, and intensifying the effects of misinformation campaigns and violence on social media and the internet. 

And much like the companies that contributed to climate change knew what was happening, as reported in The Guardian’s Shell and Exxon’s secret 1980s climate change warnings, Facebook, Google, and others know that their algorithms are actively contributing to societal harm, but the companies aren’t doing enough about it.

So what’s next? 

  • Do we continue to attempt to make the individual feel the pain of the community in an effort to cause individual change? 
  • Do we use laws and policy to constrain the use of algorithms for specific purposes, in an effort to regulate the effects away?
  • Do we build alternate tools with the same functionality and take users away from the harm-causing tools? 
  • Do we use our power as laborers to strike against the harm caused by the tools that we build? 

With climate change (and with data security and privacy, too), we’re already taking all of these approaches. What else might be out there? What else can we do to drive change?

Torture, Ownership, and Privacy

The Senate Intelligence Committee released hundreds of pages (soon available as a book) detailing acts of torture committed by the CIA.

Continue reading

Identity on the Internet

Anonymity is valuable to the structure of the Internet, but as personal identity becomes more fluid, the reputation and identifiability of someone’s online presence become increasingly valuable. Jobs and academic applications rely on user-submitted references, but many also turn to your social media presence or your search results to gauge reputation. Privacy by obscurity, as records are digitized and indexed, is no longer as viable.

But there is no consistent form of identification across the web. Each service relies on its own username as an identifier, each with its own character limits, and your ability to hold the same username across services depends on both the uniqueness of your username and the date you joined each service. But are usernames outdated? A self-selected identifier, varying from service to service and format to format? As Mat Honan puts it, “One of the best things about the online world is how it lets us be whoever we want to be. We shouldn’t have to sacrifice that just because someone else got there first.”

The advantage of a username is that, at least within a service, it “refers unambiguously to a particular person”. That works fine if you know the person’s username, but often you may only know their name. Luckily, with services like Facebook, a person’s unique identifier is their name, provided they haven’t pseudonymized it. Once you have connected with that person, you expect that (within the relevant online service) typing in their name will return precisely the person you were expecting to find. The difficulty with this system is finding out another person’s username, and confirming that the person with that name online is really the person you’re looking for.
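
To make that difficulty concrete, here’s a minimal sketch in Python (my own illustration; the Directory class and its fields are hypothetical, not any real service’s schema) of why a username lookup is unambiguous within a service while a name lookup is not:

# Hypothetical user directory: usernames are enforced unique,
# display names are not.
class Directory:
    def __init__(self):
        self.by_username = {}  # username -> profile (unique key)
        self.by_name = {}      # display name -> list of profiles

    def register(self, username, name):
        if username in self.by_username:
            # "someone else got there first"
            raise ValueError(f"username {username!r} is taken")
        profile = {"username": username, "name": name}
        self.by_username[username] = profile
        self.by_name.setdefault(name, []).append(profile)
        return profile

    def find_by_username(self, username):
        # Unambiguous: at most one match per service.
        return self.by_username.get(username)

    def find_by_name(self, name):
        # Ambiguous: any number of people may share a name.
        return self.by_name.get(name, [])

d = Directory()
d.register("mat", "Mat Honan")
d.register("mhonan2", "Mat Honan")
print(d.find_by_username("mat"))         # exactly one profile
print(len(d.find_by_name("Mat Honan")))  # 2: which one did you mean?

Within a single service the username is a unique key, so registration fails when someone else got there first; a display name, by contrast, can only ever resolve to a list of candidates you then have to disambiguate yourself.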

Continue reading

Bitcoin, Security, and Photography

nananananananananananana BITCOINNNN

I had to talk about it eventually, and Thursday’s news was a good impetus. Newsweek had a big “scoop” potentially unmasking the founder of Bitcoin. The magazine saved this story for the cover of its return-to-print issue. The story features stalking masquerading as investigative journalism: the author tracked down this man through national records, then tracked his interests to a model train forum, where she emailed him purporting to be interested in trains before asking about Bitcoin (at which point he stopped responding).
Then she tracked down his home and family members, and interviewed them extensively about the man and Bitcoin. She finally paid him a visit at his home, and instead of answering the door he called the cops. This surprised her. Read the article in full if you’d like to know more about the lengths some people will go to find people who don’t want to be found (and who haven’t done anything wrong). (After some sushi and a car chase, the man himself claims he is not involved with Bitcoin.)

Continue reading

Some notes on surveillance and national security

Jill Lepore, in her excellent examination of the current state of surveillance that we languish in, made this remark in reference to Jeremy Bentham’s essay On Publicity:

“Without publicity, no good is permanent: under the auspices of publicity, no evil can continue.” He [Bentham] urged, for instance, that members of the public be allowed into the legislature, and that the debates held there be published. The principal defense for keeping the proceedings of government private—the position advocated by those Bentham called “the partisans of mystery”—was that the people are too ignorant to judge their rulers.

To paraphrase: according to Bentham, the prevailing view had been that citizens shouldn’t know what their government was doing, because they wouldn’t be smart enough to understand and evaluate the decisions made by their leaders.

Continue reading

A Self-Driving Car “Revolution”?

The potential benefits and issues of self-driving cars have been addressed by many magazines, from The Economist and The Atlantic to Business Insider and Forbes, and more recently acknowledged by highway safety authorities in the USA. A hot-button issue as of late, using autonomous vehicular control to reduce traffic fatalities and injuries is an ideal that should be encouraged, but it can’t be achieved without addressing a variety of concerns. Generational trends, liability, security, and class (and cost) issues could doom a future of fully autonomous vehicles before it begins.

Naturally, to evaluate the future of this technology, we must first understand how self-driving cars work. Two notable elements of operating a self-driving car are the abundance of sensors involved and the integral role of programming the “right” way to drive. As quoted in the article:

Sometimes, however, the car has to be more “aggressive.” When going through a four-way intersection, for example, it yields to other vehicles based on road rules; but if other cars don’t reciprocate, it advances a bit to show to the other drivers its intention. Without programming that kind of behavior, Urmson said, it would be impossible for the robot car to drive in the real world.
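
The behavior described here is essentially a small decision loop: yield by default, and escalate to a visible nudge when other drivers don’t reciprocate. Below is a minimal sketch in Python of that yield-then-advance logic (my own hypothetical illustration; the SimCar interface is invented and resembles nothing in a real vehicle’s control stack):

import time

class SimCar:
    """Toy stand-in for a vehicle interface (hypothetical)."""
    def __init__(self, rounds_blocked=30):
        self.rounds_blocked = rounds_blocked

    def stop(self):
        print("stopping at the line")

    def sees_vehicle_with_right_of_way(self):
        # Simulate other cars lingering at the intersection.
        self.rounds_blocked -= 1
        return self.rounds_blocked > 0

    def creep_forward(self, meters):
        print(f"creeping {meters} m forward to signal intent")

    def proceed(self):
        print("proceeding through the intersection")

def negotiate_four_way_stop(car, patience_s=0.05):
    """Yield per road rules; if others don't reciprocate,
    advance a bit to show intention, then re-evaluate."""
    car.stop()
    deadline = time.monotonic() + patience_s
    while car.sees_vehicle_with_right_of_way():
        if time.monotonic() > deadline:
            car.creep_forward(meters=0.5)
            deadline = time.monotonic() + patience_s
        time.sleep(0.01)
    car.proceed()

negotiate_four_way_stop(SimCar())

The escalation step is the point: without something like creep_forward, a strictly rule-following car would deadlock whenever human drivers hesitate, which is exactly the problem Urmson describes.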

Continue reading