How do you make large scale harm visible on the individual level?

Teams that build security and privacy tools like Brave Browser, Tor Browser, Signal, and Telegram focus on usability and feature parity in an effort to win users away from Google Chrome, iMessage, Google Hangouts, WhatsApp, and the like. 

Do people fail to adopt these more secure and private tools because they aren’t as usable as what they’re already using, or because it requires too much effort to switch?

I mean, of course it’s both. You need to make the effort to switch, and in order to switch you need viable alternatives to switch to. And that’s where the usability and feature parity of Brave Browser and Signal compared with Google Chrome and WhatsApp come in. 

But if we’re living in a world where feature parity and usability are a foregone conclusion, and we are, then what? What needs to happen to drive a large-scale shift away from data-consuming and privacy-invading tools and toward those that don’t collect data and aggressively encrypt our messages? 

To me, that’s where it becomes clear that the amorphous effects of widespread data collection—though well-chronicled in blog posts, books, and shows like The Social Dilemma—don’t often lead to real change unless a personal threat is felt. 

Marginalized and surveilled communities adopt tools like Signal or FireChat to protect their privacy and security, because their privacy and security are actively under threat. For others, their privacy and security are still under threat, but indirectly. Lacking a single clear event (or series of events) tied to direct personal harm, people don’t often abandon a platform. 

If I don’t see how using Google Chrome, YouTube, Facebook, Instagram, Twitter, and other sites and tools causes direct harm to me, I have little incentive to make a change, despite the evidence of aggregate harm to society—amplified societal divisions, active disinformation campaigns, and more. 

Essays that expose the “dark side” of social media and algorithms attempt to identify distinct personal harms caused by these systems. Essays like James Bridle’s on YouTube, Something is wrong on the internet (2017), Adrian Chen’s on what social media content moderators experience, The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed (2014), and Casey Newton’s on the same subject, The secret lives of Facebook moderators in America (2019), gain widespread attention for the problems they expose, but don’t necessarily lead people to abandon the platforms, nor lead the platforms themselves to take action. 

These theorists and journalists are making a serious attempt to make large-scale harm caused by these platforms visible on an individual level, but nothing is changing. Is it the fault of the individual, or the platform?

Spoilers, it’s always “both”. And here we can draw an analogy to climate change too. As with climate change, the effects resulting from these platforms and companies are so amorphous that it’s possible to point to alternate explanations—for a time. Dramatically worsening wildfires in the Western United States get blamed on poor fire policy; worsening tropical storms on weaker wind patterns (or stronger ones? I don’t study wind). 

One could argue that perhaps climate change is the result of mechanization and industrialization in general, and it would be happening without the companies currently contributing to it. Perhaps the dark side of the internet is just the dark side of reality, and nothing worse than would exist without these platforms and companies contributing. 

The truth is, it’s both. We live in a “yes, and” world. Climate change is causing, contributing to, and intensifying the effects of wildfires and the strength and frequency of tropical storms and hurricanes. Platform algorithms are causing, contributing to, and intensifying the effects of misinformation campaigns and violence on social media and the internet. 

And much like the companies that contributed to climate change knew what was happening, as reported in The Guardian: Shell and Exxon’s secret 1980s climate change warnings, Facebook, Google, and others know that their algorithms are actively contributing to societal harm—but the companies aren’t doing enough about it. 

So what’s next? 

  • Do we continue to attempt to make the individual feel the pain of the community in an effort to cause individual change? 
  • Do we use laws and policy to constrain the use of algorithms for specific purposes, in an effort to regulate the effects away?
  • Do we build alternate tools with the same functionality and take users away from the harm-causing tools? 
  • Do we use our power as laborers to strike against the harm caused by the tools that we build? 

With climate change (and with data security and privacy too), we’re already taking all of these approaches. What else might be out there? What else can we do to drive change? 

Where Scanning the Internet Gets You

From a while back, Brian Krebs talks to three researchers at the University of Michigan about their ZMap tool, an efficient and comprehensive way to scan the Internet. They’ve recently built a search engine called Censys that searches across the daily data collections from their ZMap scans.

From Krebs’ interview with the researchers (Zakir Durumeric, Eric Wustrow, and J. Alex Halderman):

“What we were able to find was by taking the data from these scans and actually doing vulnerability notifications to everybody, we were able to increase patching for the Heartbleed bug by 50 percent. So there was an interesting kind of surprise there, not what you learn from looking at the data, but in terms of what actions do you take from that analysis? And that’s something we’re incredibly interested in: Which is how can we spur progress within the community to improve security, whether that be through vulnerability notification, or helping with configurations.”

Using ZMap allows them to quickly collect this data (compared to other network scanners), but the researchers aren’t just scanning the Internet because they feel like it. They’re taking action based on the scan results—notifying people when their machines are vulnerable to the Heartbleed bug.
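A notification pipeline like the one they describe could be sketched as follows. This is a minimal illustration of the grouping step only, assuming scan results have already been joined with abuse contacts; the tuple layout, field names, and function are my own, not ZMap’s or Censys’s:

```python
from collections import defaultdict

def group_notifications(scan_results):
    """Group vulnerable hosts by abuse contact, so each contact
    receives one notification listing all of their affected hosts.

    scan_results: iterable of (host, vulnerable, contact) tuples.
    Returns a dict mapping contact -> sorted list of vulnerable hosts.
    """
    by_contact = defaultdict(list)
    for host, vulnerable, contact in scan_results:
        if vulnerable:
            by_contact[contact].append(host)
    return {contact: sorted(hosts) for contact, hosts in by_contact.items()}

# Hypothetical scan output: three hosts, two vulnerable, one shared contact.
results = [
    ("192.0.2.10", True, "abuse@example.net"),
    ("192.0.2.11", False, "abuse@example.net"),
    ("192.0.2.12", True, "abuse@example.net"),
]
notifications = group_notifications(results)
```

Batching per contact matters in practice: the researchers notified “everybody,” and a single message per operator is far less likely to be discarded as scanner noise than one email per host.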
Beyond notification, they can take other steps:

“So, that’s the other thing that’s really exciting about this data. Notification is one thing, but the other is we’ve been building models that are predictive of organizational behavior. So, if you can watch, for example, how an organization runs their Web server, how they respond to certificate revocation, or how fast they patch — that actually tells you something about the security posture of the organization, and you can start to build models of risk profiles of those organizations. It moves away from this sort of patch-and-break or patch-and-pray game we’ve been playing. So, that’s the other thing we’ve been starting to see, which is the potential for being more proactive about security.”

Internet scan data can help us better understand organizational security posture and develop different models of risk profiles for organizations. With those risk profiles, improving an organization’s security posture could be a matter of identifying the weak elements and focusing on them. Security posture is as much about culture as it is about machines. While SIEMs can identify risk factors in your machines, models of organizational security posture can identify the risk factors in your culture.
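As a toy illustration of that kind of model (my own sketch; the researchers’ actual models are surely far more sophisticated), one could rank organizations by how quickly they patch after a disclosure, using patch latency as a crude proxy for posture:

```python
from statistics import mean

def patch_latency_ranking(observations):
    """Rank organizations by mean patch latency, in days.

    observations: iterable of (org, days_to_patch) pairs, one per
    observed vulnerability. A lower mean latency suggests a stronger
    security posture; slow patchers sink to the end of the list.
    """
    latencies = {}
    for org, days in observations:
        latencies.setdefault(org, []).append(days)
    return sorted(latencies, key=lambda org: mean(latencies[org]))

# Hypothetical observations from repeated scans of two organizations.
obs = [("acme.example", 2), ("acme.example", 4),
       ("slowco.example", 30), ("slowco.example", 45)]
ranking = patch_latency_ranking(obs)
```

A real model would fold in many more signals (certificate revocation handling, server configuration, and so on), but even this single feature captures the shift the researchers describe: from reacting to individual bugs toward profiling behavior over time.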

Prescriptive Design and the Decline of Manuals

Instruction manuals, and instructions in general, are incredibly important. I could be biased, since part of my job involves writing instructions for systems, but really, they’re important!

As this look into the historical importance of manuals makes clear, manuals (and instructions) make professions, tools, and devices accessible to anyone who can read them (which, admittedly, could be a hurdle of its own):

“With no established guild system in place for many of these new professions (printer, navigator, and so on), readers could, with the help of a manual, circumvent years of apprenticeship and change the course of their lives, at least in theory.”

However, as the economy and labor system shifted, manuals did too:

“in the 1980s, the manual began to change. Instead of growing, it began to shrink and even disappear. Instead of mastery, it promised competence.”

And nowadays, manuals are very rarely separate from the devices or systems they seek to explain:

“the help we once sought from a manual is now mostly embedded into the apps we use every day. It could also be crowdsourced, with users contributing Q&As or uploading how-to videos to YouTube, or it could be programmed into a weak artificial intelligence such as Siri or Cortana.”


Torture, Ownership, and Privacy

The Senate Intelligence Committee released hundreds of pages (soon available as a book) detailing acts of torture committed by the CIA.


Reading, Drones, and George Washington

Americans are still reading books, Internet and all! Younger Americans are actually reading more than older generations, which could be partially because, with the rise of texting and social media, so much of our communication is text-based: everyone is doing a lot more reading (and writing) just to communicate with their friends. The original study is linked in that article and in this graph:

What are some other ways to get people to read books?

Well, it helps a lot if your college library not only tells you the call number of a book but also gives you precise directions to its location, which is pretty awesome. That’s much more useful when navigating a giant library, like the one I have access to at the university where I work, as opposed to the smaller library at the university I actually attended.


Heartbleed, Borders, and Cookies

HEARTBLEED heartbleed my heart is bleeding about heartbleed….

How soon until someone writes a country ballad about Heartbleed? Knowing the Internet, probably before all the currently vulnerable sites are patched. Researchers at the University of Michigan previously produced a tool capable of scanning large swaths of the Internet at incredibly fast speeds. They took advantage of this tool to regularly scan the top 1 million sites on the Internet (as categorized by Alexa) (who is not a person) and determine what portion of those sites are vulnerable. Mashable, meanwhile, has compiled a list of the big websites that were vulnerable (but now are not). This bug is the latest and greatest of them yet… (as XKCD points out)

The NYT points out that as the web gets larger, it also gets less secure (and thus harder to defend):

“If you fix one Internet security bug, you can be sure that attackers will just find another, potentially more dangerous one. “Over all, attackers have the competitive advantage,” said Jen Weedon, who works on the threat intelligence team at the security company Mandiant. “Defenders need to defend everything. All attackers need to find is one vulnerability.””
