How do you make large scale harm visible on the individual level?

Teams that build security and privacy tools like Brave Browser, Tor Browser, Signal, Telegram, and others focus on usability and feature parity of these tools in an effort to more effectively acquire users from Google Chrome, iMessage, Google Hangouts, WhatsApp, and others. 

Do people fail to adopt these more secure and private tools because they aren’t as usable as what they’re already using, or because it requires too much effort to switch?

I mean, of course it’s both. You need to make the effort to switch, and in order to switch you need viable alternatives to switch to. And that’s where the usability and feature parity of Brave Browser and Signal compared with Google Chrome and WhatsApp come in. 

But if we’re living in a world where feature parity and usability are a foregone conclusion, and we are, then what? What needs to happen to drive a large-scale shift away from data-consuming and privacy-invading tools and toward those that don’t collect data and aggressively encrypt our messages? 

To me, that’s where it becomes clear that the amorphous effects of widespread data collection—though well-chronicled in blog posts, books, and shows like The Social Dilemma—don’t often lead to real change unless a personal threat is felt. 

Marginalized and surveilled communities adopt tools like Signal or FireChat to protect their privacy and security, because their privacy and security are actively under threat. For others, privacy and security are still under threat, but indirectly. Lacking a single clear event (or a series of them) tied to direct personal harm, people don’t often abandon a platform. 

If I don’t see how using Google Chrome, YouTube, Facebook, Instagram, Twitter, and other sites and tools causes direct harm to me, I have little incentive to make a change, despite the evidence of aggregate harm on society—amplified societal divisions, active disinformation campaigns, and more. 

Essays that expose the “dark side” of social media and algorithms attempt to identify distinct personal harms caused by these systems. Essays like James Bridle’s Something is wrong on the internet (2017) on YouTube, Adrian Chen’s The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed (2014) on what social media content moderators experience, or Casey Newton’s The secret lives of Facebook moderators in America (2019) on the same, gain widespread attention for the problems they expose, but don’t necessarily lead people to abandon the platforms, nor lead the platforms themselves to take action. 

These theorists and journalists are making a serious attempt to make large-scale harm caused by these platforms visible on an individual level, but nothing is changing. Is it the fault of the individual, or the platform?

Spoilers: it’s always “both”. And here we can draw an analogy to climate change too. As with climate change, the effects of these platforms and companies are so amorphous that it’s possible to point to alternate explanations—for a time. Dramatically worsening wildfires in the Western United States are blamed on poor fire policy; worsening tropical storms on weaker wind patterns (or stronger ones? I don’t study wind). 

One could argue that perhaps climate change is the result of mechanization and industrialization in general, and it would be happening without the companies currently contributing to it. Perhaps the dark side of the internet is just the dark side of reality, and nothing worse than would exist without these platforms and companies contributing. 

The truth is, it’s both. We live in a “yes, and” world. Climate change is causing, contributing to, and intensifying the effects of wildfires and the strength and frequency of tropical storms and hurricanes. Platform algorithms are causing, contributing to, and intensifying the effects of misinformation campaigns and violence on social media and the internet. 

And much like the companies that contributed to climate change knew what was happening, as reported in The Guardian: Shell and Exxon’s secret 1980s climate change warnings, Facebook, Google, and others know that their algorithms are actively contributing to societal harm—but the companies aren’t doing enough about it. 

So what’s next? 

  • Do we continue to attempt to make the individual feel the pain of the community in an effort to cause individual change? 
  • Do we use laws and policy to constrain the use of algorithms for specific purposes, in an effort to regulate the effects away?
  • Do we build alternate tools with the same functionality and take users away from the harm-causing tools? 
  • Do we use our power as laborers to strike against the harm caused by the tools that we build? 

With climate change (and with data security and privacy too), we’re already taking all of these approaches. What else might be out there? What else can we do to lead to change? 

Repersonalizing Digital Communications: Against Standardizing and Interfering Mediations

Back in 2013 I wrote a blog post reacting to Cristina Vanko’s project to handwrite her text messages for one week. At the time, I focused on how Cristina introduced slowness into a digital communication medium that often operates as a conversation, due to the immediacy and frequency of responses. Since 2013, texting has grown more popular and instant messaging has woven its way into our work environments as well. Reinvoking that slowness remains relevant, and careful notification settings can help recapture it. 

What I want to focus on is the way that her project repersonalizes the digital medium of communication, adding her handwriting and therefore more of her personality into the messages that she sends. I thought of this project again while watching a talk from Jonathan Zong for the Before and Beyond Typography Online Conference. In his talk, he points out that “writing is a form of identity representation”, with handwriting being “highly individualized and expressive”, while “in contrast, digital writing makes everyone’s writing look the same. People’s communications are filtered through the standardized letterforms of a font.” 

The project he discusses in part of that talk, Biometric Sans, “elongates letterforms in response to the typing speed of the individual”, thus providing another way to reembody personality into digitally-mediated communications. He describes the font as “a gesture toward the reembodiment of typography, the reintroduction of the hand in digital writing.” It’s an explicit repersonalization of a digitally-mediated communication, in much the same way Cristina Vanko chose to handwrite her text messages. Both projects seek to repersonalize, and thereby rehumanize, the somewhat coldly standardized digital communication formats that we rely on. 

Without resorting to larger projects, we find other ways to repersonalize our digital communications: sharing stickers (I’m rather fond of Rejoinders), crafting new expressions (lol) and words, and even sending voice responses (at times accidentally) in text messages. In this way we can poke at the boundaries of the digital communication methods sanitized by standardized fonts for all users.

While Jonathan stayed rather focused on the typography mediation of digital communication due to the topic of the conference, I want to expand this notion of repersonalizing the digital communication methods. Fonts are not the only mechanism by which digital communications can be mediated and standardized—the tools that we use to create the text displayed by the fonts do just as much (if not more). 

The tools that mediate and standardize our text in other ways are, of course, automatic correction, predictive text, and the software keyboards themselves.

Apple is frustratingly subtle about automatic correction (autocorrect), oftentimes changing a perfectly legitimate word that you’ve typed into a word with a completely different meaning. It’s likely that autocorrect is attempting to “accelerate” your communications by guessing what you’re trying to type. This guess, mediating your input to alter the output, often interferes with your desired meaning. When this interfering mediation fails (which is often), you’re instead slowed down, forced to identify that your intended input has been unintentionally transformed, fix it, perhaps fix it again, and only then send your message.

Google, meanwhile, more often preemptively mediates your text. Predictive text in Google Mail “helps” you by suggesting commonly-typed words or responses.

Screenshot of Google Mail draft, with the text Here are some suggestions about what I might be typing next.  Do you want to go to the store? Maybe to the movies? What about to the mall?  What do you listen to? Sofi Tukker? What other DJs do you have? Where "have?" is a predictive suggestion and not actually typed.

This is another form of interference (in my mind), distracting you from what you’re actually trying to communicate and instead inserting you into a conflict with the software: fighting a standardized communication suggestion while you seek to express your point (and your personality) clearly. Often, the suggestions are distractingly bland or comical.

Screenshot of Google Mail smart responses, showing one that says "Thank you, I will do that.", another that says "Thank you!", and a third that says "Will do, thank you!"

In Google Mail, this focus on standardized predictive responses also further perpetuates the notion of email as a “task to be completed” rather than an opportunity to interact, communicate, or share something of yourself with someone else. 

Software keyboards themselves also serve to mediate and effectively standardize digital communications. I personally dislike software keyboards because I’m unable to touchtype on them (frustrated, I tweeted about this in January). Lacking any hardware feedback or orientation, I frequently have to stare at the keyboard while I’m typing. I’m less able to focus on what I’m trying to say because I’m busy focusing on how to literally type it. This forced slowness, which caps the speed at which you can communicate your thoughts, pushes you to rely on software-enabled shortcuts such as autocorrect, predictive text, or actual programmed shortcuts (such as replacing “omw” with “On my way!”), rather than letting you write or type at the speed of your thoughts (or close to it). Because of this limitation, I often choose to write out more abstract considerations or ideas longhand, or reluctantly open my computer, so that I have the privilege of a direct input-to-output translation without any (or with minimal) software mediation. 

In a talk last June at the SF Public Library, Tom Mullaney discussed the mediation of software keyboards in depth, pointing out that software keyboards (or IMEs, as he referred to them) do not serve as mechanical interpreters of what we type, but rather use input methods to transcribe text, and that those input methods can adapt to be more efficient. He used the term “hypography” for the practice of writing when your input does not directly match the output: when you use a programmed shortcut like omw, when you type a character that isn’t represented on a key, such as ö, or when you type in a language with a non-Latin alphabet and press a specific sequence of keystrokes to produce a fully-formed character in the written text. Your input maps to an output, rather than the output matching the input. 
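The core idea of hypography—input mapping to output rather than matching it—can be sketched in a few lines. This is purely an illustrative toy with a made-up shortcut table, not how any real IME or autocorrect system is implemented:

```python
# Toy sketch of "hypography": input tokens map to output text rather
# than appearing verbatim. Real input methods are far more complex
# and adapt to the user over time.
SHORTCUTS = {
    "omw": "On my way!",   # a programmed text replacement
    '"o': "ö",             # a compose-style sequence for a character with no key
}

def expand(keystrokes: str) -> str:
    """Replace each whitespace-separated token that matches a known
    input sequence with its mapped output; leave everything else alone."""
    return " ".join(SHORTCUTS.get(token, token) for token in keystrokes.split())

print(expand("omw"))        # -> On my way!
print(expand('typing "o'))  # -> typing ö
```

Even this trivial mapping shows the asymmetry Mullaney describes: three keystrokes become a full sentence, and the written output no longer records what was actually typed.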

These inputs are often standardized, allowing you to learn the shortcuts over time and serving to accelerate your communications. But in the case of autocorrect or predictive text, they frequently gain new iterations—new words or phrases that interferingly mediate, changing a slip-up into a “skip up,” encouraging you to respond to an email with a bland “Great, thanks!”, or attempting to anticipate the entire rest of your sentence after you’ve only written a few words. Because I also have a German keyboard configured, my predictive text will occasionally “correct” an English typo into a German word, or overcapitalize generic English nouns by mistakenly applying German language rules. 

All of these interfering and distracting mediations that accelerate and decelerate our digital communications, alongside our ongoing efforts to repersonalize those communications, have me wondering: What do we lose when our digital communications are accelerated by expectations of instantaneous responses? What do we lose when they’re decelerated by the interfering mediations of autocorrect? What do we lose when our communications are standardized by fonts, predictive text, and suggested responses?

Problems with Indexing Datasets like Web Pages

Google has created a dataset search for researchers or the average person looking for datasets. On the one hand, this is a cool idea. Datasets are hard to find in some cases, and this ostensibly makes datasets and the accompanying research easier to find.
In my opinion this dataset search is problematic for two main reasons.

1. Positioning Google as a one-stop-shop for research is risky.

There’s consistent evidence that many people (especially college students who don’t work with their library) start and end their research with Google, rather than using scholarly databases, limiting the potential quality of their research. (There’s also something to be said here about the limiting of access to quality research behind exploitative and exclusionary paywalls, but that’s for another discussion).
Google’s business goal of being the first and last stop for information hunts makes sense for them as a company. But such a goal doesn’t necessarily improve academic research, or the knowledge that people derive based on information returned from search results.

2. Datasets without datasheets easily lead to bias.

The dataset search is clearly focused on indexing, and making more available, as many datasets as possible. The cost of that is perpetuating sloppy data analysis and research, due to the lack of standardized datasheets (such as Datasheets for Datasets) that fully expose the contents and limitations of datasets.
The existing information about these datasets is constructed based on the schema defined by the dataset author, or perhaps more specifically, the site hosting the dataset. It’s encouraging that datasets have dates associated with them, but I’m curious where the descriptions for the datasets are coming from.
Only the description and the name fields for the dataset are required before a dataset appears in the search. As such, the dataset search has limitations. Is the description for a given dataset any higher quality than the Knowledge Panels that show up in some Google search results? How can we as users independently validate the accuracy of the dataset schema information?
The quality of, and details provided in, the description field vary widely across datasets (I did a cursory scan of datasets resulting from a keyword search for “cheese”), indicating that a required plain-text field doesn’t do much to ensure quality, valuable information.
When datasets are easier to find, that can lead to better data insights for data analysts. However, it can just as easily lead to off-base analyses if someone misuses data that they found based on a keyword search, either intentionally or, more likely, because they don’t fully understand the limitations of a dataset.
Some vital limitations to understand when selecting a dataset for use in data analysis include:
  • What does the data cover?
  • Who collected the data?
  • For what purpose was the data collected?
  • What features exist in the data?
  • Which fields were collected and which were derived?
  • If fields were derived, how were they derived?
  • What assumptions were made when collecting the data?
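As a thought experiment, the questions above could travel with a dataset as a minimal machine-readable datasheet. The field names here are my own invention, loosely inspired by the Datasheets for Datasets proposal, not any published standard:

```python
from dataclasses import dataclass, field

# Hypothetical minimal datasheet capturing the limitations a careful
# analyst would want documented before using a found dataset.
@dataclass
class Datasheet:
    coverage: str                   # what the data covers
    collector: str                  # who collected the data
    purpose: str                    # why it was collected
    features: list                  # what features/fields exist
    derived: dict = field(default_factory=dict)      # derived field -> how it was derived
    assumptions: list = field(default_factory=list)  # assumptions made during collection

    def missing(self):
        """Names of the free-text fields left empty -- the gaps to fill
        before trusting the dataset."""
        return [name for name in ("coverage", "collector", "purpose")
                if not getattr(self, name).strip()]

# A sketch of use: a dataset found via keyword search, with no
# indication of who collected it.
sheet = Datasheet(coverage="US cheese production, 2000-2010",
                  collector="",
                  purpose="price trend analysis",
                  features=["year", "state", "tons"])
print(sheet.missing())  # -> ['collector']
```

The point isn’t the specific structure: it’s that these gaps could be surfaced and flagged as mechanically as the datasets themselves are indexed.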

Without these vital limitations being made as visible as the datasets themselves, I struggle to feel encouraged by the dataset search in its current form.

Ultimately, making information more easily accessible while removing or obscuring indicators that can help researchers assess the quality of the information is risky and creates new burdens for researchers.

Advertising Alternatives: It Pays to Be a Google Contributor

Earlier this week I got an email from Google.

My email invitation to join Google Contributor

One of my principles is to pay for things that I support. I can afford it, and things on the web are relatively cheap. Subscriptions to ThinkUp, Pocket Premium, and Feedly Pro each cost about the same as a new pair of shoes or a nice pair of jeans. To me, that’s a justifiable cost, so I pay it to keep the things I use and love alive.

Continue reading

Reading, Drones, and Georgie Washington

Americans are still reading books, Internet and all! Younger Americans are actually reading more than older generations, which could be partially because, with the rise of texting and social media, so much of our communication is text-based: everyone is doing a lot more reading (and writing) in order to communicate with their friends. The original study is linked in that article and in this graph:

What are some other ways to get people to read books?

Well, it helps a lot if your college library not only tells you the call numbers of a book, but also gives you precise directions to its location, which is pretty awesome. That’s much more useful when navigating a giant library, like the one at the university where I work, as opposed to the smaller library at the university I actually attended. 

Continue reading

Algorithms, Confidence, and Infrastructure

Every so often the Oxford English Dictionary adds new words. It adds them to its online dictionary far more frequently than to its physical tome, given that a physical dictionary is quite a bit more difficult to update. It released a list of new words yesterday, and while a few are entirely new words (bikeable), others are new definitions of familiar words. The “tumblr definition” of ship is recognized (and boy, is the tumblr community excited about it), as is a definition of thing that accounts for the phrase “is that a thing?”

a list of web domains that begin with the word important, including their IP addresses

Daniel Temkin put together an Internet Directory with a scrolling and searchable list of all registered domains under the .com top-level domain.

Ted Striphas was interviewed about the effects of algorithms (such as the ones that define the order of Google search results, or what shows up in your Facebook newsfeed) on culture. As he puts it, “The issue may come down to how comfortable people are with these systems drilling down into our daily lives, and even becoming extensions of our bodies.”

Continue reading

Masculinity, AIM, Ads, and Cops

Here’s what was important this week…

I treated myself to ice cream last night (from the freezer, not a lonely ice cream shop date with myself) and it was delicious. While I gained weight from starting an office job after college, I still have the privilege of avoiding most body policing placed on women.

However, men suffer their own share of body policing. In Hollywood, this manifests as an obsession with fit bodies and fitness. Men’s Journal examines the issue, speaking mostly to trainers and talking about the pressure for actors to get “fit” in order to land coveted roles. It’s so important to the industry that:

“There are dozens of hormone-replacement clinics in and around Hollywood, and their business is booming. But there are significant risks: Hormone therapy accelerates all cell growth, whether healthy or malignant, and can encourage existing cancers, especially prostate cancers, to metastasize at terrifying rates. Testosterone supplements can lower sperm counts. For many, the risk is worth it.”

Fitness is just one aspect of a narrow set of masculinity standards imposed on men. For many men, high school is one of the more painful places where these standards are enforced. These standards are well documented in a great book by sociologist C.J. Pascoe, and an essay in The Walrus gets to the heart of many of them. A new sex ed program in some Canadian schools not only teaches high school boys about aspects of sex that are often glossed over in traditional sex ed courses, it also focuses on relationships, gender identity and expression, and explores these things in a safe space. Importantly,

“Teaching young men to trust, communicate, negotiate, and empathize does not undermine or threaten their manliness. It expands their humanity. It reclaims men’s possibilities.”

Something else that helps men reclaim their possibilities is supporting women: becoming advocates for them in the workplace, being feminists… Shanley, a writer on diversity in tech, wrote an essay about what men can do to help women if they are in a position of power (in her case, speaking directly to white men in tech). It’s a bit profanity-laden and not completely generalizable, but it makes some great points. 

Continue reading

A Self-Driving Car “Revolution”?

The potential benefits and issues of self-driving cars have been addressed by many magazines, from The Economist and The Atlantic to Business Insider and Forbes, and more recently acknowledged by highway safety authorities in the USA. A hot-button issue as of late, using autonomous vehicular control to reduce traffic fatalities and injuries is an ideal that should be encouraged, but it can’t be achieved without addressing a variety of concerns. Generational trends and issues of liability, security, and class (and cost) could doom a future of fully autonomous vehicle domination before it begins. 

Naturally, to evaluate the future of this technology, we must first understand how self-driving cars work. Two notable elements of operating a self-driving car are the abundance of sensors involved and the integral role of programming the “right” way to drive. As quoted in the article:

Sometimes, however, the car has to be more “aggressive.” When going through a four-way intersection, for example, it yields to other vehicles based on road rules; but if other cars don’t reciprocate, it advances a bit to show to the other drivers its intention. Without programming that kind of behavior, Urmson said, it would be impossible for the robot car to drive in the real world.

Continue reading

A Beginning

My boss was discussing the differences among Microsoft, Google, and Apple today when it comes to utility for business. While Microsoft tends to be somewhat derided by people of my generation (the sometimes-scorned Millennials) for its bulky software packages and security-hole-ridden Internet Explorer browser, it is an industry standard. Why? Microsoft makes static products that don’t change much. Not very innovative, but exactly what a business needs. Businesses create processes that hinge on these very programs and their staticness, and their worlds are thrown out of whack when the programs change drastically.

My workplace is in the process of transitioning to Google Mail, and with that has come a lot of negative feedback from users. Google and Apple share a common characteristic: paternalistically deciding that changes that benefit them will also benefit their users. When users build processes based on, for example, the structure of the compose window and the available fields when composing a message, and Google changes all of that because it wanted to, our users are thrown off kilter. Apple, meanwhile, is a business standard for design-intensive professions like photography and graphic design, but it is falling out of favor with some for its emphasis on innovation—removing previously standard computing elements like optical drives in favor of slimmer designs. Some of these changes reduce the company’s ability to be a trustworthy ally to design professionals.

Google currently offers no active support for users, providing a feedback form, support pages, and forums, but no contact information beyond that. It also consistently maintains its paternalistic innovation-for-the-user design motivation, at times disregarding the business needs of its users in Google Apps for Business and Google Apps for Education. It will be interesting to see whether Google continues to innovate as it does currently, or whether an emphasis on the business needs of larger customers will inspire it to make changes.