Repersonalizing Digital Communications: Against Standardizing and Interfering Mediations

Back in 2013 I wrote a blog post reacting to Cristina Vanko’s project of handwriting her text messages for one week. At the time, I focused on how she introduced slowness into a digital medium that often operates as a conversation, given the immediacy and frequency of responses. Since 2013, texting has grown more popular and instant messaging has woven its way into our work environments as well. Reinvoking that slowness remains relevant, and careful notification settings can help recapture some of it too.

What I want to focus on is the way that her project repersonalizes the digital medium of communication, adding her handwriting and therefore more of her personality into the messages that she sends. I thought of this project again while watching a talk from Jonathan Zong for the Before and Beyond Typography Online Conference. In his talk, he points out that “writing is a form of identity representation”, with handwriting being “highly individualized and expressive”, while “in contrast, digital writing makes everyone’s writing look the same. People’s communications are filtered through the standardized letterforms of a font.” 

The project he discusses in part of that talk, Biometric Sans, “elongates letterforms in response to the typing speed of the individual”, thus providing another way to reembody personality in digitally-mediated communications. He describes the font as “a gesture toward the reembodiment of typography, the reintroduction of the hand in digital writing.” It’s an explicit repersonalization of a digitally-mediated communication, in much the same way that Cristina Vanko’s handwritten text messages were. Both projects seek to repersonalize, and thereby rehumanize, the somewhat coldly standardized digital communication formats that we rely on.

Without resorting to larger projects, we find other ways to repersonalize our digital communications: sharing stickers (I’m rather fond of Rejoinders), crafting new expressions (lol) and words, and even sending voice responses (at times accidentally) in text messages. In this way we can poke at the boundaries of the digital communication methods sanitized by standardized fonts for all users.

While Jonathan stayed focused on the typographic mediation of digital communication, given the topic of the conference, I want to expand this notion of repersonalizing digital communication methods. Fonts are not the only mechanism by which digital communications are mediated and standardized; the tools that we use to create the text displayed by those fonts do just as much (if not more).

The tools that mediate and standardize our text in other ways are, of course, automatic correction, predictive text, and the software keyboards themselves.

Apple is frustratingly subtle about automatic correction (autocorrect), oftentimes changing a perfectly legitimate word that you’ve typed into a word with a completely different meaning. It’s likely that autocorrect is attempting to “accelerate” your communications by guessing what you’re trying to type. This guess, mediating your input to alter the output, often interferes with your desired meaning. When this interfering mediation fails (which is often), you’re instead slowed down, forced to identify that your intended input has been unintentionally transformed, fix it, perhaps fix it again, and only then send your message.
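As a rough illustration (a minimal sketch of my own, not Apple’s actual implementation), you can think of autocorrect as fuzzy matching against a dictionary; the interference arises whenever a legitimate word happens to fall outside that dictionary:

```python
import difflib

# A tiny stand-in vocabulary; real autocorrect engines use large,
# frequency-weighted dictionaries plus learned user habits.
DICTIONARY = ["store", "movies", "thanks", "tomorrow", "typing"]

def autocorrect(word: str, cutoff: float = 0.75) -> str:
    """Swap a word for its closest dictionary match, if one is close enough."""
    matches = difflib.get_close_matches(word.lower(), DICTIONARY, n=1, cutoff=cutoff)
    return matches[0] if matches else word

print(autocorrect("stire"))   # "store": a genuine typo, helpfully fixed
print(autocorrect("storey"))  # "store": a legitimate word, "fixed" anyway
```

The second call is the failure mode described above: the word was fine, but because the software didn’t recognize it, the “correction” itself became the error.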

Google, meanwhile, more often preemptively mediates your text. Predictive text in Google Mail “helps” you by suggesting commonly-typed words or responses.

[Screenshot: a Google Mail draft reading “Here are some suggestions about what I might be typing next. Do you want to go to the store? Maybe to the movies? What about to the mall? What do you listen to? Sofi Tukker? What other DJs do you have?”, where “have?” is a predictive suggestion, not actually typed.]

This is another form of interference (in my mind), distracting you from what you’re actually trying to communicate and instead pulling you into a conflict with the software: fighting a standardized suggestion while you try to express your point (and your personality) clearly. Often, the suggestions are distractingly bland or comical.

[Screenshot: Google Mail smart responses reading “Thank you, I will do that.”, “thank you!”, and “Will do, thank you!”]

In Google Mail, this focus on standardized predictive responses also further perpetuates the notion of email as a “task to be completed” rather than an opportunity to interact, communicate, or share something of yourself with someone else.
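For a sense of how such suggestions can be generated, here is a minimal sketch of my own using bigram frequencies. Google’s actual systems are far more sophisticated, but the underlying idea of offering the statistically common continuation is similar, and it shows exactly why the output trends bland:

```python
from collections import Counter, defaultdict

# Train a bigram model on a tiny corpus. Real systems draw on vastly
# larger corpora of past messages.
CORPUS = "do you want to go to the store do you want to go to the movies".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word):
    """Suggest the most common word seen after prev_word, if any."""
    counts = bigrams.get(prev_word.lower())
    return counts.most_common(1)[0][0] if counts else None

print(suggest("you"))  # "want": the only word ever seen after "you"
print(suggest("to"))   # "go": tied with "the", first-seen wins
```

By construction, such a model can only ever hand you back the most average thing anyone has said before, never your point or your personality.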

Software keyboards themselves also serve to mediate and effectively standardize digital communications. Personally, I dislike software keyboards because I’m unable to touch-type on them (frustrated, I tweeted about this in January). Lacking any hardware feedback or orientation, I frequently have to stare at the keyboard while I’m typing. I’m less able to focus on what I’m trying to say because I’m busy focusing on how to literally type it. This slowness, which caps the speed at which you can communicate your thoughts, effectively forces you to rely on software-enabled shortcuts such as autocorrect, predictive text, or actual programmed shortcuts (such as replacing “omw” with “On my way!”), rather than letting you write or type at the speed of your thoughts (or close to it). Because of this limitation, I often choose to write out more abstract considerations or ideas longhand, or reluctantly open my computer, so that I have the privilege of a direct input-to-output translation without extensive software mediation.

In a talk last June at the SF Public Library, Tom Mullaney discussed the mediation of software keyboards in depth, pointing out that software keyboards (or IMEs, as he referred to them) do not serve as mechanical interpreters of what we type, but rather use input methods to transcribe text, and that those input methods can adapt to become more efficient. He used the term “hypography” for the practice of writing when your input does not directly match the output: for example, when you use a programmed shortcut like omw, when you type a character that isn’t represented on a key, such as ö, or when you write in a language with a non-Latin alphabet, where a specific sequence of keystrokes represents a fully-formed character in the written text. Your input maps to an output, rather than the output matching the input.
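To make hypography concrete, here is a toy sketch of my own (not Mullaney’s): an input method as a mapping from keystroke sequences to output text, covering the shortcut, umlaut, and non-Latin cases above.

```python
# A toy input method: keystroke sequences map to output text that differs
# from the literal keys pressed. The mappings are illustrative inventions,
# not any real IME's tables.
INPUT_MAP = {
    "omw": "On my way!",   # programmed shortcut: an abbreviation expands
    '"o': "ö",             # compose-style sequence for a character with no key
    "ni hao": "你好",       # pinyin-style transcription into characters
}

def transcribe(keystrokes):
    """Map a keystroke sequence to its output text.

    Falls back to the literal keystrokes when no mapping exists, the one
    case where the input and the output actually match.
    """
    return INPUT_MAP.get(keystrokes, keystrokes)

for keys in ("omw", '"o', "ni hao", "hello"):
    print(f"{keys!r} -> {transcribe(keys)!r}")
```

Only the last case ("hello") is the mechanical interpretation we tend to assume keyboards perform; every other case is input mapping to a different output.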

These inputs are often standardized, allowing you to learn the shortcuts over time and serving to accelerate your communications. But autocorrect and predictive text go through frequent new iterations: new words or phrases that interferingly mediate, changing a “slip up” into a “skip up”, encouraging you to respond to an email with a bland “Great, thanks!”, or attempting to anticipate the entire rest of your sentence after you’ve written only a few words. Because I also have a German keyboard configured, my predictive text will occasionally “correct” an English typo into a German word, or overcapitalize generic English nouns by mistakenly applying German capitalization rules.

All of these interfering and distracting mediations that accelerate and decelerate our digital communications, alongside our ongoing efforts to repersonalize those communications, have me wondering: What do we lose when our digital communications are accelerated by expectations of instantaneous responses? What do we lose when they’re decelerated by the interfering mediations of autocorrect? What do we lose when our communications are standardized by fonts, predictive text, and suggested responses?

Affective Computing and Adaptive Help

Several months ago, I saw Dr. Rosalind Picard give a talk on Affective Computing. I took notes and thought a lot about what she said but let my thoughts fester rather than follow up on them. Then last week, I read Emotional Design by Donald A. Norman, which reminded me of Dr. Picard’s work and my initial thoughts about affective computing.

There are two elements to affective computing:

  • People interact with technology and devices as though they have personalities (and devices and interfaces without personalities can be distasteful to use).
  • Cameras, wearables, and other technology can be used to determine, with surprising accuracy, the emotions and affective responses of a person using technology.

Websites and applications are already personalized by tracking your browsing history, advertising preferences, device usage, and demographic data. Using affective computing, they could soon be personalized by tracking your emotions.


Language, Music, and Holidays

I am privileged enough to know a second language (although as the years pass, my proficiency is faltering…). The government and the military have a great need for foreign language proficiency among their employees (though apparently that isn’t much of a requirement for U.S. diplomats…). Given that need, they coordinated with the University of Maryland to develop a cognitive test that is supposed to determine how proficient someone can become in a foreign language. It may soon be publicly available, but honestly I don’t know if I’d be interested in taking it. While such a test is helpful for gauging aptitude for job functions, the interest and the attempt at proficiency are often themselves a great help for cultural relations with non-American countries. I’d be concerned that a test like this would cause people to give up on languages earlier: if they knew they’d never become fully proficient, why learn more than the basics or a general education requirement?

In terms of making foreign languages more accessible, however, there is also the matter of translations. I’m currently writing about how language and national identity can tend to segment the Internet, but language also has an impact on literature. One man wants to change that by encouraging others to start their own publishing houses. He did so himself, starting a publishing house in Dallas, Texas that focuses primarily on translated works from Russia and from Central and South America. It’s a great read, with insights about the publishing business and notes about the commonality (or lack thereof) of translated literature in the United States.


The Evolution of Music Listening

Pitchfork recently published a great longform essay on music streaming. It covered the history and present state of music streaming and brought up a lot of great points. These are my reactions.

The piece discussed how “the ‘omnivore’ is the new model for the music connoisseur, and one’s diversity of listening across the high/low spectrum is now seen as the social signal of refined taste.” It would be interesting to study how this omnivorousness splits across genres, age groups, and affinities. I find myself personally falling into omnivore status: I’m never able to properly define my music taste by genre, and my musical affinities shift daily, weekly, and monthly, albeit with common themes.

Also discussed is the cost of music, whether licensing, royalties, or record label advances; it’s a difficult matter to grapple with. I wonder if I would have been such a voracious consumer of music if I hadn’t grown up with so many free options: the library, the radio, and later, music blogs. Now that I’m older, I make the effort to purchase music when I feel the artist deserves it, but as I drift (mostly incidentally) away from storing music on my computer, that effort becomes less important to expend.
