A few points in Zeynep Tufekci’s post “The Politics of Empathy and the Politics of Technology” got me thinking about crisis communications for incident response.
Facebook will have to decide which incidents are “serious and tragic” versus which ones are “ongoing crises” where Safety Check would not be useful. Iraq is not officially at war, but suicide bombings there are horrifically routine. The new policy raises hard questions. Will Baghdad bombings be considered endemic? How many bombings in a year does it take to declare something endemic or chronic? Are we simply acknowledging that people in regions suffering from chronic crises have no way to feel “safe”? Who gets to check in? “Useful” as defined by whom, and useful to whom? Wouldn’t you want a Safety Check every day if a loved one were trapped in a region with a dangerous, fast-moving epidemic like Ebola?
When your platform takes on the role of defining a crisis, that’s putting the media in social media. Once Safety Check is enabled for “unnatural” disasters, it stops being a nifty feature and becomes an essential communication tool during a crisis.
Activating Safety Check constantly would dilute its value as a signal. Right now it functions as a forceful push: you get a notification on your phone when a friend in the affected area checks in as safe. Receiving hundreds of these notifications per day would reduce their efficacy, yet not receiving one when you were worried about someone would also be a problem. A system like this demands deliberate decisions about when to activate and when to hold back.
Avoiding alert fatigue is key to crisis communications, and Facebook is now both defining the crisis and communicating about it. It’s easy to send a message to millions of people when they log into your service on a daily (or even hourly) basis. It isn’t easy to do it well.