
January 19, 2016

Our current atmosphere of digital connectedness has spawned innovative new ways to help citizens feel secure in their surroundings. Unsure of the safety of an area you’re visiting? There’s an app for that. See a suspicious character lurking around your neighbor’s home? There’s an app for that. Witness a crime taking place? Never fear—a new wave of mobile apps allows you to be aware of, and alert authorities to, suspicious characters in neighborhoods, stores and other venues through real-time tracking and user-reported incidents. Their goal is to keep you safer by providing notice of unsafe areas and criminal activity, as well as alerting authorities and other community members to crime-related incidents. At first glance, this seems like a useful and efficient way to deter crime and increase personal safety, but it appears these crime-fighting superhero apps have a dark side.

There’s no doubt that apps like CrimePush can help citizens get fast assistance in the midst of a crime. This particular app sends authorities the location, photo, video, audio and text description of the crime at the push of a button. Similar technology is also in use at police departments like the Virginia State Police to encourage citizens to submit anonymous tips about suspicious behavior to the police for follow-up. Apps like SketchFactor (which has since been removed from the market due to controversy concerning racial profiling) and GhettoTracker (also removed) mainly targeted geographical areas with unsafe reputations, although SketchFactor allowed reporting of individuals. Other mobile tools like Nextdoor and GroupMe help connect community and business members with one another and with local authorities to monitor local criminal activity and perceived threats.

Unfortunately, “perceived” is the operative word. While these apps allow members to alert one another to suspicious activity, they have also seemingly opened the door to a McCarthy-era level of bias. In a recent example, businesses and residents of Georgetown, an affluent neighborhood of Washington, D.C., used the GroupMe app in an attempt to curtail the area’s growing shoplifting problem. CBS News reported allegations in October that the group was racially profiling African American shoppers, since over 72 percent of the “suspicious individual” GroupMe reports targeted African Americans. Joe Sternlieb, a representative of the Georgetown Business Improvement District, defended the group, noting that less than 5 percent of the African American individuals identified on GroupMe were arrested. As further evidence of his community’s neutrality, he explained that group members who post inappropriate content are either told to work within the specified rules or they are kicked off the app. He did not say what counted as “inappropriate content,” nor did he address whether the African Americans tagged in reports but never arrested were approached by the police. After the controversy was reported in the media, the group discontinued use of the app.

Georgetown is one of the “whitest” neighborhoods in the D.C. area, with over 85 percent of the population reported as Caucasian and just over 3 percent African American, as opposed to the District of Columbia as a whole, with its 38 percent Caucasian and 50 percent African American populations. Given how few African Americans live in Georgetown, a black individual would be easy to notice and might seem out of place. However, the African Americans who live in Georgetown are, like their white neighbors, affluent, well educated, and law-abiding. Leslie Hinkson, a Georgetown University associate professor of sociology, explains: “Crime does occur in Georgetown. And quite often when people describe the perpetrators of those crimes, they’re usually young men of color. But that doesn’t mean every person of color is an automatic suspect.” One February incident underscores her statement. An employee at a Georgetown retail establishment took a photo of a tall, well-dressed African American man whom he described as “…Very suspicious, looking everywhere.” Later, an employee at another store responded, “He was just in Suitsupply. Made a purchase of several suits and some gloves.”

As the prior example demonstrates, apps like these can quickly become a forum for unfairly categorizing members of another race or socioeconomic status as dangerous or sketchy. For instance, riders on San Francisco’s Bay Area Rapid Transit system (BART) can use a BART-created app for iOS and Android called BART Watch that allows them to report suspicious activity, crimes, and other unwanted behavior to authorities instantly. When a local newspaper, the East Bay Express, requested a month’s worth of these complaints, it found a disproportionate number of reports aimed at black riders: approximately 68 percent of the complaints that included a description referenced blacks. Interestingly, black riders account for only about 10 percent of BART ridership, with whites and Asians making up the majority of the remainder. Moreover, many of the “offenses” included in the report were relatively benign activities such as playing loud music, smelling bad, and taking up more than one seat. Zachary Norris, the executive director of the Ella Baker Center for Human Rights, decries the app, noting that, “By encouraging passengers to report these types of complaints, BART is furthering our punishment economy, wherein we find punitive solutions to social problems that actually require reinvestment in communities.”

While many of these apps have a polarizing effect on demographically separate groups, at least one was created in response to an already sensitive social situation. Hollaback—an app designed to reduce street harassment aimed at women, people of color, and the LGBT community—allows real-time reporting of incidents with a location map of the occurrence. The problem is that there is no strict definition of street harassment. A well-meaning compliment to one person may be a serious infraction to another. Reports may also be fabricated, exaggerated, or created in an attempt to hassle another individual and even purposefully get them in trouble with authorities. In fact, the argument that Hollaback may overlook the harassment of men, mainly white or straight men, has surfaced on forums like Reddit, demonstrating that apps that call out (or leave out) some segment of society are at risk of fomenting social discontent. Finally, any app that relies upon communal reporting may also contribute to the proliferation of vigilante-style justice, where community members take matters into their own hands based on a mobile report.

Evidence such as the Georgetown incidents shows that in some cases these apps have a way of marginalizing certain members of society. They can also depersonalize the impact that anonymous reports, with their subsequent follow-up investigation by authorities, can have upon innocent persons. The apps amplify people’s tendency to fear what is different and allow individuals with deep-seated anger toward another ethnicity, religion, age, gender or sexual preference to harass others through erroneous reports of dangerous activity.

While they may seem like a reasonable way to keep an eye on crime, there are flaws in the design of these mobile group reporting apps, which can contribute to a more significant racial divide. In a speech to Georgetown University students earlier this year, F.B.I. Director James Comey spoke candidly about racial tension and overcoming bias. Importantly, he noted that racial bias and misunderstanding run both ways and to overcome it, people need to see and understand one another. “It’s hard to hate up close,” he explained in a question-and-answer session following his speech. Unfortunately, apps like these—with their snarky digital anonymity—allow prejudice and misunderstanding to snowball as the accuser, accused, and authority figures are even further disconnected from one another. Anonymity may protect the informant, but it can also enable emotional distance and contribute to incident exaggeration. An article in Forbes noted that the CrimePush application lets users report crime anonymously “so that they may continue with their busy lives knowing that with a push of a button, police will know and have everything to pursue the criminal.” This cavalier attitude toward situational reporting minimizes the significance reports like these can have on innocent individuals.

We mustn’t forget the pluses in this crime prevention app equation: sometimes having a mobile reporting app saves lives and property. A neighborhood in Arizona that used the Nextdoor app to keep tabs on criminal activity saw its burglary rate plummet. A community in Indianapolis was able to help authorities apprehend a group of burglary suspects through the use of Nextdoor.

There is no doubt that there is a need for apps that can send help to victims in distress, allow crime reporting on the fly, keep neighbors and businesses aware of suspicious activity in their area, and let travelers know where it is safe to trek. But app developers need to be aware of the social risks and costs of this type of anonymous, instantaneous reporting. They should engage with lawmakers, citizens and law enforcement authorities to determine and build in fail-safes that reduce false reports and discourage, perhaps even penalize, biased targeting.


Nikki Williams
is a bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at nbwilliamsbooks.com.