
Are You Facebook's Guinea Pig?

 

Ever started typing a Facebook status update, only to think better of it and hit backspace? I know I have. Facebook knows, too, because the social media monolith recently analyzed these “aborted” status updates from nearly 4 million of its users, then published a research study about what we’re doing — or not doing, to be more precise. That’s right: Facebook may be using you for research without your explicit consent.

“Why u mad?” Facebook might respond in Internet parlance. After all, its researchers used the data anonymously, and you opted in when you signed up … kind of. Facebook’s Data Use Policy — a 9,000-word behemoth you probably never read when you joined — says, "We receive data about you whenever you use or are running Facebook," and that information could be used for "internal operations, including troubleshooting, data analysis, testing, research and service improvement."

But anybody who’s taken Psych 101 knows the first step of a research study is obtaining informed consent from participants. Burying this wording deep in a document most users don’t even skim is ethically questionable, especially at a time like this, when Americans are particularly protective of their privacy. According to a new poll by the Associated Press and GfK, Germany's largest market research institute, “61 percent [of respondents] said they prioritize protecting Americans’ rights and freedoms over making sure Americans are safe from terrorists,” the Boston Globe wrote recently. (This is up 2 percent from a similar poll five months ago.) And a research study on Internet user behavior is a far cry from spying on Americans to keep the nation safe.

So in the midst of a seemingly unending assault against personal privacy, how can companies like Facebook respect privacy concerns and proceed with research in a more ethical way?

Was it unethical?

I asked Dr. Annette N. Markham, a communications professor at Aarhus University and Loyola University who has written codes of ethics for Internet research, whether she thought Facebook’s behavior was ethical.

“This is precisely the challenging question, since it points to the difference between a social network site's legal rights and ethical practices. These are not always the same,” Markham replied via email. “In this case, Facebook has the right to use whatever data they collect since the terms of service allow them access. Was it the right thing to do? The answer will vary depending on who's being asked.”

If you ask writer Sean Rintel, he’ll say no. As Rintel recently wrote in Business Spectator, “This is at best a form of passive informed consent. The MIT Technology Review proposes that consent should be both active and real time in the age of big data.” As Facebook was founded in February 2004, that gives users a decade to forget what they agreed to — hardly ethical.

Why informed consent matters

The point of informed consent, writes Markham in an email, is that study participants know what they’re getting into before the research starts. She cites the Tuskegee syphilis experiment of the 1930s-’70s as evidence that research participants can suffer if they aren’t told what’s going on. In that experiment, 128 test subjects — poor African-American men from rural Alabama — died because the U.S. Public Health Service researchers never told them they had syphilis, nor were they treated for it, even though penicillin was available. In this case, lack of informed consent literally killed people.

Although no one is going to die because they didn’t realize Facebook is using their data, the Tuskegee syphilis experiment is a good reminder of why informed consent is so important. Its goal as a first step in research “is to preserve the autonomy of the participant and give them respect,” Markham explains. “Giving them adequate information to make an informed decision was the key. Here, we can and should take Facebook to task for not being clear about the intended use of data. Their defense that the data was anonymous is not adequate since this is not the ethical obligation under dispute.”

The irony of the situation

Facebook’s research paper, “Self-Censorship on Facebook,” discusses who is most likely to erase something before posting it, and under what circumstances. For example, the researchers found that men on Facebook censor themselves more than women do, especially if the guy has mostly male friends. Part of the researchers’ motivation, it seems, was to find out how to prevent self-censorship, because that means “the SNS [social networking site] loses value from the lack of content generation.” After all, Facebook can’t profit off you if you aren’t using the site.

But the very essence of an aborted Facebook status means users didn’t want anyone to see it: not their friends and not the Facebook overlords. People have heard “The internet is forever” and know that cached versions of web pages linger on after the original content is deleted. But it seems almost cruel that even our half-formed thoughts, updates that never saw the light of day, cannot stay entirely personal. “Is passive informed consent to collect data on technical interactions sufficient when users have actively chosen not to make content socially available?” asks Business Spectator. I say no. Arguably, most people don’t realize that things they type and then erase are still recorded; I believe the blame lies with Facebook for not clarifying this. Markham seems to agree. “The question is not whether the user consented to give Facebook the right to use all data, it's whether the use of such data as keyboard actions and backspacing was clearly understood by the user to be a form of data that would be collected,” she writes via email.

“In this case, it may not be a legal as much as [an] ethical question of responsibility and fairness,” she continues. “Frankly, we should continue to question the questionable ethics of creating lengthy and complicated blanket TOS [terms of service] that cover all manner of things, when everyone knows it's common practice to simply accept these without close scrutiny.” Indeed, Facebook's latest research seems symptomatic of a culture of websites intentionally employing a long Terms of Use page that they know very few users will read. So what’s a more ethical alternative?

A better way forward

Edward Snowden opened Americans’ eyes to the fact that our data isn’t private. To be sure, some respond with blasé resignation and references to George Orwell. But others are outraged and feel even more fiercely possessive of their personal information — particularly keystrokes they thought had been erased forever. Companies like Facebook need to understand that transparency around their privacy policies will not only boost user trust and prevent scandals and backlash down the line; it’s also simply more ethical.

Moving forward, Facebook and other sites should adopt much more explicit terms of use, as well as real-time informed consent before launching a research study. One good example is what happens when you authorize an app through Twitter. The Twitter user sees a screen very clearly spelling out what the app will and will not be able to do. For example, Twitpic will be able to post tweets for you, but it will not have access to your Twitter password. The list is short and easy to understand, and users know exactly what they’re getting into.

Rather than the arguably underhanded (ethically, if not legally) data collection process for its recent research study, Facebook should employ a similarly straightforward notification process for future studies. Select users could see a pop-up explaining that they’ve been randomly selected to take part in a study over the following two weeks, in which their data will be used completely anonymously. This at least would give users the choice of opting out. Isn’t that something everyone deserves?

As Jeffrey Rayport wrote in the MIT Technology Review, Big Data giants like Facebook should adhere to “something akin to the Golden Rule: ‘Do unto the data of others as you would have them do unto yours.’ That kind of thinking might go a long way toward creating the kind of digital world we want — and deserve.”

 

Holly Richmond

Holly Richmond is a Portland writer. Learn more at hollyrichmond.com.

Bot Journalism: Who’s Writing That Piece You’re Reading?

 

Back in the heyday of the journalistic newsroom, Walter Cronkite reigned supreme as “the most trusted man in America.” Millions flocked to their television screens daily to hear him report on current events, and whatever they heard, they took to heart. His compatriots in print journalism include names such as David Warsh, economic and political commentator for Forbes magazine and The Boston Globe; Anna Quindlen, the social and political commentator for The New York Times who won a Pulitzer in 1992; and Alix Freedman, who won the 1996 Pulitzer Prize for National Reporting and recently left The Wall Street Journal to become the ethics editor at Reuters. Even if these names are not familiar to you, you probably have a byline that you search for in print or online when you want news you can trust to be accurate. Over time, we form relationships with the individuals who bring us the news, relying on some more than others for their timely reporting, strict accuracy, insightful analysis or even, perhaps, their sense of humor.

Over the years, print and television journalists have enjoyed a friendly contest, each aiming to be at the forefront of news reporting by garnering more readers or viewers. But now print journalists have new competition: the writer-bots. Like the insidious threat in a sci-fi flick, these algorithm-based bot reporters are infiltrating the ranks of paper and online journalists with alarming speed. The really frightening part is that your favorite reporter could be a bot and you’ll never even know it.

Take journalist Ken Schwenke’s byline. It appears throughout the Los Angeles Times hovering over stories he didn’t write. Well, at least not technically, although perhaps “technical” is precisely the word to describe a bot-written article. Mr. Schwenke has composed a special algorithm, called a “bot,” that takes available data, arranges it, then configures it for publication instantly, accurately and without his further intervention. His bot is specific to earthquake data, so when a quake or tremor occurs, the program gathers the available information, compiles it in a readable format and publishes it in the Times under Mr. Schwenke’s name, sometimes before he’s even had his morning coffee.
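The mechanics behind a bot like Schwenke’s are easier to picture than they might sound. The snippet below is a minimal sketch of template-driven story generation; the field names and template wording are hypothetical, not his actual code:

```python
# Minimal sketch of a template-driven news bot, in the spirit of the
# earthquake bot described above. Field names and wording are invented.

TEMPLATE = (
    "A magnitude {magnitude} earthquake struck {distance} miles from "
    "{place} at {time}, according to preliminary seismic data."
)

def write_story(quake: dict) -> str:
    """Fill the template with structured data; no human intervention needed."""
    return TEMPLATE.format(**quake)

story = write_story({
    "magnitude": 4.7,
    "distance": 6,
    "place": "Westwood, California",
    "time": "6:25 a.m. Monday",
})
print(story)
```

Everything a reader sees is boilerplate wrapped around a data feed, which is why such stories can be published seconds after the data arrives.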

This kind of computerized reporting was first revealed a few years ago when a company called Narrative Science made headlines – literally. Its groundbreaking technology allowed customers to set up algorithmic newswires that would create automatic news articles from available data without the need for a flesh-and-blood writer. Initially, Narrative Science focused on sports statistics and stories, but they’ve since branched out into financial reporting, real estate writing and other industries.

Currently, Narrative Science’s technology produces a bot-driven story every 30 seconds or so. These articles are published everywhere, from highly regarded publications such as Forbes to myriad widely known and virtually unknown outlets, some of which are keeping their bot-story consumption on the down-low. While the company’s CTO and co-founder, Kristian Hammond, claims that robonews will not soon replace flesh-and-blood reporters, he does predict with dire certainty that a computer will win the Pulitzer Prize for writing in the next five years.

For a news agency or publisher, part of the draw of bot-based journalism is the lure of cheap writing labor. Narrative Science’s bot journalists can undercut even the most underpaid human writer. Here’s an example of one of their pieces for Builder magazine:

"New home sales dipped year-over-year in May in the New York, NY market, but the percentage decline, which was less severe than in April 2011, seemed to be signaling market improvement. There was a 7.7% decline in new home sales from a year earlier. This came after a 21.6% drop year-over-year last month.

In the 12 months ending May 2011, there were 10,711 new home sales, down from an annualized 10,789 in April.

As a percentage of overall housing sales, new home sales accounted for 11.4%. This is an increase on a percentage basis, as new home sales were 9.5% of total sales a year ago. Following a year-over-year decline last month, sales of new and existing homes also sank year-over-year in May."

While this isn’t exactly a stimulating read, or even that well written, it isn’t terrible. Factor in that a piece like this will cost around $10 for a 500-word article, while hiring a writer from one of the biggest online content mills, Demand Studios, will set you back $7.50 to $20, with an average article costing $15, and you have a formula, or perhaps an algorithm, for success.

Mr. Hammond says Narrative Science is simply filling the need for these figure-laden accounts of news that no journalist is currently covering while freeing up the reporting staff of their clients to do more in-depth research or analyze more complex data. While this may be true, I know at least a hundred would-be journalists who would jump at the chance to score a gig writing a recap of a Big Ten basketball game, a summary of trending investment strategies or a review of a local theater performance.

I would also bet that a human journalist would be able to inject some excitement into that real estate article, above, although Hammond points out that Narrative Science’s technology has now advanced to let clients choose a “voice” for their stories, with tones ranging from sardonic humor to pedantic narration. He believes so ardently in his technology’s burgeoning capabilities that he estimated that computers would write more than 90 percent of the news within the next 15 years.

And they are well on their way to that goal. The New York Times is one of the 30-some large publishing clients – including trade publisher Hanley Wood and sports journalism site “The Big Ten Network” – that subscribe to Narrative Science’s technology for stories. Concurrently, some media outlets like the Washington Post are using robot fact-checkers to double-check their data before publication. The Post’s program, Truth Teller, uses voice-to-text technology to transcribe speeches and cross-check claims against a database of information. The Post’s executive producer for digital news, Cory Haik, claims the goal is to “get closer to … real time than what we have now. It’s about robots helping us to do better journalism – but still with journalists.”
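In spirit, an automated cross-check like Truth Teller’s is simple to sketch. The snippet below is a toy illustration, not the Post’s actual system; the claim pattern and the “facts database” are invented for the example:

```python
import re

# A tiny "facts database": topic -> verified figure. Invented data for
# illustration only; a real system would query a curated fact store.
FACTS = {"private sector jobs": 4_500_000}

def check_claim(transcript: str) -> str:
    """Find a numeric jobs claim in a transcript and compare it to the record."""
    match = re.search(r"([\d.,]+)\s*million.*?private sector jobs", transcript)
    if not match:
        return "no checkable claim found"
    claimed = float(match.group(1).replace(",", "")) * 1_000_000
    actual = FACTS["private sector jobs"]
    # Flag the claim only if it is off by more than 5 percent.
    return "consistent" if abs(claimed - actual) / actual < 0.05 else "disputed"

print(check_claim("our economy has produced about 4.5 million private sector jobs"))
```

A check like this can confirm that a figure matches the record; what it cannot do is notice that the time window behind the figure was chosen to flatter it.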

While it’s true that robot fact-checkers can work much more quickly than their human counterparts, their mathematically driven methods only allow them to read data as black or white. The shades of gray, the nuances of speech and the minute manipulations of data by clever statisticians that are easily discerned by a human journalist, are lost to them. For example, in a speech given by Bill Clinton at the Democratic National Convention in 2012, he wasn’t fibbing when he said, “In the past 29 months, our economy has produced about 4 ½ million private sector jobs.” What he did do was obscure the truth by carefully setting his threshold of data at the 29-month mark. If he’d added just a few more months, the economic growth under Obama’s management would not have had that rosy look he was going for. A robot might not be able to see through that cunning rhetoric, but a person, like FactCheck.org’s Robert Farley, did.

The notion of bot-produced journalism is a polarizing concept for writers, editors and consumers alike. Craig Silverman, a writer for Poynter, lauds journalism bots, claiming that they are doing the grunt work, leaving the context and narrative to “real” journalists. He writes with starry-eyed passion, gushing about the potential for robots to help writers through superior semantic awareness and the ability to flag inconsistencies in previous reporting.

Emily Bell, professor of professional practice at Columbia Journalism School and director of the Tow Center for Digital Journalism, echoes his thoughts, but adds:

"Journalism by numbers does not mean ceding human process to the bots. Every algorithm, however it is written, contains human, and therefore editorial, judgments. The decisions made about what data to include and exclude adds a layer of perspective to the information provided. There must be transparency and a set of editorial standards underpinning the data collection."

Ah, that’s the real issue – transparency. Is it ethical to put a human byline on a bot-generated story when the byline represents someone that readers have come to know and trust, à la Walter Cronkite? To me, it is unequivocally dishonest to publish a story by a bot under a human byline. In my estimation this amounts to nothing more than plagiarism of the worst kind, in which not only is the original author (the bot or the bot’s creator) not credited, but the consumer of the article is duped into believing that a human being has carefully researched, compiled and checked the facts in the article and will stand behind them. Who is liable for an error produced by a machine-generated story? The writer whose byline appears? His editor? The bot?

The Society of Professional Journalists publishes a code of ethics that for years has been used by thousands of journalists, and in classrooms and newsrooms, as a guideline for ethical decision-making. Among its criteria is the admonition to clarify and explain news coverage and invite dialogue; to show compassion to those who might be adversely affected by news coverage; and to recognize that private individuals have a greater right to control information about themselves than do public figures. I am not sure that an algorithm has the capacity for compassion, the ability to invite dialogue or the cognizance of the difference between public and private figures that a human writer does, and this lack definitely puts the writer-bots outside the strictures of the modern journalistic code. Add to that the fact that you can’t be certain who (or what) is standing behind that byline and you have the potential for an anarchic, and untrustworthy, approach to news-gathering and dissemination.

Perhaps the problem brought to light by journalism bots goes beyond transparency issues. The trend toward public acceptance of fill-in-the-blank, impersonal reporting is like a subtle mind-numbing disease brought on by continual exposure to the cult of instant gratification perpetuated by the digital landscape. Could the fact that we’ve become so inured to snippets of brief, emotionless data make it easy for these bots to be successful in reproducing (and stealthily replacing) the stories of their journalistic human counterparts? Are our own standards of compelling, telling journalism being compromised to get more hits and claim a higher position in the search engine hierarchy? Are we losing appreciation for long-form content that requires immersion, thoughtful consideration and analysis?

Ken Schwenke was on to something when he blithely admitted that many people would never even pick up on the fact that they are reading robot-driven content, but inadvertently he has touched upon the real problem behind robonews. We are entering a new era of reporting where you can no longer rely on a flesh-and-blood journalist’s ethics, honesty and integrity. In fact, you can’t rely on the authenticity of the byline at all since at any given time you could be reading the musings of an algorithm-based writer-bot rather than a journalist you know and trust. Rest in peace, Walter, rest in peace.

 

Nikki Williams

Bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

Bitcoin’s Radical Ethics of Trust

 

“The root problem with conventional currencies is all the trust that’s required to make it work.” — Satoshi Nakamoto, writing in an online forum (2009).

“The State will of course try to slow or halt the spread of this technology, citing national security concerns, use of the technology by drug dealers and tax evaders, and fears of societal disintegration. Many of these concerns will be valid; crypto anarchy will allow national secrets to be traded freely and will allow illicit and stolen materials to be traded. An anonymous computerized market will even make possible abhorrent markets for assassinations and extortion. Various criminal and foreign elements will be active users of CryptoNet. But this will not halt the spread of crypto anarchy.” — Timothy May, The Crypto-Anarchist Manifesto (1992).

"All currencies involve some measure of consensual hallucination, but Bitcoin … involves more than most." — The Economist (2013)

Every transaction is about trust. As people in supermarkets swipe glossy plastic debit cards to buy groceries, we assume the machine will extract the right amount from our bank accounts. We believe the bank won’t run out of money. We exchange things for paper and electronic promises, trusting they are of equal value. And when we exchange goods, services, or products for currency, we often assume this currency’s value is relatively stable — that we will be able to spend it later without a problem. When using currency, we must trust that other people will continue to believe it has worth.

These moments of trust are stacked on a foundational belief in a central financial authority, something that regulates the value of our currency. Bitcoin, the nascent digital currency, seeks to do away with that reliance on a centralized, state-sanctioned regulator. Its creator expressly sought to make a currency impervious to questions of faith in the system.

But Bitcoin is about trust, even as it was originally intended to be the only currency that doesn’t require it. In fact, the premise of the currency is rooted in the notion that the state or a hierarchal organization is not necessary to govern the behavior of a group.

Bitcoin is designed as peer-to-peer, which removes the need for mediating central financial institutions. When you trade a Bitcoin for something, it goes straight from your online wallet to another person’s or group’s online wallet, with a mathematical system in place to prevent double spending. It is intended to be used just like other currencies, as a medium of exchange. It is still too new to be very liquid as an asset, but that’s changing: even though the price remains wildly volatile, sometimes swinging hundreds of dollars in a month, a swelling number of merchants and organizations accept Bitcoin as payment, and its liquidity continues to grow. Many believe it has the potential to become a full-fledged alternative to state-backed currencies.
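That “mathematical system” is a public ledger of hash-linked blocks. The toy sketch below, vastly simplified from the real protocol (no mining, no network, no signatures), shows the core idea: because each block’s hash covers the previous block’s hash, retroactively altering a recorded transaction is immediately detectable by anyone holding a copy of the ledger:

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    # Hash the block's contents, which include the previous block's hash,
    # so altering any earlier block changes every later hash.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev, "transactions": transactions}
    chain.append({**body, "hash": block_hash(body)})

def is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {"prev_hash": block["prev_hash"], "transactions": block["transactions"]}
        if block["prev_hash"] != expected_prev or block["hash"] != block_hash(body):
            return False
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 1}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 1}])
print(is_valid(chain))                         # True

chain[0]["transactions"][0]["amount"] = 100    # try to rewrite history
print(is_valid(chain))                         # False: the hash chain breaks
```

No central bookkeeper is needed: every participant can run the same validity check, which is what lets the network as a whole, rather than a bank, reject a doctored ledger.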

Although Bitcoin is designed to be used like other money, it is fundamentally a different type of currency than anything that came before it, untethered to a central issuer and anchored in a radically trusting way to cryptography and mathematics. Though its creators endeavored to sever the tie between money and the state, their creation still requires a faith in the collective.

Bitcoin was developed by a mysterious creator or group of creators called Satoshi Nakamoto, who (or which) outlined how it works in a white paper published in 2008. “Satoshi Nakamoto” is likely a pseudonym, and the creator(s) of Bitcoin remain essentially anonymous.

In the paper on Bitcoin, Nakamoto describes Bitcoin as a currency that doesn’t require trust, a currency so rooted in cryptographic proof it doesn’t rely on the good intent of a governing institution. It’s supposed to be a tool to avoid a central authority acting in bad faith, an elegant solution to our saggy, bloated, abuse-riddled global financial system. Bitcoin trusts a decentralized network to organize the currency, instead of a centralized state. It assumes a currency ruled by a computer protocol is as workable as a currency controlled by a centralized authority. For Bitcoin to work in the long-term, it requires an enormous amount of trust, and respect of that trust, from the parties using Bitcoin.

Though it’s now a magnet for venture capitalists, Bitcoin began as an experiment nurtured by people interested in establishing an alternative economic and political system divorced from the current global capitalist ecosystem. Bitcoin’s rise in popularity corresponds to a growing skepticism about the future of state-controlled currencies. As incidents like the failure of the banking system in Cyprus make the weaknesses in the current global banking system more apparent, the need for a viable alternative has grown more apparent, leaving room for a once-niche idea to gain mainstream recognition.

Many of Bitcoin’s enthusiastic supporters characterize the use of the crypto-currency as a political act, because it offers a radical alternative to state-supported currencies, the currencies people are indoctrinated to consider specially legitimate. Cody Wilson, who told Vice he wants to develop an ethics of Bitcoin, is one such supporter. Wilson is a crypto-anarchist who values Bitcoin for its potential to allow anonymous transactions that cannot be traced by the government. Wilson is developing a digital wallet system called DarkWallet to facilitate this anonymity, since the current iteration of Bitcoin isn’t as anonymous as many crypto-anarchist and libertarian supporters would like.

Crypto-anarchism is thought to have inspired Bitcoin, particularly a piece by Timothy May called “The Crypto-Anarchist Manifesto.” Crypto-anarchism is a version of anarchism that embraces cryptology software to wrest power from the state. Anonymity is a treasured tool used to evade the state. This school of thought serves as Bitcoin’s philosophical scaffolding.

Anonymity isn’t prized by crypto-anarchists because it helps illegal activities, but that’s certainly one of the side effects. And Bitcoin, as a decentralized cryptographic currency, is anonymous enough to have served as the primary payment system for the Silk Road, formerly the largest drug market on the dark web. The Silk Road was shut down by the FBI in 2013, but not before more than a billion dollars’ worth of Bitcoin was exchanged for drugs. The association with illicit activity gave Bitcoin notoriety, which helped fuel adoption, since it raised the currency’s profile — but it shouldn’t be conflated with Bitcoin’s ethics.

Bitcoin’s affiliation with illegal web ventures like the Silk Road should not color the currency a murky moral shade; like cash, it is harder to track and thus easier to use when you want to stay off the grid, but that says more about the spender than about what he’s spending. While Bitcoin’s conception was politically motivated, an individual, intangible bitcoin is an empty vessel; its only meaning is what we confer upon it. A bitcoin is just an idea, worth whatever we assign it. And the philosophical underpinnings of Bitcoin’s community are not about keeping the currency anonymous specifically to commit crimes; they’re about avoiding surveillance.

However, the importance the community places on avoiding surveillance has a dangerous side. Bitcoin’s ideological underpinnings belong to crypto-anarchists like Wilson, but the currency’s operational scaffolding is increasingly the property of venture capitalists, speculators and people who just want to generate capital, never mind what kind of capital it is. This co-option is a sign of Bitcoin entering the mainstream, but purists are concerned it will water the currency down. Wilson told a crowd in London that the capitalist interest could essentially neuter Bitcoin as a political currency, which is true. As I said before, despite the politics of its creator and its early adopters, Bitcoin is not an inherently political unit, and the lack of central authority was an originating feature but may not end up as a defining one if the currency continues to be adopted by large institutions and the players in the global financial market.

Perhaps if Bitcoin was primarily adopted by people with inflexible, ideologically consistent guiding principles for how to use it, the currency could be considered “ethical.” Perhaps if the mysterious Satoshi Nakamoto had regulation powers, it could become the currency some people want it to be. But Bitcoin has no regulations. It’s the ideal currency for the world’s worst people.

Bitcoin’s separation from the state, the thing that makes it a politically and pragmatically superior form of currency in the eyes of crypto-anarchists, could be what makes it dangerous for ordinary people to use. Because it is deregulated, abuses of the system can occur without penalty. Thefts of life savings can go unpunished. That is a problem, and it will continue to be one, because a lack of regulation is built into the idea of Bitcoin. And while major Bitcoin heists have thus far been relegated to places on the dark web, where there is little sympathy for the de-moneyed, the utter lack of a safety net could cause devastation as the currency grows in mainstream use.

Of course, Bitcoin is hardly the wide world’s lone currency with the potential to be abused. The global financial system is rife with abuses, and perpetrators of deeply unfair practices often (one might argue, usually) get away with it. So it’s not logical to condemn Bitcoin for its potential for abuses without acknowledging that even supposedly regulated financial systems have been thoroughly rigged. However, it’s equally illogical to ignore Bitcoin’s vulnerability, or the fact that its vulnerability comes part and parcel with its borderline-naive ethics of trust.

Without condemning Bitcoin for its potential to be abused, it is both possible and necessary to point out that the currency will only work if the trust users place in their fellow members of this financial ecosystem is upheld, and the utter lack of incentive to uphold it is frightening. There are no regulatory safeguards to keep Bitcoin from falling prey to corporate, capitalist groups looking for ways to game both the mining system and other Bitcoin users. The underlying ethics of the currency’s founders, alongside Bitcoin’s crypto-anarchist support system, insist that users can govern themselves better than the state, but this techno-utopian thinking requires a deep trust that, I fear, may be breached as the currency becomes enmeshed in the current economic system.

This nearly utopian vision of a perfect currency is almost surely going to be corrupted further, either by getting co-opted by the financial system it sought to disrupt, or by being cast aside. But the crusade of Bitcoin’s founder and crypto-anarchist backers has admirable intentions, and Bitcoin may serve as a template for a more workable version of currency in the future.

 

Kate Knibbs

Kate Knibbs is a writer and web culture journalist from the southwest side of Chicago. She probably spends too much time on the Internet.

Blameless: Vigilante Justice in Steubenville

 

The night of August 11, 2012, was inconsequential for most in Steubenville, Ohio. It was a balmy Saturday night. With the beginning of the school year and a new football season quickly approaching, it was to be a night of celebration for high school students in Steubenville. And for many involved in the events that transpired, it was a celebratory time. But for one 16-year-old girl from Weirton, a neighboring town in West Virginia, it would be life-shattering.

This young girl was raped by two high school football players from Steubenville that night. She was intoxicated at the time of the assaults, and the perpetrators carried her, unconscious, from party to party. She was first assaulted in the backseat of a car on the way to a witness’ home. One of the assailants, Trent Mays, vaginally penetrated the girl with his fingers after taking her shirt off and exposing her breasts, while his friends photographed and videotaped the incident. When they arrived at the witness’ house, Mays attempted to orally rape the girl by forcing his penis into her mouth. A second assailant, Ma’lik Richmond, also penetrated the girl’s naked, unconscious body with his fingers. The girl testified that she remembered only a brief moment of the night in question, in which she was vomiting in the street. She awoke naked, confused and unaware of the violations that had occurred the preceding night, but deeply worried that something horrible had happened.

Almost as disgusting as the acts themselves were the comments made via text and social networking sites such as Twitter, YouTube and Facebook. “I shoulda raped her now that everybody thinks I did,” Mays texted friends. Michael Nodianos, a former Steubenville baseball player, tweeted, “Some girls just deserve to be peed on,” in reference to the victim. This statement was retweeted by none other than Mays himself, along with several other peers. “I have no sympathy for whores,” another friend tweeted. A short video was later posted on YouTube, in which Nodianos and his friends discuss the incident in a lighthearted, jovial tone. “They raped her quicker than Mike Tyson raped that one girl,” Nodianos joked. “They peed on her. That’s how you know she’s dead, because someone pissed on her.” In another text, Mays described the 16-year-old girl as “deader than Caylee Anthony.” A photograph posted on Instagram showed the victim, seemingly unconscious, being carried by her wrists and ankles by two teenage boys.

While social media served to humiliate the victim, it also played an integral role in exposing the crimes of these men and the subsequent cover-up. It’s also clear that a large amount of national outrage was fostered in part by the efforts of Anonymous, particularly KnightSec, an offshoot of the hacker collective. In December 2012, KnightSec hacked into an unaffiliated website and posted a message demanding that the school officials and local authorities involved in the cover-up come forward and apologize for their actions. Anonymous released a subsequent video on the web, threatening to leak the names of alleged participants if these demands were not met. Deric Lostutter, one of the Anonymous hackers involved in the incident, was later raided by the FBI.

Both Mays and Richmond were convicted as minors on March 17, 2013. Both were given the minimum sentences for their crimes. Richmond was released in early January due to good behavior, having served less than a year. Mays was given two years in juvenile detention on the count of his possession and dissemination of illicit pictures of the underage girl, constituting child pornography. If these sentences seem astonishingly light considering the circumstances, it's because they are. The seriousness of the rape charges in this case is further undermined by the fact that Lostutter, the very activist who leaked evidence that led to the rapists’ conviction, now faces more prison time than the rapists he helped to expose.

The alleged cover-up is still being investigated. A special grand jury was convened to determine if the coach and other school officials were involved. A series of indictments followed. Thus far, several officials have been indicted on charges such as obstruction of justice, evidence tampering and perjury. Astonishingly, Steubenville City Schools’ superintendent Michael McVey was indicted on several charges related to an entirely different rape case.

Rape is not an easy topic to write about. For the victim, it is a complete loss of control. It is a sick, horrendous violation of personhood. For the attacker, it’s a thoughtless fulfillment of the basest of sexual urges. It is a blatant lack of human recognition, a proverbial spitting in the face of everything upon which society is founded. It is indefensible. But of course, it’s more than that. Beyond the conceptual notion of rape is the actuality, an unspoken truth that one simply cannot express the horror and despair that comes along with such a traumatic, shattering event, all the more so when the victim is forced to relive the horror via social networking gossip. But that doesn’t mean we shouldn’t talk about it.

Philosopher Immanuel Kant formulated the following maxim: "Act in such a way that you treat humanity, whether in your own person or in that of another, always at the same time as an end and never merely as a means." And what is rape but the ultimate betrayal of this very idea? Rape is treating a person as a means to an end — in this instance, sexual gratification — rather than as an end in his or her own right. It sounds abstract, but it really isn’t. It is a practical, almost intuitive sense of morality that dictates we must treat one another as people, not as objects, not as playthings for mere amusement. So why is it that I had never heard of this maxim until my sophomore year in college? Why is it that boys are not routinely instilled with this value? Instead, they are taught that might makes right, that physical prowess is the key to life. They are instilled with the idea that a woman’s place is on the sidelines, that women are, essentially, secondary to men.

In the clash between privacy and justice, where is the line? To what ends can one justify breaking the law, when it is the law itself, or perhaps the human application of law, that is failing our society? It is, after all, a human world in which we live, full of imperfect beings with their own biases and preconceptions. That is why we have a justice system with a clear set of standards in which the accused are innocent until proven guilty. But in a town where football players are routinely held up as the masculine standard, is it any wonder that something like this occurred? When culture gives such gravitas to a mere game, heralding boys as heroes and cherishing physicality over intellect, it is a sure thing that monsters will be created. That is what’s occurring all over the country. When a town rallies behind rapists, defending its beloved football players despite an overwhelming heap of evidence, it is clear that the game is rigged.

The morally relevant distinction between child and adult here is not the number of years these men have lived on this earth, but perhaps a quality of innocence distinctly lacking in them, as evidenced by their crimes. These high school football players committed heinous, adult-natured crimes and should have been tried as such. Their remorse is irrelevant. Their future football careers are completely beside the point. The damage they caused this girl cannot be undone. They should be in prison for a good portion of their lives and they should be labeled as sex offenders wherever they go for the entirety of their small existence. This is what ought to happen to rapists. Instead, one is free and the other will be incarcerated for merely another year, while a Good Samaritan faces potentially 10 years in prison for helping to shed light on their despicable acts.

A culture that condones rape is simply not worth preserving. A society that defends rapists and blames victims is one that ought to be admonished. That is a wrong that Anonymous attempted to make right in Steubenville. When the justice system fails, it creates a need for vigilantism. And let’s be clear: The justice system in Steubenville has failed tremendously. This is precisely what compelled KnightSec to act. But considering the circumstances, this ought not be considered a victory for Anonymous, but rather a small outflanking of sorts. When a convicted rapist goes free after less than a year, something is terribly wrong. But the paradigm is changing. We are taking part in an age that has the potential for moral evolution beyond rape culture, beyond the victim blaming, beyond the rape apologists. Unfortunately for the victim, it isn’t changing quickly enough. She will go about life knowing that an entire town valued athletics over her welfare. There are no words for that kind of anguish.

David Stockdale

David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch.  Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at dstock3@gmail.com, and his URL is http://davidstockdale.tumblr.com/.

Can We Blame the Cloud?

 

Editor’s note: In the wake of the scandal surrounding the hacking of nude celebrity photos, the CDEP features a series of essays analyzing the actions of the various actors involved. Two weeks ago, Noah Berlatsky considered the ethical responsibility of those who search for and view the hacked photographs. Last week, Nikki Williams argued that leak victims misjudged the expectation of privacy that exists in the cloud. This week, Mary McCarthy addresses the role of Apple in this scandal.

 

Starting August 31, 2014, the celebrity nude photo scandal dominated news headlines after a post on 4chan leaked a large cache of private pictures of celebrities. The photos were quickly shared on social media sites, most notably Twitter. In one of the early reports on the scandal, Gawker reported, “Posters on 4chan and Reddit claimed that the celebrities were hacked through their iCloud accounts, though that hasn't been verified, and the method is unclear.” Details were sketchy when the story first broke. Although the photos were removed rather quickly after threats of lawsuits, the viral cycle of social media sharing had already done the damage and the photos were spread widely on the Internet.

Initially, “the Cloud” was blamed for the invasion of Hollywood privacy; the finger-pointing at Apple was a knee-jerk Orwellian response, an Eggers-esque convenience for the masses who clearly don’t understand the technology involved. In fact, no one was as quick to blame the hackers as they were to blame the devices. But is Apple really responsible for the damage caused by the hackings? PC Magazine reported that security experts were theorizing celebrities were hacked while accessing an open Wi-Fi network at the Emmy Awards event, making their usernames and passwords more vulnerable to an attack.

Apple didn’t waste time responding. In a statement three days after the photos were published, the company said:

“We wanted to provide an update to our investigation into the theft of photos of certain celebrities. When we learned of the theft, we were outraged and immediately mobilized Apple’s engineers to discover the source… we have discovered that certain celebrity accounts were compromised by a very targeted attack on user names, passwords and security questions, a practice that has become all too common on the Internet. None of the cases we have investigated has resulted from any breach in any of Apple’s systems including iCloud or Find my iPhone. We are continuing to work with law enforcement to help identify the criminals involved.”

But the scandal came at a bad time for Apple. With the new iPhone 6 scheduled for a splashy release only a week later, Apple couldn’t take a chance customers would cancel pre-orders for expensive new devices. Just a day before the September 9, 2014 product release announcement, and only ten days after the celebrity photo scandal, Apple added a layer of iCloud security to its devices.

According to Macrumors, Apple had begun sending out email alerts when personal iCloud services were accessed from Internet browsers, so users could be notified when someone unauthorized tried to access their accounts. It seemed clear that Apple was making frantic efforts to pay attention to not only the security of its iCloud services, but also to the public’s level of confidence in them.

In another public relations outreach resulting from the celebrity photo hacks, Apple CEO Tim Cook gave an interview, published in the Wall Street Journal on September 5, 2014, to tout the new security measures. He was clearly in defense mode in agreeing to the interview, since Apple typically releases all its own news. Walking the line between blaming celebrities and ensuring Apple wasn’t blamed, Cook made it clear that the breach was a result of hackers obtaining user IDs and passwords, and not of any security failure on the part of Apple and its servers.

Although Apple apparently wasn’t directly to blame for the leaks, it responded by reassuring customers and even adding a layer of security for them. While Cook noted that Apple added Touch ID fingerprint password technology when the iPhone 5S came out, he emphasized that the company continues to improve security, requiring multiple layers of sign-in authorization. Cook stressed that the key issue the company planned to address revolved more around personal or “human” measures than around purely technological ones. He admitted the company should be held accountable for making information available to customers so they are aware of how easy it can be for hackers to attack accounts that are not protected by strong passwords.

“When I step back from this terrible scenario that happened and say what more could we have done, I think about the awareness piece,” he said. “I think we have a responsibility to ratchet that up. That's not really an engineering thing.”

Apple emphasized to a number of media outlets that the company continues to work with law enforcement agencies to identify the hackers and their methods for obtaining the private data. A new round of naked celebrity photos was released on September 21, 2014, by which point the first breach had evolved into an ongoing FBI investigation. There has been little news regarding the identity of the hackers behind the large-scale breach, no major follow-up articles on the case’s progress or results, and no clear end to the release of additional photos.

Apple is correct in acknowledging that it has a responsibility not just to protect celebrity nudes, but to distribute information so that all customers know the importance of password protections. It may be the bright side to the hackings: the public recognition that “the Cloud” needs to do more to protect privacy. The public has a right to know that the devices on which it spends hundreds and sometimes thousands of dollars in a given year are safe from being violated by savvy digital predators. Children use these devices. Adults should be able to use them in whatever way they wish without concern for privacy invasion. It’s reminiscent of the Spiderman philosophy: “With great power comes great responsibility.” Apple can and should do everything in its power to ensure the privacy of its loyal consumers.

 

Mary T McCarthy

Mary McCarthy is Senior Editor at SpliceToday.com and the creator of pajamasandcoffee.com. She has been a professional writer for over 20 years for newspapers, magazines, and the Internet. She teaches classes at The Writer’s Center in Bethesda, Maryland and guest-lectures at the University of Maryland’s Philip Merrill College of Journalism. Her first novel The Scarlet Letter Society debuted this year and her second novel releases in 2015.

Codes of Ethics for Online Reviewing

 

The Internet has made reviewers of us all. Every positive or negative consumer experience can be shared immediately. The law of averages suggests that in the long run these reviews, when taken together, will provide an accurate reflection of consumer experiences. However, that does not absolve individual reviewers of certain ethical standards and obligations.

With 85 percent of consumers reading Yelp, reviewers have an obligation to be honest, disclosing any bias or conflict of interest. Online reviewers may not be bound by the Association of Food Journalists’ code of ethics, which states that writers must use their real names, fact-check their info, and visit a restaurant multiple times—or even the Food Blog Code of Ethics, but online reviewers should adhere to the following ethical standards:

  1. Disclose clearly if you received payment, freebies, or other compensation
  2. Don’t praise a business if you’re personally connected

The FTC requires bloggers to disclose any compensation. To not do so is breaking the law, but interpretations of that vary. One amateur fashion blogger I follow indicates clothes were gifts with a small “c/o [brand]” at the end of the post in question. But “c/o” might not be understood by casual readers to mean, “They sent me this, so I’m giving them free publicity.” I know a blogger’s gotta eat, but reviewers should go out of their way to be transparent. Even if you technically aren’t breaking any laws, being shady about free stuff is a good way to ruin your reputation and lose readers.

Yelp reviewers should adhere to the same standard of disclosure, but sometimes the lines are blurry. Yelp Elite members, for instance, are regularly invited to restaurants and bars for free tastings. I’m one such member, but I’d never gone because it felt underhanded to me – “There’s no such thing as a free lunch” and all that. At my first event last month, I asked more experienced Yelpers if we were, indeed, expected to write glowing reviews in return for free mini-pies and cocktails (even though that had never been explicitly stated), and the sentiment was yes. A simple “I ate for free, thanks to a Yelp event” in a review would probably suffice, but it still made me uncomfortable.

Whole Foods’ CEO, John Mackey, caught a lot of flak in 2007 when it was revealed that he’d been extolling Whole Foods on Internet forums for the past eight years under the name “Rahodeb” (an anagram of his wife Deborah’s name). The New York Times reported that Mackey wrote more than 1,100 posts on Yahoo Finance’s online bulletin board, many gushing about Whole Foods’ stock prices. (He even complimented his own appearance when another commenter insulted it, writing, “I like Mackey’s haircut. I think he looks cute!”)

An NBC broadcast at the time mentioned the Securities Exchange Act of 1934, section 10 of which prohibits “fraud, manipulation, or insider trading.” Mackey’s actions certainly seem to have been manipulative. As former SEC chair Harvey Pitt told NBC, “There’s nothing per se illegal about [Mackey’s actions], but it’s very clear to me, if you’re not willing to do it under your own name, you really have to ask why you’re doing it at all.”

Mackey later defended himself by saying, “I never intended any of those postings to be identified with me.” Well, obviously. Yet the truth came out, making Mackey look dishonest, untrustworthy, and morally suspect. Not only was it unethical, but it also tainted the Whole Foods brand. Translation: If your sister’s vintage boutique needs more customers, don’t write her a fake Yelp review.

  3. Similarly, don’t smear a brand or restaurant out of spite
  4. Realize the ramifications your review could have

Mackey also dissed Whole Foods’ then-competitor Wild Oats in 2005, writing this on a Yahoo stock message board as Rahodeb:

Would Whole Foods buy OATS [Wild Oats’ stock symbol]? Almost surely not at current prices. What would they gain? OATS locations are too small...[Wild Oats management] clearly doesn’t know what it is doing...OATS has no value and no future.

Only two years later, Whole Foods was buying Wild Oats for $565 million in a merger challenged by the FTC. Was Mackey trying to drive people away from Wild Oats so his company could buy it at a cheaper price? Only Mackey knows, but it doesn’t make him look very good.

On a personal level, a former friend of mine was bitter toward a former employer after she quit. “I’d write them a bad Yelp review, but they’d know it was me,” she said before giving me a meaningful look. “YOU should write one about how they’re so awful!” she said, only half joking, even though I’d never used the company’s services. (I didn’t.)

I can’t be the only one this has happened to. In fact, a look at any business’ filtered Yelp reviews – the ones you don’t see unless you go hunting for them and enter a Captcha (a challenge-response test used to determine whether the user is human) – shows that fake bad reviews (likely by competitors and their families) are all too common. If you’ve got beef with a company, personal or professional, don’t let it color your actual experience there. No inventing cockroaches to scare diners away from your rival café! Sure, write a negative review if you received legitimately poor service or products, but take it up with management if it’s a bigger issue.

At the risk of sounding overdramatic, these are real people here; real families’ livelihoods are at stake. In an earlier piece for the Center for Digital Ethics, Kristen Kuchar noted that the addition of a single star in a Yelp rating could potentially boost business by 5 to 9 percent, according to Harvard Business School. Should one irate customer have the power to get a waiter fired or put a bistro out of business? Your words can live on even if you delete your review, and you never know the impact they could have on someone down the road.

The consumer mindset means we think opening our wallets entitles us to the royal treatment, but remember that everyone has bad days. If at all possible, visit a business more than once to get a more fully rounded view of the experience there. If nothing else, it’ll make your review more helpful to others.

 

Holly Richmond

Holly Richmond is a Portland writer. Learn more at hollyrichmond.com.

Confession Ethics

 

Do you want to hear a secret? Can I confess something to you?

It’s a rare person who can turn down such an opportunity, the chance to hear a deep, dark and — one can only hope — excruciatingly intimate revelation.

Print publishers have capitalized on this aspect of human nature. Cosmopolitan magazine promises that “readers share their most shocking stories and steamiest secrets” on its “Cosmo Confession” page, and Seventeen’s regular feature “Traumarama” says, “You’ll laugh out loud (or cringe).” In a way, “Dear Abby” advice columns, where readers divulge personal problems under pseudonyms, are cut from the same cloth.

It’s no wonder, then, that the Internet has become a mecca for people who don’t want to keep their private thoughts to themselves.

Consider the PostSecret website, which exhibits anonymous secrets mailed on artsy homemade postcards. On March 9: “My mother only sleeps with married men. I’ve lost all respect for her.” “Telling people I’m an atheist is going to be WAY harder than coming out ever was!” and “I make both our lunches every day—But I only wash my apple!” Since 2005, the project has racked up millions of secrets, even more site visits, five books and speaking tours for its creator and curator, Frank Warren.

Despite some heavy secrets throughout the years — about suicide, abortion, betrayals, you name it — the project has avoided major controversy, and there’s no public evidence of lawsuits.

For a brief time, Warren opened a comments feature on the website, but he ultimately decided to disable it. He explained his reasoning in an interview with Mediabistro:

“Some of [the comments] were very harsh and judgmental, and I didn't want people to feel like they couldn't trust me with their secrets, that the place wouldn't be safe any longer,” Warren said. A short-lived PostSecret app had the same fate, for the same reason.

Warren’s philosophy about comments is not universal, especially when confession pages reside on social networks, like Facebook, Twitter or Reddit, where comments are a defining part of the user experience.

Confession websites are popular, in particular, among college communities. A PostSecret could come from anyone in the world. But on a college confessions page, the scandalous disclosures come from people that share your location and experiences, people you might know. It’s an alluring premise.

Usually, enterprising (or maybe just nosy) students create these pages on social networks. They share a link to a Google form or a survey tool. Confessors use that link to submit their messages anonymously for the site administrator to post, thus preserving their identities. The administrator might publish every message received or selectively decide which ones to show the public.

I searched for my alma mater’s confessions page on Facebook and found a wide variety of secrets spilled in the last month. There are comical laments about campus amenities (“I find it extremely frustrating that they have grapefruit in the dining hall and no serrated spoons for it”), tender pleas to humanity (“Somebody love me”), vehement statements about ethnic conflict (“All the Jews on this campus who are fighting for human rights in support of Israel need to stop being naive”) and some posts not technically confessions at all, but instead frank words of encouragement (“ya'll should be your goddamn fabulous selves”). Many covered stereotypical college topics: brief anecdotes about sex, partying, smoking pot; complaints about classes, professors and roommates. Some funny, others sad and many vulgar.

A site’s middleman — that administrator — might censor revelations that are obviously made up, or ones that are particularly derogatory. But that task is subjective. There’s no doubt that some confessions, especially ones that touch on topics like gender, race and ethnicity, will offend lots of readers. Also potentially contentious are the comments. Interestingly, on a site like Facebook, users can’t comment anonymously; their name is attached to whatever they say. But that doesn’t always make people censor their responses to confessions.

Media outlets have suggested that these sites can be venues for hurtful discourse and even cyberbullying, causes for concern for university officials. Colleges can block them from appearing on campus networks, but they can’t delete third-party websites or stop students from accessing them off the school network.

However, when minors are involved, as with high school confessions pages, schools and parents have more control. Many anti-bullying laws cover cyberbullying (sometimes phrased as “electronic” forms of bullying), usually when it targets juveniles in or around schools. These laws can give administrators grounds to ask social networks to control or remove confession sites.

According to its community standards page, Facebook will “take action on all reports of abusive behavior directed at private individuals.” The company also says it will “remove content and may escalate to law enforcement when we perceive a genuine risk of physical harm, or a direct threat to public safety.”

But in general, the First Amendment protects students contributing to confession pages. Colleges may worry that these websites are bad for their brand, but they can’t stop students from writing about their escapades and controversial opinions in the public sphere.

Contributors do have to be careful about identifying others in their confessions or comments. It’s a guideline many of these sites establish outright. It’s a smart move from a legal standpoint.

Say someone reveals a secret about a specific person. If the statement can be proven defamatory — false and damaging to one’s reputation — then the victim could sue for libel. Even if the statement is technically true, if the identified person interprets it as verbal assault, harassment or intimidation, they can pursue legal action against the confessor.

A site administrator’s promise to preserve anonymity is not always a guarantee in these scenarios. Even through anonymous submission forms, investigators can sometimes trace IP addresses.

Madison Confessions, which claims to be the “largest college confessions page in the nation,” makes this explicit. Its submission form outlines several rules for its University of Wisconsin-Madison users, including “Never state specific names” and “Don’t confess about anything extremely illegal.”

The website doesn’t live on a social network; it’s privately hosted, with a lengthy terms of use agreement. Among other provisions, the contract states, “We reserve the right to disclose any information in our possession if required to do so by law or in the good faith belief that such action is necessary.”

It’s interesting to consider hypotheticals — defamation lawsuits, murder admissions — but the vast majority of submissions don’t enter such territory. Perhaps the solution to more common quandaries, such as outcry over crude secrets and offensive comments, is to ignore the sites. People who are oversensitive shouldn’t visit the sites. College administrators shouldn’t give them attention and, by extension, press coverage. It’s an over-simplified resolution, but one that could subdue many critics.

Warren, of PostSecret, has said that confessing secrets can be therapeutic; it’s a way to connect people. That connection between writers and readers is a point for ethical consideration itself. Will college students with literally life-altering confessions, like thoughts of suicide or stories about abuse, turn to these websites for a cure to their anguish? Will support from classmates through comments provide the help these confessors really need?

 

A Practical Guide to Digital Journalism Ethics

 

A Practical Guide to Digital Journalism Ethics is a collection of essays published by the Center for Digital Ethics and Policy, which was founded through the School of Communication at Loyola University Chicago in an effort to foster more dialogue, research, and guidelines regarding ethical behavior in online and digital environments. The book was edited by Don Heider, Dean of the Loyola Chicago School of Communication and founder of the Center for Digital Ethics and Policy, and John D. Thomas, former long-time editor of Playboy.com and a frequent contributor at the New York Times, Village Voice and Chicago Tribune. This book is a collection of the most pertinent and practical pieces published by the Center (digitalethics.org), designed to convey actionable knowledge to those writers and editors looking to steer an ethical path through the ever-more thorny thicket of online publishing. The essays are organized around three major themes: Professional Standards, Transparency and Privacy.

For more information or to purchase this collection of essays, click here.

http://www.amazon.com/Practical-Guide-Digital-Journalism-Ethics-ebook/dp/B00NB9…

Tweeting Private Conversations #DirtyLaundry

 

Whether you’re looking to recruit followers or itching for attention, entertaining Twitter users will get you noticed. According to Business Insider, regularly publishing interaction-provoking tweets will give your online popularity the boost you desire. And when your life is lacking in the interest department, re-creating the fight you see in a parking lot or the creepy pickup fail you hear at a supermarket is a tempting alternative. You can tweet an exchange in real time and know that, eventually, someone will hear you chirp. Unfortunately, the thrill of sharing riveting stories doesn’t always make up for accompanying doubts. After all, is it right to share someone else’s private discussion, much less trivialize it?

This is just the sort of question raised after comedian and writer Kyle Ayers live-tweeted a breakup for his followers. Ayers trod the fine line between insensitivity and entertainment when he plugged away on his phone, rushing to amuse his followers with the argument he was witnessing. The relationship-ending fight was low-hanging fruit, an escalating dispute between two seemingly young adults who made flighty comments and managed to fit “like” into the most serious of sentences. With a combination of eavesdropping and Twitter, Ayers was able to get a quick laugh from his online audience by posting what he heard. Like a true entertainer, he connected with his followers as he shared a relatable, emotional event, cropped it and highlighted its humor. In the process, he turned someone’s anxiety-inducing experience into a mix of Twitter sound bites that made a fuming, flustered couple sound less pained and more bumbling.

When the girl, alternately referred to as “Rachel,” asked her boyfriend whether they were going to live together, he clumsily replied, "Yeah but what is, like, living together? Like what's an apartment mean? You know what I'm saying?" And when Rachel told him that she couldn’t continue with the relationship, his response was, “Are we getting a pizza or what? I don’t mean to change the subject but are we?” The full exchange made for a good laugh if you were in need of some comedic relief — unless, of course, it was your breakup being discussed on CNN and dramatized in a video on New York Magazine’s website.

So while Ayers’ tweets weren’t egregiously mean, and he didn’t identify the couple or pinpoint the rooftop, he did share a distressing, personal moment that wasn’t his to share. And he shared it with the Internet. It’s one thing to laugh at yourself by making light of your personal problems, but this wasn’t his drama, and he didn’t let the couple decide who got to hear about it and which snippets of the breakup were publicized.

When someone tweets a conversation, the story becomes a customized art piece. In this case, Ayers turned the argument into entertainment by downplaying its seriousness and leaving out details such as hurt glances or wavering voices. Because he stuck to tweets that portrayed the argument’s clichéd and flighty nature, it was easy to laugh about the situation without feeling much guilt. Readers empathized just enough to tune in, but not enough to think about the invasiveness and, well, let’s just say it, complete creepiness of tuning in, uninvited, to someone else’s drama. We’ll never know whether Rachel shed any tears or Rob felt horrible about hurting Rachel, but the tweets would not have been as funny if we did. We might not have meant any harm, but we did enjoy the couple’s dirty laundry until the final disconcerting tweet: “Thanks for following #roofbreakup.”

Ayers stands by his tweets and defends his updates by saying that he didn’t reveal much about the couple. “I didn’t live-tweet a couple’s private conversation; I live-tweeted any two people’s breakup conversation,” he writes in a CNN opinion piece. To some extent, he has a point. As far as we know, he made the whole thing up, and some of us are getting high and mighty over nothing. He is, after all, an aspiring entertainer. With so little revelation, he gave his followers what so many of them crave — being in on a dramatic secret. We got to witness universal relationship pains and observe how others dealt with their versions of our emotional roadblocks. We even got a good laugh out of it and began to doubt whether our relationship traumas, which are often more common than we think, really deserved our distress in hindsight.

But despite Ayers’ conviction that his tweets were innocent, there’s something that just doesn’t feel right about publishing personal conversations, especially ones that are not our own. As much as I enjoy the genius of scornful tweets and mock newspaper feeds, I can’t help but wonder whether it’s right to make someone feel foolish for the sake of entertainment. Legally speaking, retweeting something you overhear in a public space is typically fair game, but some compassion is in order because no one’s filter is foolproof.

Even those who eavesdrop for a living can forget that someone may always be listening. In October, Tom Matzzie, former director of the political group MoveOn.org, was riding Amtrak when he overheard former NSA director Michael Hayden chatting about politics and surveillance on his cellphone. It was almost too perfect. After listening in for several minutes, he began live-tweeting the conversation, pointing out that Hayden made “disparaging quotes about admin,” and that he “sounds defensive.”

Unlike most of us, Hayden had people watching out for him, and someone tipped him off about the affair. He stood up, approached Matzzie and asked him if he wanted a real interview. When Matzzie told him that he wasn’t a reporter, Hayden responded with an increasingly self-evident truth: “Everybody’s a reporter.”

Your chances of getting attention with an equally juicy story are slim, but that doesn’t mean you have to take cheap shots to come close. Users who rely on Twitter for promotional entertainment can still get personal, even laugh at others with some sensitivity if they are discerning enough to choose the right phrases and pick the right subjects. Achieving this balance is possible, but people have to be willing to pass on some easy laughs and perfect dramas.

If done right, revealing tweets may look like those of Justin Halpern, a comedian who gained moderate fame, three million followers and two book contracts after starting a Twitter page about his dad’s grumpy quotes. Even before he published his popular “Sh*t My Dad Says” (with his father’s permission), Halpern knew what was fair game, as do most people who are willing to think about it. His quotes included plenty of curse words and more than enough crassness to embarrass a 73-year-old doctor of nuclear medicine who cares about public opinion. Luckily, Halpern’s dad doesn’t care about public opinion, and the page paints him as someone who mocks the world more successfully than the world could mock him.

Halpern secretly published his dad’s comments without making him the butt of our jokes, and it made him no less interesting. He shares how his father aptly taught him to deal with visiting family members by asking, “What the [heck] makes you think Grandpa wants to sleep in the same room as you?” and how he managed to reassure him by saying, “Get married when you want. Your wedding’s just one more day in my life I can’t wear sweat pants.” Plus, we could all learn a thing or two about parenting from his little nuggets of wisdom. After all, “A parent’s only as good as their dumbest kid.”

It’s more than possible to entertain followers with private conversations, as long as you’re selective about stories and picky about your portrayal of others. That means having to miss out on some surefire hits, which is not a sacrifice all Twitter users are willing to make. It takes more effort, more talent and more willpower to say no to tweets that will grab people’s attention at the cost of others. But that is public privacy and respectable entertainment at its best.

 

Paulina Haselhorst

Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.

The Ethics of “Advertorial”

 

I was recently talking with a colleague about a design element of the Sunday New York Times Magazine. I walked over to where we keep our periodicals, grabbed the magazine and flipped through it to find the weekly full-page Q&A, which was the feature I was referring to.

But then something strange happened. After about eight or 10 pages, I realized that all I was seeing were ads and articles about high-end wristwatches. I then turned to the cover and saw that it was not the weekly Times magazine, but a "SPECIAL ADVERTISING SUPPLEMENT" devoted to expensive watches. I then put the supplement aside, found the actual Sunday magazine and showed my colleague the page I had initially been trying to find.

That Times supplement is a very slick example of what many people in journalism call "advertorial.” The Times has produced this kind of material for years (and so have many other major publishers), but in my opinion, the Times’ labeling of such material as not "real" editorial has become less and less clear and prominent, which is a troubling trend.

If the Times wants to print "advertorial," i.e., content provided by advertisers, that's fine and completely above board. However, when they do it they should go to great lengths to make sure readers know exactly what the content is and who produced it.

The trend is now being extended more into the digital space — some call it "custom content" and others refer to it as "native advertising." Basically, almost no one clicks on traditional banner and display online ads anymore. The trend is so prevalent the industry calls it “banner blindness.” Therefore, advertisers need new ways to get users to learn about their products and services. Thus, things like "Sponsored Blog Posts" are popping up to entice users to click on them.

Dan Greenberg, CEO of Sharethrough, a prominent builder and distributor of native advertising, defines the term as "a form of media that’s built into the actual visual design and where the ads are part of the content." For example, create a supplement that looks like the New York Times Magazine, put it in the Sunday paper and hope that readers will be more prone to read it. (It certainly worked on me.)

The New York Times recently announced that it is launching a new website redesign in early January 2014, and according to Mashable.com, it "will include content sponsored by advertisers, a concept known as native advertising." It's a slippery slope that the Atlantic's website dangerously slid down early last year.

On Jan. 30, 2013, Adweek.com ran an article titled, "After Scientology Debacle, The Atlantic Tightens Native Ad Guidelines; Sponsored content will become more prominent on the site." The article went on to explain, "A little over two weeks after The Atlantic got egg on its face over a sponsored Web post by the Church of Scientology, the media brand has issued new guidelines for so-called native advertising. … The issue — according to the outraged digerati but also by the Atlantic's own admission — was that the Atlantic violated the spirit of native advertising by giving a platform to a controversial institution that didn’t jibe with its intellectual tradition. Then it made things worse by censoring some of the negative reaction that filled up the comments stream.” One can only hope that the Times was paying attention and won't make any of the same mistakes.

But when you're running content produced by someone else on your site, mistakes are very easy to make, which is why the Federal Trade Commission updated its original 2000 guidelines in March of last year regarding these kinds of digital ads. Interestingly, a main focus of the FTC's new edict was the ever-increasing consumption of content on smaller and smaller devices.

In short, the smaller the screen, the harder it is to properly label where content is coming from. According to the FTC press release, "If a disclosure is needed to prevent an online ad claim from being deceptive or unfair, it must be clear and conspicuous. Under the new guidance, this means advertisers should ensure that the disclosure is clear and conspicuous on all devices and platforms that consumers may use to view the ad."

I know a lot of journalists who think this kind of content-rich advertising in general should be banned. They see it as deceptive and they also believe it encroaches on the integrity of their own work.

I couldn't disagree more. If you take that argument to its illogical conclusion, then why not ban advertising altogether? Advertisements are certainly a form of content in and of themselves and they almost always have an agenda — they just do it more concisely than the new native ad/custom content trend.

Personally, I think the FTC nails the issue with advice that this kind of content needs to be labeled clearly and conspicuously. If you tell consumers up front where the content on your site is coming from and who is responsible for creating it, then they can decide whether they want to read it or watch it. Today's consumers are very savvy when it comes to interpreting marketing messages. However, they have to know for sure what the source is before they click.

Native ads and custom sponsored content aren’t going anywhere. In fact, Peter Minnium, the head of brand initiatives at the Interactive Advertising Bureau, recently told the New York Post that, “Native advertising is going to fuel the growth of digital media….” If that’s the case, then publishers risk damaging the reputation of their own original editorial content if they don’t brand and label sponsored content extremely transparently. And rebuilding a professional reputation is much more difficult than building a piece of sponsored content.

 

John D. Thomas

John Thomas, the former editor of Playboy.com, has been a frequent contributor at the New York Times, Chicago Tribune and Playboy magazine.

Quoting Social Media

 

Last November, I wrote a piece about selfies for the Atlantic. I wanted to include a couple of great selfies that I'd found on social media. But I hesitated. Was it all right to just pull the images and run them on a mainstream site? Or did I have to get permission? Are posts on social media public, and as such fair game to be quoted and referenced by journalists? Or are they private communications, which can be ethically quoted (like emails) only if you receive permission?

There's no doubt that for many purposes, social media posts are treated as public utterances. Ilana Gershon, an associate professor at Indiana University in the Department of Communication and Culture, told me via email that "I have recently been looking at court cases in which employers dismissed employees for statements posted on social media (regardless of whether employers were allowed access to a profile by employees or not), and it seems to me pretty clear that the courts consider social media a public forum in which one can quote with impunity."

Journalists and bloggers also often "quote with impunity" from social media. For example, the Tumblr "Hello there, Racists" collects racist or homophobic posts from social media accounts, naming the users in question with the express purpose of shaming them. Obviously the people who posted the offending material were not asked for permission to reprint it. This is an extreme instance, but the general practice extends to other, more mainstream publications as well. At ThinkProgress in December, blogger Alyssa Rosenberg wrote a post on allegations about singer R. Kelly's history of abuse of women. In the course of the post, Rosenberg used a series of tweets by hoodfeminism and Guardian writer Mikki Kendall. In the tweets, Kendall talked about being approached by Kelly when she was a high school student on Chicago's South Side. Rosenberg did not ask permission to use the tweets. She treated them as public statements, much like other published articles she quotes in her piece.

Rosenberg declined to comment for this essay, but Kendall told me by email that she has objected to having her words used without her permission. "If you're going to treat someone as a source for an article they should know & consent to it. Especially in situations that can open them up to legal or social repercussions," she told me. "Fair use is a lovely concept for re-imagining fiction, not for putting someone's personal experience with a sexual predator into your article to give it a fresh twist without their consent. While it may be legal I don't think it's particularly moral." Slate Senior Editor Emily Bazelon made a parallel point in regard to the Hello there, Racists Tumblr, arguing that many of the people being exposed and shamed are minors who face long-term employment and personal consequences. "Internet vigilantism at the expense of kids is just a terrible idea, given their youth and the evidence that their brains aren’t fully developed, especially in the impulse-control regions," Bazelon argues.

Since lots of kids manage not to spew racist filth online, the argument from brain development seems a little dicey. And, as Kendall noted in her email, there's something uncomfortable about arguing that racists have the right to privacy "given the way so many POC's (people of color) Twitter feeds are treated." Still, while their focus is somewhat different, both Kendall and Bazelon are in agreement that quoting social media posts in a mainstream venue can expose people to unacceptable and even cruel personal and social consequences. According to Gershon, "Facebook, LinkedIn and many other social platforms allow people to have privacy settings that give them an illusion of control over who might be their audiences." When people write on Twitter or Facebook, they feel like they're speaking privately, to a select group. Publishing their words (with names and contact info) can seem like printing an overheard conversation; it's taking advantage of a presumption of privacy.

But should there be that presumption of privacy? Thinking you're speaking in private and actually speaking in private are two different things. When politicians accidentally say uncomfortable things near a live mic, no one thinks it's unethical to reprint it. Admittedly, politicians are public figures, but on the other hand, social media is a kind of constant live mic and that shouldn’t surprise folks who speak on it. As writer Roxane Gay told me in an email, "If someone has an unlocked social media account, I do think it is ethical to quote them without contacting them. The words exist in the public sphere and it is magical thinking to believe that there is some kind of privacy to public statements." Gay added, "It can be so overwhelming to have your words quoted by a major media source, and then have to face a vigorous response. In an ideal world, yes, people would get a heads up but is that an ethical choice? No. It's the polite choice. It is not unethical to forego that notification."

The problem here, I think, is that there isn't any one guide to how you should treat social media accounts. Personally, I'm a writer who uses social media as part of my work — if someone quoted my Twitter account, I wouldn't expect to be notified, any more than I expect to be notified when one of my essays is quoted. From that perspective, Rosenberg quoting Kendall, a widely published author, seems like it wouldn't require notification. But then again, as Kendall says, her tweets in this instance were very personal and very controversial; she was talking about sexual harassment and a public figure. Contacting Kendall seems like it would have been, as Gay says, the polite thing to do — and I'd argue that (contra Gay) politeness and ethics here shade into one another.

Along those lines, in my article on selfies, I did in fact contact the people whose images I used. Both of those people seemed to be posting for a relatively private audience; certainly neither was expecting to have their faces show up on a mainstream news site. In addition, the article was about how I liked and admired their pictures, so I didn't want to post the pictures and then find out that I'd offended. Asking permission seemed like the right choice to make, at least in this instance.

In quoting social media, then, it seems like journalists need to think about a number of factors. Is the person being quoted a public figure? Is the person being quoted a minor or an adult? What is the content of the post, and what negative effects will quotation have on the person who posted it? On the one end, it seems clear you should be able to quote John Boehner's tweets without asking his permission. On the other, quoting the Facebook posts of minors, complete with contact information, in order to ridicule them is very hard to justify. In the middle, the ground is murkier, which means that both folks using social media and journalists quoting social media need to think carefully before they publish.

 

Noah Berlatsky

Noah Berlatsky edits the comics and culture website the Hooded Utilitarian and is a correspondent for the Atlantic. He is working on a book about the original Wonder Woman comics.

Bitcoin’s Radical Ethics of Trust

 

“The root problem with conventional currencies is all the trust that’s required to make it work.” — Satoshi Nakamoto, writing in an online forum (2009).

“The State will of course try to slow or halt the spread of this technology, citing national security concerns, use of the technology by drug dealers and tax evaders, and fears of societal disintegration. Many of these concerns will be valid; crypto anarchy will allow national secrets to be traded freely and will allow illicit and stolen materials to be traded. An anonymous computerized market will even make possible abhorrent markets for assassinations and extortion. Various criminal and foreign elements will be active users of CryptoNet. But this will not halt the spread of crypto anarchy.” — Timothy May, The Crypto-Anarchist Manifesto (1992).

"All currencies involve some measure of consensual hallucination, but Bitcoin … involves more than most." — The Economist (2013)

Every transaction is about trust. When we swipe glossy plastic debit cards at the supermarket to buy groceries, we assume the machine will extract the right amount from our bank accounts. We believe the bank won’t run out of money. We exchange things for paper and electronic promises, trusting they are of equal value. And when we exchange goods, services, or products for currency, we often assume this currency’s value is relatively stable — that we will be able to spend it later without a problem. When using currency, we must trust that other people will continue to believe it has worth.

These moments of trust are stacked on a foundational belief in a central financial authority, something that regulates the value of our currency. Bitcoin, the nascent digital currency, seeks to do away with this trust in a centralized, state-sanctioned regulator. Its creator expressly sought to make a currency impervious to questions of faith in the system.

But Bitcoin is about trust, even as it was originally intended to be the only currency that doesn’t require it. In fact, the premise of the currency is rooted in the notion that the state or a hierarchal organization is not necessary to govern the behavior of a group.

Bitcoin is designed as peer-to-peer, which removes the need for mediating central financial institutions. When you trade a Bitcoin for something, it goes straight from your online wallet to another person’s or group’s online wallet, with a mathematical system in place to prevent double spending. It is intended to be used just like other currencies — as a medium of exchange. It is still too new to be very liquid as an asset, and its price remains wildly volatile, sometimes swinging hundreds of dollars in a month, but its liquidity continues to grow as a swelling number of merchants and organizations accept it as a form of payment. Many believe it has the potential to become a full-fledged alternative to state-backed currencies.

Although Bitcoin is designed to be used like other money, it is fundamentally a different type of currency than anything that came before it, untethered to a central issuer and anchored in a radically trusting way to cryptography and mathematics. Though its creators endeavored to sever the tie between money and the state, their creation still requires a faith in the collective.

Bitcoin was developed by a mysterious creator or group of creators called Satoshi Nakamoto, who (or which) outlined how it works in a white paper published in 2008. “Satoshi Nakamoto” is likely a pseudonym, and the creator(s) of Bitcoin remain essentially anonymous.

In the paper on Bitcoin, Nakamoto describes Bitcoin as a currency that doesn’t require trust, a currency so rooted in cryptographic proof it doesn’t rely on the good intent of a governing institution. It’s supposed to be a tool to avoid a central authority acting in bad faith, an elegant solution to our saggy, bloated, abuse-riddled global financial system. Bitcoin trusts a decentralized network to organize the currency, instead of a centralized state. It assumes a currency ruled by a computer protocol is as workable as a currency controlled by a centralized authority. For Bitcoin to work in the long-term, it requires an enormous amount of trust, and respect of that trust, from the parties using Bitcoin.

Though it’s now a magnet for venture capitalists, Bitcoin began as an experiment nurtured by people interested in establishing an alternative economic and political system divorced from the current global capitalist ecosystem. Bitcoin’s rise in popularity corresponds to a growing skepticism about the future of state-controlled currencies. As incidents like the failure of the banking system in Cyprus make the weaknesses in the current global banking system more visible, the need for a viable alternative has grown more apparent, leaving room for a once-niche idea to gain mainstream recognition.

Many of Bitcoin’s enthusiastic supporters characterize the use of the crypto-currency as a political act, because it offers a radical alternative to state-supported currencies, the currencies people are indoctrinated to consider specially legitimate. Cody Wilson, who told Vice he wants to develop an ethics of Bitcoin, is one such supporter. Wilson is a crypto-anarchist who values Bitcoin for its potential to allow anonymous transactions that cannot be traced by the government. Wilson is developing a digital wallet system called DarkWallet to facilitate this anonymity, since the current iteration of Bitcoin isn’t as anonymous as many crypto-anarchist and libertarian supporters would like.

Crypto-anarchism is thought to have inspired Bitcoin, particularly a piece by Timothy May called “The Crypto-Anarchist Manifesto.” Crypto-anarchism is a version of anarchism that embraces cryptology software to wrest power from the state. Anonymity is a treasured tool used to evade the state. This school of thought serves as Bitcoin’s philosophical scaffolding.

Anonymity isn’t prized by crypto-anarchists because it helps illegal activities, but that’s certainly one of the side effects. And Bitcoin, as a decentralized cryptographic currency, is anonymous enough to have served as the primary payment system for the Silk Road, formerly the Internet’s largest drug market on the dark web. The Silk Road was shut down by the FBI in 2013, but not before billions of dollars’ worth of Bitcoin were exchanged for drugs. The association with illicit activity gave Bitcoin notoriety, which helped fuel adoption, since it raised the currency’s profile — but it shouldn’t be conflated with Bitcoin’s ethics.

Bitcoin’s affiliation with illegal web ventures like the Silk Road should not color the currency a murky moral shade; like cash, it is harder to track and thus easier to use when you want to stay off the grid, but that says more about the spender than about what he’s spending. While Bitcoin’s conception was politically motivated, a unit of the currency — an individual, intangible bitcoin — is an empty vessel; its only meaning is what we confer upon it. A bitcoin is just an idea, and its worth is whatever we assign it. And the philosophical underpinnings of Bitcoin’s community are not about keeping the currency anonymous specifically to commit crimes; they’re about avoiding surveillance.

However, the importance the community places on avoiding surveillance has a dangerous side. Bitcoin’s ideological underpinnings belong to crypto-anarchists like Wilson, but the currency’s operational scaffolding is increasingly the property of venture capitalists, speculators and people who just want to generate capital, never mind what kind of capital it is. This co-option is a sign of Bitcoin entering the mainstream, but purists are concerned it will water the currency down. Wilson told a crowd in London that the capitalist interest could essentially neuter Bitcoin as a political currency, which is true. As I said before, despite the politics of its creator and its early adopters, Bitcoin is not an inherently political unit, and the lack of central authority was an originating feature but may not end up as a defining one if the currency continues to be adopted by large institutions and the players in the global financial market.

Perhaps if Bitcoin was primarily adopted by people with inflexible, ideologically consistent guiding principles for how to use it, the currency could be considered “ethical.” Perhaps if the mysterious Satoshi Nakamoto had regulation powers, it could become the currency some people want it to be. But Bitcoin has no regulations. It’s the ideal currency for the world’s worst people.

Bitcoin’s separation from the state — the thing that makes it a politically and pragmatically superior form of currency in the eyes of crypto-anarchists — could be what makes it dangerous for ordinary people to use. Because it is deregulated, abuses of the system can occur without penalty. Thefts of life-savings can go unpunished. So that’s a problem and it will continue to be a problem, because a lack of regulation is built into the idea of Bitcoin. And while major Bitcoin heists have thus far been relegated to places on the dark web, where there is little sympathy for the de-moneyed, as it grows in mainstream use, the utter lack of a safety net could cause devastation.

Of course, Bitcoin is hardly the wide world’s lone currency with the potential to be abused. The global financial system is rife with abuses, and perpetrators of deeply unfair practices often (one might argue, usually) get away with it. So it’s not logical to condemn Bitcoin for its potential for abuses without acknowledging that even supposedly regulated financial systems have been thoroughly rigged. However, it’s equally illogical to ignore Bitcoin’s vulnerability, or the fact that its vulnerability comes part and parcel with its borderline-naive ethics of trust.

Without condemning Bitcoin for its potential to be abused, it is both possible and necessary to point out that the currency will only work if the trust users place in their fellow members of this financial ecosystem is upheld, and the utter lack of incentive to uphold it is frightening. There are no regulatory safeguards to keep Bitcoin from falling prey to corporate, capitalist groups that want to figure out ways to game both the mining system and other Bitcoin users. The underlying ethics of the currency’s founders, alongside Bitcoin’s crypto-anarchist support system, insists that users can govern themselves better than the state — but this techno-utopian thinking requires a deep trust that, I fear, may be breached as the currency becomes enmeshed in the current economic system.

This nearly utopian vision of a perfect currency is almost surely going to be corrupted further, either by getting co-opted by the financial system it sought to disrupt, or by being cast aside. But the crusade of Bitcoin’s founder and crypto-anarchist backers has admirable intentions, and Bitcoin may serve as a template for a more workable version of currency in the future.

 

Kate Knibbs

Kate Knibbs is a writer and web culture journalist from the southwest side of Chicago. She probably spends too much time on the Internet.

Blameless: Vigilante Justice in Steubenville

 

The night of August 11, 2012, was inconsequential for most in Steubenville, Ohio. It was a balmy Saturday night. With the beginning of the school year and a new football season quickly approaching, it was to be a night of celebration for high school students in Steubenville. And for many involved in the events that transpired, it was a celebratory time. But for one 16-year-old girl from Weirton, a neighboring town in West Virginia, it would be life-shattering.

This young girl was raped by two high school football players from Steubenville that night. She was intoxicated at the time of the assaults and was carried, unconscious, by the perpetrators from party to party. She was first assaulted in the backseat of a car on the way to a witness’ home. One of the assailants, Trent Mays, vaginally penetrated the girl with his fingers after taking her shirt off and exposing her breasts, while his friends photographed and videotaped the incident. When they arrived at the witness’ house, Mays attempted to orally rape the girl by forcing his penis into her mouth. A second assailant, Ma’lik Richmond, also penetrated the girl’s naked, unconscious body with his fingers. The girl testified that she remembered only a brief moment of the night in question, in which she was vomiting in the street. She awoke naked, confused and unaware of the violations that had occurred the preceding night, but deeply worried that something horrible had happened.

Almost as disgusting as the acts themselves were the comments made via text and social networking sites such as Twitter, YouTube and Facebook. “I shoulda raped her now that everybody thinks I did,” Mays texted friends. Michael Nodianos, a former Steubenville baseball player, tweeted, “Some girls just deserve to be peed on,” in reference to the victim. This statement was retweeted by none other than Mays himself, along with several other peers. “I have no sympathy for whores,” another friend tweeted. A short video was later posted on YouTube, in which Nodianos and his friends discuss the incident in a lighthearted, jovial tone. “They raped her quicker than Mike Tyson raped that one girl,” Nodianos joked. “They peed on her. That’s how you know she’s dead, because someone pissed on her.” In another text, Mays described the 16-year-old girl as “deader than Caylee Anthony.” A photograph posted on Instagram showed the victim, seemingly unconscious, being carried by her wrists and ankles by two teenage boys.

While social media served to humiliate the victim, it also played an integral role in exposing these crimes and the subsequent cover-up. Much of the resulting national outrage was fostered by the efforts of Anonymous, particularly KnightSec, an offshoot of the hacker collective. In December 2012, KnightSec hacked into an unaffiliated website and posted a message demanding that the school officials and local authorities involved in the cover-up come forward and apologize for their actions. Anonymous released a subsequent video on the web, threatening to leak the names of alleged participants if these demands were not met. Deric Lostutter, one of the Anonymous hackers involved in the incident, was later raided by the FBI.

Both Mays and Richmond were convicted as minors on March 17, 2013. Both were given the minimum sentences for their crimes. Richmond was released in early January due to good behavior, having served less than a year. Mays was given two years in juvenile detention on the count of his possession and dissemination of illicit pictures of the underage girl, constituting child pornography. If these sentences seem astonishingly light considering the circumstances, it's because they are. The seriousness of the rape charges in this case is further undermined by the fact that Lostutter, the very activist who leaked evidence that led to the rapists’ conviction, now faces more prison time than the rapists he helped to expose.

The alleged cover-up is still being investigated. A special grand jury was convened to determine if the coach and other school officials were involved, and a series of indictments followed. Thus far, several officials have been indicted on charges such as obstruction of justice, evidence tampering and perjury. Astonishingly, Steubenville City Schools’ superintendent Michael McVey was indicted on several charges related to an entirely different rape case.

Rape is not an easy topic to write about. For the victim, it is a complete loss of control. It is a sick, horrendous violation of personhood. For the attacker, it’s a thoughtless fulfillment of the basest of sexual urges. It is a blatant lack of human recognition, a proverbial spitting in the face of everything upon which society is founded. It is indefensible. But of course, it’s more than that. Beyond the conceptual notion of rape is the actuality, an unspoken truth that one simply cannot express the horror and despair that comes along with such a traumatic, shattering event, all the more so when the victim is forced to relive the horror via social networking gossip. But that doesn’t mean we shouldn’t talk about it.

Philosopher Immanuel Kant formulated the following maxim: "Act in such a way that you treat humanity, whether in your own person or in that of another, always at the same time as an end and never merely as a means." And what is rape but the ultimate betrayal of this very idea? Rape is treating a person as a means to an end — in this instance, sexual gratification — rather than an end in himself or herself. It sounds abstract, but it really isn’t. It is a practical, almost intuitive sense of morality that dictates we must treat one another as people, not as objects, not as playthings for mere amusement. So why is it that I had never heard of this maxim until my sophomore year in college? Why is it that boys are not routinely instilled with this value? Instead, they are taught that might makes right, that physical prowess is the key to life. They are instilled with the idea that a woman’s place is on the sidelines, that women are, essentially, secondary to men.

In the clash between privacy and justice, where is the line? To what ends can one justify breaking the law, when it is indeed the law itself that is failing our society — or perhaps the human application of it? It is, after all, a human world in which we live, full of imperfect beings with their own biases and preconceptions. That is why we have a justice system with a clear set of standards in which the accused are innocent until proven guilty. But in a town where football players are routinely held up as the masculine standard, is it any wonder that something like this occurred? When a culture gives such gravitas to a mere game, heralding boys as heroes and cherishing physicality over intellect, it is a sure thing that monsters will be created. That is what’s occurring all over the country. When a town rallies behind rapists, defending its beloved football players despite an overwhelming heap of evidence, it is clear that the game is rigged.

The morally relevant distinction between child and adult here is not the number of years these men have lived on this earth, but perhaps a quality of innocence distinctly lacking in them, as evidenced by their crimes. These high school football players committed heinous, adult-natured crimes and should have been tried as such. Their remorse is irrelevant. Their future football careers are completely beside the point. The damage they caused this girl cannot be undone. They should be in prison for a good portion of their lives and they should be labeled as sex offenders wherever they go for the entirety of their small existence. This is what ought to happen to rapists. Instead, one is free and the other will be incarcerated for merely another year, while a Good Samaritan potentially faces 10 years in prison for helping to shed light on their despicable acts.

A culture that condones rape is simply not worth preserving. A society that defends rapists and blames victims is one that ought to be admonished. That is a wrong that Anonymous attempted to make right in Steubenville. When the justice system fails, it creates a need for vigilantism. And let’s be clear: The justice system in Steubenville has failed tremendously. This is precisely what compelled KnightSec to act. But considering the circumstances, this ought not be considered a victory for Anonymous, but rather a small outflanking of sorts. When a convicted rapist goes free after less than a year, something is terribly wrong. But the paradigm is changing. We are taking part in an age that has the potential for moral evolution beyond rape culture, beyond the victim blaming, beyond the rape apologists. Unfortunately for the victim, it isn’t changing quickly enough. She will go about life knowing that an entire town valued athletics over her welfare. There are no words for that kind of anguish.

 

David Stockdale

David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch.  Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at dstock3@gmail.com, and his URL is http://davidstockdale.tumblr.com/.

Twitter and Power

 

Twitter is divisive, hurtful, unrepresentative and bad for feminism. So argued feminist and freelance writer Meghan Murphy in a piece a few weeks back. Murphy explains:

"While I would never argue that feminists stay off of Twitter and do tend to believe it’s a necessary evil, of sorts, if you are in media/writing/journalism, I don’t think it’s a place for productive discourse or movement-building. I think it’s a place where intellectual laziness is encouraged, oversimplification is mandatory, posturing is de rigueur, and bullying is rewarded. I think it’s a place hateful people are drawn towards to gleefully spread their hate, mostly without repercussion. And more than half the time I feel as though I’m trapped in a shitty, American, movie-version of high school that looks more like a popularity contest than a movement to end oppression and violence against women."

Murphy goes on to say that she is routinely attacked and insulted on Twitter and her views are misrepresented. "What I’ve learned from Twitter is that it doesn’t matter what I do," she says. "My body of work doesn’t matter and my actual thoughts don’t matter. Not to those who have decided to hate me. What matters is to destroy and silence."

Murphy's arguments — that Twitter spreads disinformation, that it encourages bullying and abuse — have been echoed elsewhere (as in this article for example). The basic complaint, as in many critiques of the Internet, is that the forum and the platform encourage random people, with neither credentials nor accountability, to gossip, insult and gang up. It makes possible the cruel and arbitrary use of power.

I think this criticism is correct to some degree. Twitter can, and does, allow for abuse of power. Danielle Paradis, for example, lists a number of ways in which women are targeted for systematic harassment and abuse online, while Jill Filipovic discusses how, as a feminist online, she has been bombarded with online rape threats and harassment, leading in some cases to real-life stalking. In this context, the attacks by fellow feminists that Murphy discusses are (as she acknowledges) relatively minor, but still, they occur, and can be upsetting and painful. As someone who follows a fair number of feminists on Twitter, I've seen marginalized people with lots of followers dismiss and bully other marginalized people with fewer followers. It happens, and it shouldn't.

But does it happen only on Twitter? Or, to put it another way, is Twitter really distinguished from other forums or other venues by abuse of power? In a piece for Al Jazeera, Sarah Kendzior argues that, in fact, mainstream media is often used abusively and cruelly. She points to a recent article by Emma Keller in the Guardian, and a follow-up piece by Keller's husband Bill in the New York Times, in which the couple attacked a woman named Lisa Bonchek Adams for writing about her experience with cancer on Twitter. She also mentions an article in Grantland in which author Caleb Hannan outed a trans woman, who subsequently committed suicide.

As Kendzior says, the Kellers and Hannan used their sanctioned platforms to hound and harass those without such platforms. Kendzior concludes, "Mainstream media cruelty is actually more dangerous, for it sanctions behaviour that, were it blogged by an unknown, would likely be written off as the irrelevant ramblings of a sociopath." She adds that "the prestige of old media gives bigoted ranting respectability. Even in the digital age, old media defines and shapes the culture, repositioning the lunatic fringe as the voice of reason."

Twitter, then, can actually provide a counter, or a check, on abuses in traditional media. The social media response to the Kellers' ignorant sneering at a cancer sufferer prompted the Guardian to remove Emma Keller's original column. Similarly, many trans women on Twitter (among other places) were able to express their anger and horror at Hannan's piece, eventually prompting the Grantland editor to issue an official apology. In the past, official mainstream media spoke alone; it was the only voice and the only power. Now, though, Twitter and the Internet give people — like cancer survivors or trans women — a chance to speak for themselves.

This isn't to say that Twitter has changed society's power dynamics in some thoroughgoing manner. Again, the platform is often used for misogynist harassment of women or for racist harassment of minorities. But it's also true that Twitter's demographics skew somewhat differently than society, or online, as whole — the forum is very popular with young African-Americans, for example. Moreover, the Twitter platform, with its followers, retweets and hashtags, tends to create micro-communities in which commenters without traditional connections or platforms can gain large and influential followings. Writers and activists Mikki Kendall and Suey Park are two people who have become important cultural commenters in part because of their presence on, and influence within, Twitter. And, not coincidentally, Kendall and Park are both women of color — a group that has traditionally had difficulty getting their voices heard or acknowledged in mainstream outlets.

Which brings us back to Meghan Murphy. As Murphy says, her work and her writing provoke a great deal of resistance on Twitter. But that's not (or not solely) because Twitter is mean or unfair or cruel. It's because Murphy’s work and writing are genuinely controversial. Specifically, Murphy is engaged in organizing against sex work and prostitution; she's an outspoken opponent of legalization.

On Twitter, there are lots of women who work in the sex industry. And they tend to speak up. Murphy sees this as abuse or bullying, and in some cases it may well cross that line. But it's also an instance in which people who don't necessarily get to voice their opinions, and who don't necessarily have a platform otherwise, get a chance to tell Murphy that she is misrepresenting their lives when she speaks about their experiences. Thus, Molli Desi Devadasi, an Indian sex worker, explained why she felt Murphy's work excluded her:

"So apart from the many women who experience exploitation and abuse in sex work there are also many women who successfully negotiate the various sex work environments and find sex work to be interesting and meaningful work. These women do not deny the hazards nor do they deny the experiences of those who have suffered harm and hurt. These sex working women believe that their stories and experiences should also be heard and used to inform a better understanding of the diversity of sex work. When they are told that their experiences and opinions are not useful and that they do not properly understand the dynamics of sex work some of these sex workers have become quite angry at such contrived exclusion. They are also sometimes accused of being “not representative” or being “pimps” or “men”. These dismissals are hurtful and provocative; they also suggest that when someone doesn’t exist theoretically there is a tendency to obfuscate that possibility so as to protect the theoretical canon over contradictions that challenge its validity."

Twitter is a place where Devadasi can talk (and has talked) directly to Murphy and tell her she's doing it wrong.

As a white guy on Twitter, people will sometimes tell me I'm doing it wrong as well. I follow a number of feminist women of color, and sometimes one of them will tell me I'm an idiot. Because of the way Twitter works, one person telling me I'm an idiot will occasionally escalate into some non-negligible number of people joining in to also tell me I'm an idiot. This is unpleasant. Sometimes I may even think that it's unfair; that I've been misrepresented, or that I've not been given a chance to speak my piece fully.

But the fact is, in most situations, most of the time, white guys like me get to speak our piece and then speak our piece again and then speak our piece some more. Usually, in most forums, it's women and people of color, and most definitely women of color, who are told that their experiences don't matter, and who have their words misrepresented. Twitter is a place where, not always, but sometimes, power relationships can shift, and folks who are marginalized can make others listen. That's not always comfortable. But discomfort, and especially the discomfort of those accustomed to power, isn't always a bad thing.

 

Noah Berlatsky

Noah Berlatsky edits the comics and culture website the Hooded Utilitarian and is a correspondent for the Atlantic. He is working on a book about the original Wonder Woman comics.

Bot Journalism: Who’s Writing That Piece You’re Reading?

 

Back in the heyday of the journalistic newsroom, Walter Cronkite reigned supreme as “the most trusted man in America.” Millions flocked to their television screens daily to hear him report on current events and whatever they heard, they took to heart. His compatriots in print journalism included names such as David Warsh, economic and political commentator for Forbes magazine and The Boston Globe; Anna Quindlen, the social and political commentator for The New York Times who won a Pulitzer in 1992; and Alix Freedman, who won the 1996 Pulitzer Prize for National Reporting and recently left The Wall Street Journal to become the ethics editor at Reuters. Even if these names are not familiar to you, you probably have a byline that you search for in print or online when you want news you can trust to be accurate. Over time, we form relationships with the individuals who bring us the news, relying on some more than others for their timely reporting, strict accuracy, insightful analysis or even, perhaps, their sense of humor.

Over the years, print and television journalists have enjoyed a friendly contest, each aiming to be at the forefront of news reporting by garnering more readers or viewers. But now print journalists have new competition: the writer-bots. Like the insidious threat in a sci-fi flick, these algorithm-based bot reporters are infiltrating the ranks of paper and online journalists with alarming speed. The really frightening part is that your favorite reporter could be a bot and you’ll never even know it.

Take journalist Ken Schwenke’s byline. It appears throughout the Los Angeles Times hovering over stories he didn’t write. Well, at least not technically, although perhaps “technical” is precisely the word to describe a bot-written article. Mr. Schwenke has composed a special algorithm, called a “bot,” that takes available data, arranges it, then configures it for publication instantly, accurately and without his further intervention. His bot is specific to earthquake data, so when a quake or tremor occurs, the program gathers the available information, compiles it in a readable format and publishes it in the Times under Mr. Schwenke’s name, sometimes before he’s even had his morning coffee.
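A bot of this kind can be surprisingly simple at its core. The sketch below is a minimal illustration of the template-filling approach, not Mr. Schwenke’s actual code (which is not public); the field names and story wording are invented:

```python
# Minimal sketch of a template-driven earthquake bot.
# Field names and template wording are hypothetical; the LA Times'
# actual code is not public.

TEMPLATE = (
    "A magnitude {mag} earthquake struck {miles} miles from "
    "{place} at {time}, according to the U.S. Geological Survey."
)

def write_story(quake):
    """Turn one structured quake record into publishable copy."""
    return TEMPLATE.format(**quake)

# A feed item arrives as structured data; the story writes itself.
story = write_story({
    "mag": 4.2,
    "miles": 3,
    "place": "Westwood, California",
    "time": "6:25 a.m.",
})
print(story)
```

Everything editorial (which facts appear, in what order, with what attribution) is decided once, when the template is written; after that, publication requires no further human intervention.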

This kind of computerized reporting was first revealed a few years ago when a company called Narrative Science made headlines – literally. Its groundbreaking technology allowed customers to set up algorithmic newswires that would create automatic news articles from available data without the need for a flesh-and-blood writer. Initially, Narrative Science focused on sports statistics and stories, but they’ve since branched out into financial reporting, real estate writing and other industries.

Currently, Narrative Science’s technology produces a bot-driven story every 30 seconds or so. These articles are published everywhere, from highly regarded publications such as Forbes to myriad widely known and virtually unknown outlets, some of which are keeping their bot-story consumption on the down-low. While the company’s CTO and co-founder, Kristian Hammond, claims that robonews will not soon replace flesh-and-blood reporters, he does predict with dire certainty that a computer will win the Pulitzer Prize for writing in the next five years.

For a news agency or publisher, part of the draw of bot-based journalism is the lure of cheap writing labor. Narrative Science’s bot journalists can undercut even the most underpaid human writer. Here’s an example of one of their pieces for Builder magazine:

"New home sales dipped year-over-year in May in the New York, NY market, but the percentage decline, which was less severe than in April 2011, seemed to be signaling market improvement. There was a 7.7% decline in new home sales from a year earlier. This came after a 21.6% drop year-over-year last month.

In the 12 months ending May 2011, there were 10,711 new home sales, down from an annualized 10,789 in April.

As a percentage of overall housing sales, new home sales accounted for 11.4%. This is an increase on a percentage basis, as new home sales were 9.5% of total sales a year ago. Following a year-over-year decline last month, sales of new and existing homes also sank year-over-year in May."

While this isn’t exactly a stimulating read, or even that well-written, it isn’t terrible. Factor in that a piece like this costs around $10 for a 500-word article, while hiring a writer from one of the biggest online content mills, Demand Studios, will set you back $7.50 to $20, with an average article costing $15, and you have a formula, or perhaps an algorithm, for success.

Mr. Hammond says Narrative Science is simply filling the need for these figure-laden accounts of news that no journalist is currently covering while freeing up the reporting staff of their clients to do more in-depth research or analyze more complex data. While this may be true, I know at least a hundred would-be journalists who would jump at the chance to score a gig writing a recap of a Big Ten basketball game, a summary of trending investment strategies or a review of a local theater performance.

I would also bet that a human journalist would be able to inject some excitement into the real estate article above, although Hammond points out that Narrative Science’s technology has now advanced to let clients choose a “voice” for their stories, giving them a tone ranging from sardonic humor to pedantic narration. He believes so ardently in his technology’s burgeoning capabilities that he estimated that computers would write more than 90 percent of the news within the next 15 years.

And they are well on their way to that goal. The New York Times is one of the 30-some large publishing clients – including trade publisher Hanley Wood and sports journalism site The Big Ten Network – that subscribe to Narrative Science’s technology for stories. Concurrently, some media outlets like the Washington Post are using robot fact-checkers to double-check their data before publication. The Post’s program, Truth Teller, uses voice-to-text technology to transcribe speeches and cross-check claims against a database of information. The Post’s executive producer for digital news, Cory Haik, says the goal is to “get closer to … real time than what we have now. It’s about robots helping us to do better journalism – but still with journalists.”

While it’s true that robot fact-checkers can work much more quickly than their human counterparts, their mathematically driven methods only allow them to read data in black and white. The shades of gray – nuances of speech and infinitesimal manipulations of data by clever statisticians that are easily discerned by a human journalist – are lost to them. For example, in a speech given by Bill Clinton at the Democratic National Convention in 2012, he wasn’t fibbing when he said, “In the past 29 months, our economy has produced about 4 ½ million private sector jobs.” What he did do was obscure the truth by carefully setting his threshold of data at the 29-month mark. If he’d added just a few more months, the economic growth under Obama’s management would not have had that rosy look he was going for. A robot might not be able to see through that cunning rhetoric, but a person, like FactCheck.org’s Robert Farley, did.
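The window-picking trick is pure arithmetic, which is exactly why a literal-minded checker misses it. With hypothetical monthly jobs figures (the numbers below are invented for illustration, not actual labor statistics), the claim checks out over the chosen window while a slightly wider window tells a different story:

```python
# Hypothetical monthly private-sector job changes, in thousands:
# three early months of losses followed by 29 months of steady gains.
# These numbers are invented for illustration only.
jobs = [-750, -650, -400] + [150] * 29

def total(months):
    """Net change over the most recent `months` entries, in thousands."""
    return sum(jobs[-months:])

# The speaker's chosen 29-month window shows only gains...
print(total(29))  # 4350

# ...but widening the window by three months includes the losses.
print(total(32))  # 2550
```

Both totals are “true.” A bot that merely sums the stated window confirms the claim; noticing the conveniently chosen start date takes the kind of judgment Farley applied.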

The notion of bot-produced journalism is a polarizing concept for writers, editors and consumers alike. Craig Silverman, a writer for Poynter, lauds journalism bots, claiming that they are doing the grunt work, leaving the context and narrative to “real” journalists. He writes with starry-eyed passion, gushing about the potential for robots to help writers through superior semantic awareness and the ability to flag inconsistencies in previous reporting.

Emily Bell, professor of professional practice at Columbia Journalism School and director of the Tow Center for Digital Journalism, echoes his thoughts, but adds:

"Journalism by numbers does not mean ceding human process to the bots. Every algorithm, however it is written, contains human, and therefore editorial, judgments. The decisions made about what data to include and exclude adds a layer of perspective to the information provided. There must be transparency and a set of editorial standards underpinning the data collection."

Ah, that’s the real issue – transparency. Is it ethical to put a human byline on a bot-generated story when the byline represents someone that readers have come to know and trust, à la Walter Cronkite? To me, it is unequivocally dishonest to publish a story by a bot under a human byline. In my estimation this amounts to nothing more than plagiarism of the worst kind, in which not only is the original author (the bot or the bot’s creator) not credited, but the consumer of the article is duped into believing that a human being has carefully researched, compiled and checked the facts in the article and will stand behind them. Who is liable for an error produced by a machine-generated story? The writer whose byline appears? His editor? The bot?

The Society of Professional Journalists publishes a code of ethics that for years has been used by thousands of journalists, in classrooms and newsrooms alike, as a guideline for ethical decision-making. Among its tenets are injunctions to clarify and explain news coverage and invite dialogue; to show compassion for those who might be adversely affected by news coverage; and to recognize that private individuals have a greater right to control information about themselves than do public figures. I am not sure an algorithm has the capacity for compassion, the ability to invite dialogue or the cognizance of the difference between public and private figures that a human writer does, and this lack definitely puts the writer-bots outside the strictures of the modern journalistic code. Add to that the fact that you can’t be certain who (or what) is standing behind that byline, and you have the potential for an anarchic, and untrustworthy, approach to news-gathering and dissemination.

Perhaps the problem brought to light by journalism bots goes beyond transparency issues. The trend toward public acceptance of fill-in-the-blank, impersonal reporting is like a subtle mind-numbing disease brought on by continual exposure to the cult of instant gratification perpetuated by the digital landscape. Could the fact that we’ve become so inured to snippets of brief, emotionless data make it easy for these bots to be successful in reproducing (and stealthily replacing) the stories of their journalistic human counterparts? Are our own standards of compelling, telling journalism being compromised to get more hits and claim a higher position in the search engine hierarchy? Are we losing appreciation for long-form content that requires immersion, thoughtful consideration and analysis?

Ken Schwenke was on to something when he blithely admitted that many people would never even pick up on the fact that they are reading robot-driven content, but inadvertently he has touched upon the real problem behind robonews. We are entering a new era of reporting where you can no longer rely on a flesh-and-blood journalist’s ethics, honesty and integrity. In fact, you can’t rely on the authenticity of the byline at all since at any given time you could be reading the musings of an algorithm-based writer-bot rather than a journalist you know and trust. Rest in peace, Walter, rest in peace.

 

Nikki Williams

Nikki Williams is a bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com.

Are You Facebook's Guinea Pig?

 

Ever started typing a Facebook status update, only to think better of it and hit backspace? I know I have. Facebook knows, too, because the social media monolith recently analyzed these “aborted” status updates from nearly 4 million of its users, then published a research study about what we’re doing — or not doing, to be more precise. That’s right: Facebook may be using you for research without your explicit consent.

“Why u mad?” Facebook might respond in Internet parlance. After all, its researchers used the data anonymously, and you opted in when you signed up … kind of. Facebook’s Data Use Policy — a 9,000-word behemoth you probably never read when you joined — says, "We receive data about you whenever you use or are running Facebook," and that information could be used for "internal operations, including troubleshooting, data analysis, testing, research and service improvement."

But anybody who’s taken Psych 101 knows the first step of a research study is obtaining informed consent from participants. Burying this wording deep in a document most users don’t even skim is ethically questionable, especially at a time like this, when Americans are particularly protective of their privacy. According to a new poll by the Associated Press and GfK, Germany's largest market research institute, “61 percent [of respondents] said they prioritize protecting Americans’ rights and freedoms over making sure Americans are safe from terrorists,” the Boston Globe wrote recently. (This is up 2 percentage points from a similar poll five months ago.) And a research study on Internet user behavior is a far cry from spying on Americans to keep the nation safe.

So in the midst of a seemingly unending assault against personal privacy, how can companies like Facebook respect privacy concerns and proceed with research in a more ethical way?

Was it unethical?

I asked Dr. Annette N. Markham, a communications professor at Aarhus University and Loyola University who has written codes of ethics for Internet research, whether she thought Facebook’s behavior was ethical.

“This is precisely the challenging question, since it points to the difference between a social network site's legal rights and ethical practices. These are not always the same,” Markham replied via email. “In this case, Facebook has the right to use whatever data they collect since the terms of service allow them access. Was it the right thing to do? The answer will vary depending on who's being asked.”

If you ask writer Sean Rintel, he’ll say no. As Rintel recently wrote in Business Spectator, “This is at best a form of passive informed consent. The MIT Technology Review proposes that consent should be both active and real time in the age of big data.” As Facebook was founded in February 2004, that gives users a decade to forget what they agreed to — hardly ethical.

Why informed consent matters

The point of informed consent, writes Markham in an email, is that study participants know what they’re getting into before the research starts. She cites the Tuskegee syphilis experiment of the 1930s-’70s as evidence that research participants can suffer if they aren’t told what’s going on. In that experiment, 128 test subjects — poor African-American men from rural Alabama — died because the U.S. Public Health Service researchers never told them they had syphilis, nor were they treated for it, even though penicillin was available. In this case, lack of informed consent literally killed people.

Although no one is going to die because they didn’t realize Facebook is using their data, the Tuskegee syphilis experiment is a good reminder of why informed consent is so important. Its goal as a first step in research “is to preserve the autonomy of the participant and give them respect,” Markham explains. “Giving them adequate information to make an informed decision was the key. Here, we can and should take Facebook to task for not being clear about the intended use of data. Their defense that the data was anonymous is not adequate since this is not the ethical obligation under dispute.”

The irony of the situation

Facebook’s research paper, “Self-Censorship on Facebook,” discusses who is most likely to erase something before posting it, and under what circumstances. For example, the researchers found that men on Facebook censor themselves more than women do, especially if the guy has mostly male friends. Part of the researchers’ motivation, it seems, was to find out how to prevent self-censorship, because that means “the SNS [social networking site] loses value from the lack of content generation.” After all, Facebook can’t profit off you if you aren’t using the site.

But the very essence of an aborted Facebook status means users didn’t want anyone to see it: not their friends and not the Facebook overlords. People have heard “The internet is forever” and know that cached versions of web pages linger on after the original content is deleted. But it seems almost cruel that even our half-formed thoughts, updates that never saw the light of day, cannot stay entirely personal. “Is passive informed consent to collect data on technical interactions sufficient when users have actively chosen not to make content socially available?” asks Business Spectator. I say no. Arguably, most people don’t realize that things they type and then erase are still recorded; I believe the blame lies with Facebook for not clarifying this. Markham seems to agree. “The question is not whether the user consented to give Facebook the right to use all data, it's whether the use of such data as keyboard actions and backspacing was clearly understood by the user to be a form of data that would be collected,” she writes via email.

“In this case, it may not be a legal as much as [an] ethical question of responsibility and fairness,” she continues. “Frankly, we should continue to question the questionable ethics of creating lengthy and complicated blanket TOS [terms of service] that cover all manner of things, when everyone knows it's common practice to simply accept these without close scrutiny.” Indeed, Facebook's latest research seems symptomatic of a culture of websites intentionally employing a long Terms of Use page that they know very few users will read. So what’s a more ethical alternative?

A better way forward

Edward Snowden opened Americans’ eyes to the fact that our data isn’t private. To be sure, some respond with blasé resignation and references to George Orwell. But others are outraged and feel even more fiercely possessive of their personal information — particularly keystrokes they thought had been erased forever. Companies like Facebook need to understand that transparency around their privacy policies will not only boost user trust and prevent scandals and backlash down the line; it’s also simply more ethical.

Moving forward, Facebook and other sites should start employing much more explicit terms of use, as well as real-time informed consent before launching a research study. One good example is what happens when you authorize an app through Twitter. The Twitter user sees a screen very clearly spelling out what the app will and will not be able to do. For example, Twitpic will be able to post tweets for you, but it will not have access to your Twitter password. The list is short and easy to understand, and users know exactly what they’re getting into.

Rather than the arguably underhanded (ethically, if not legally) data collection process for its recent research study, Facebook should employ a similarly straightforward notification process for future studies. Select users could see a pop-up explaining that they’ve been randomly selected to take part in a study over the following two weeks, in which their data will be used completely anonymously. This at least would give users the choice of opting out. Isn’t that something everyone deserves?

As Jeffrey Rayport wrote in the MIT Technology Review, Big Data giants like Facebook should adhere to “something akin to the Golden Rule: ‘Do unto the data of others as you would have them do unto yours.’ That kind of thinking might go a long way toward creating the kind of digital world we want — and deserve.”


Holly Richmond

Holly Richmond is a Portland writer. Learn more at hollyrichmond.com.

Confession Ethics


Do you want to hear a secret? Can I confess something to you?

It’s a rare person who can turn down such an opportunity, the chance to hear a deep, dark and — one can only hope — excruciatingly intimate revelation.

Print publishers have capitalized on this aspect of human nature. Cosmopolitan magazine promises that “readers share their most shocking stories and steamiest secrets” on its “Cosmo Confession” page, and Seventeen’s regular feature “Traumarama” says, “You’ll laugh out loud (or cringe).” In a way, “Dear Abby” advice columns, where readers divulge personal problems under pseudonyms, are cut from the same cloth.

It’s no wonder, then, that the Internet has become a mecca for people who don’t want to keep their private thoughts to themselves.

Consider the PostSecret website, which exhibits anonymous secrets mailed on artsy homemade postcards. On March 9: “My mother only sleeps with married men. I’ve lost all respect for her.” “Telling people I’m an atheist is going to be WAY harder than coming out ever was!” and “I make both our lunches every day—But I only wash my apple!” Since 2005, the project has racked up millions of secrets, even more site visits, five books and speaking tours for its creator and curator, Frank Warren.

Despite some heavy secrets throughout the years — about suicide, abortion, betrayals, you name it — the project has avoided major controversy, and there’s no public evidence of lawsuits.

For a brief time, Warren opened a comments feature on the website, but he ultimately decided to disable it. He explained his reasoning in an interview with Mediabistro:

“Some of [the comments] were very harsh and judgmental, and I didn't want people to feel like they couldn't trust me with their secrets, that the place wouldn't be safe any longer,” Warren said. A short-lived PostSecret app had the same fate, for the same reason.

Warren’s philosophy about comments is not universal, especially when confession pages reside on social networks like Facebook, Twitter or Reddit, where comments are a defining part of the user experience.

Confession websites are popular, in particular, among college communities. A PostSecret could come from anyone in the world. But on a college confessions page, the scandalous disclosures come from people that share your location and experiences, people you might know. It’s an alluring premise.

Usually, enterprising (or maybe just nosy) students create these pages on social networks. They share a link to a Google form or a survey tool. Confessors use that link to submit their messages anonymously for the site administrator to post, thus preserving their identities. The administrator might publish every message received or selectively decide which ones to show the public.

I searched for my alma mater’s confessions page on Facebook and found a wide variety of secrets spilled in the last month. There are comical laments about campus amenities (“I find it extremely frustrating that they have grapefruit in the dining hall and no serrated spoons for it”), tender pleas to humanity (“Somebody love me”), vehement statements about ethnic conflict (“All the Jews on this campus who are fighting for human rights in support of Israel need to stop being naive”) and some posts not technically confessions at all, but instead frank words of encouragement (“ya'll should be your goddamn fabulous selves”). Many covered stereotypical college topics: brief anecdotes about sex, partying, smoking pot; complaints about classes, professors and roommates. Some funny, others sad and many vulgar.

A site’s middleman — that administrator — might censor revelations that are obviously made up, or ones that are particularly derogatory. But that task is subjective. There’s no doubt that some confessions, especially ones that touch on topics like gender, race and ethnicity, will offend lots of readers. Also potentially contentious are the comments. Interestingly, on a site like Facebook, users can’t comment anonymously; their name is attached to whatever they say. But that doesn’t always make people censor their responses to confessions.

Media outlets have suggested that these sites can be venues for hurtful discourse and even cyberbullying, causes for concern for university officials. Colleges can block them from appearing on campus networks, but they can’t delete third-party websites or stop students from accessing them off the school network.

However, when minors are involved, as with high school confessions pages, schools and parents have more control. Many anti-bullying laws cover cyberbullying (sometimes phrased as “electronic” forms of bullying), usually when it targets juveniles in or around schools. These laws can give administrators grounds to ask social networks to control or remove confession sites.

According to its community standards page, Facebook will “take action on all reports of abusive behavior directed at private individuals.” The company also says it will “remove content and may escalate to law enforcement when we perceive a genuine risk of physical harm, or a direct threat to public safety.”

But in general, the First Amendment protects students contributing to confession pages. Colleges may worry that these websites are bad for their brand, but they can’t stop students from writing about their escapades and controversial opinions in the public sphere.

Contributors do have to be careful about identifying others in their confessions or comments. It’s a guideline many of these sites establish outright. It’s a smart move from a legal standpoint.

Say someone reveals a secret about a specific person. If the statement can be proven defamatory — false and damaging to one’s reputation — then the victim could sue for libel. Even if the statement is technically true, if the identified person interprets it as verbal assault, harassment or intimidation, they can pursue legal action against the confessor.

A site administrator’s promise to preserve anonymity is not always a guarantee in these scenarios. Even through anonymous submission forms, investigators can sometimes trace IP addresses.

Madison Confessions, which claims to be the “largest college confessions page in the nation,” explains this explicitly. Its submission form outlines several rules for its University of Wisconsin-Madison users, including “Never state specific names” and “Don’t confess about anything extremely illegal.”

The website doesn’t live on a social network; it’s privately hosted, with a lengthy terms of use agreement. Among other provisions, the contract states, “We reserve the right to disclose any information in our possession if required to do so by law or in the good faith belief that such action is necessary.”

It’s interesting to consider hypotheticals — defamation lawsuits, murder admissions — but the vast majority of submissions don’t enter such territory. Perhaps the solution to more common quandaries, such as outcry over crude secrets and offensive comments, is simply to ignore the sites. Readers who are easily offended shouldn’t visit them, and college administrators shouldn’t give them attention and, by extension, press coverage. It’s an oversimplified resolution, but one that could subdue many critics.

Warren, of PostSecret, has said that confessing secrets can be therapeutic; it’s a way to connect people. That connection between writers and readers is a point for ethical consideration itself. Will college students with literally life-altering confessions, like thoughts of suicide or stories about abuse, turn to these websites for a cure to their anguish? Will support from classmates through comments provide the help these confessors really need?


Mr. Brightside's Watching You


In the old days, suspicious spouses who wanted to keep tabs on their partners had limited options. They could rifle through pockets for cryptic receipts, check shirt collars for lipstick stains or hire private investigators to stake out seedy motels.

Today jealous lovers have a lot more strategies at their disposal. They can scroll through unmanned cellphones for call logs and text messages. They can sneak onto Facebook and email accounts.

And for the truly neurotic, highly distrustful types, there are products like mSpy to do the dirty work. The “mobile monitoring software solution” is a smartphone application that logs all of a user’s activity. It records calls, texts, emails, messages from other apps, GPS locations, website history, calendar events, address book contacts, photos, videos and every keystroke made. The software can even be set up for bugging to record conversations that go on in the phone’s presence.

This is how it works: Paranoid boyfriend buys a subscription to the app (the “premium” package costs $69.99 for one month, though rates are cheaper if you commit to a longer duration). Then he installs it on his girlfriend’s phone when she’s not looking. If that’s too tricky to pull off, the boyfriend can buy a phone preloaded with the app and give it to his girlfriend as an anniversary present. After that, the boyfriend simply logs into his online account, where he can view all of his beloved’s activity.

So how does mSpy rationalize this ethically murky product? As noted in its legal agreement, “It is a considered federal and/or state violation of the law in most cases to install surveillance software onto a mobile phone or other device for which you do not have proper authorization and in most cases you are required to notify users of the device that they are being monitored.” This is all according to the Computer Fraud and Abuse Act (cellphones are considered computers under this law).

Then in large, blue, italicized font: “We absolutely do not endorse the use of our software for illegal purposes.” The agreement explains that to install the software on a phone, you must own the phone or receive written consent from the phone’s owner.

On the surface, the mSpy website markets the software as a tool for parents to monitor their children and for companies to monitor their employees. It also mentions the benefits of installing it on your own phone: backup, storage and protection from loss or theft.

Scattered about the website are customer testimonials from parents and business owners. There are scary stats about teen drug use, rape, suicide and cyberbullying. There are corny marketing lines, such as “Truth is a treasure worth digging for.”

While it’s easy to pinpoint the potential (but still creepy) benefits of the product’s so-called intended purposes — protecting child safety or keeping tabs on worker productivity — the company does little to discourage the more nefarious uses. Its homepage links to some of its press coverage, to headlines such as “mSpy: A terrifying app for spying on another smartphone or tablet user” and “Spy software lets you track a partner’s movements.”

Its “Buy Now” webpage has a long list of product features. At the very top is “Stealth/Undetectable: Your use of mSpy will go unnoticed. This app runs in an invisible mode so that it's impossible for the target device user to detect its presence on the phone.” But if a spy is legally required to notify the phone’s user, isn’t “stealth” a moot point? Why, then, does the company broadcast it so prominently? The media has certainly focused on the product’s criminal uses. And the company has not denied them.

“We do have quite a large portion of our customers who use mSpy specifically to catch a cheating spouse,” said employee Tatiana Ameri in an interview with ABC 22, a local TV station in Ohio.

The company also shrugs off responsibility. In an interview with Forbes, founder Andrei Shimanovich likened the situation to gun sales. “If you go out and buy a gun and go shoot someone, no one will go after the gun producer,” he said.

Of course, mSpy is just one example of a plethora of tools available to snoopers. TopSpy, Spymaster Pro and StealthGenie software offer similar services, and they, too, are officially marketed toward parents and employers. And though not as thoroughly invasive, there are also many free and legal phone apps to help out the nosy — mostly GPS trackers and schedule synchronizers.

On the flip side, apps to help a person combat espionage seem to be just as plentiful. One self-destructs text messages, another hides your photos and videos and a third blocks other apps from recording your voice.

Mutually mistrustful partners (who are both Android users) can just cut to the chase and install Couple Tracker. It sends all the usual information to both parties involved in the relationship.

Let’s not forget about social media, a notorious source for gathering intelligence on, well, anyone. A 2013 study published in the journal Cyberpsychology, Behavior, and Social Networking examined why people use Facebook to spy on their partners. Using college students, the study showed that individuals with anxious and insecure attitudes about romantic relationships are most likely to stalk their sweethearts on the social network. The authors suggest that any information found — truly disreputable or not — may only exacerbate anxieties and strain relationships. This brings up a larger question: Just because you can do something, should you?

In a 2011 survey, 35 percent of women and 30 percent of men said they have checked their spouse’s or partner’s email or call history without them knowing. For married women, the stat jumps up to 41 percent. About a third of both men and women said they would secretly track their spouse or partner using a cellphone or other electronic device. The survey, conducted by electronics shopping and review website Retrevo, sampled 1,000 people in the United States distributed across gender, age, income and location.

Digital technology certainly makes it easy to invade someone’s privacy, though the ethics behind such actions are still as shaky as they were in the lipstick and collar days. It’s also ironic during a time when many protest against government surveillance and companies like Google tracking user data.

Is Anonymous Texting Ethical?


Childhood as a social construct is idealized and romanticized in ways that perhaps skew our judgment toward an ugly sentiment. We say, “Ah, to be young again,” and we look back fondly on our own respective childhood memories. So it is only natural that we are disturbed when our young begin to act in unfamiliar ways. But this feeling of disillusion is based on a nostalgic misappropriation of our own memories. Because it is a stage when our minds first give birth to thoughts, when we are first given words to describe this thing we call life, we think of childhood as an innocent period, almost outside of time rather than a fixed set of points in the chronology of our lives. The sad truth is it was never all that good. We just weren’t cognizant of certain truths. That is why we look back on our childhood with fondness. We didn’t know of the dark horrors out there.

We try to teach our children moral accountability, not to lie, not to be cruel. These are the rudimentary stepping-stones of an ethical framework. So what happens when that very framework is threatened? What can be more challenging to the status quo than to alter the way children think about the notion of self? When we introduce children to the idea of anonymity, for example, what grave things will come in the future?

A number of anonymous texting applications have become prevalent around the country, creating controversy over their use among the nation’s youth. Yik Yak, one of the more prominent mobile apps in question, works by allowing users to view a live feed of messages posted by people within 5 to 10 miles of their location. The app was originally intended for college students, but it has since become popular among high school and even middle school students. Yik Yak encourages a no-holds-barred style of discussion, touting a slogan that mirrors the anything-goes branding efforts of Las Vegas ad men: “What happens on Yik Yak, stays on Yik Yak.” While seeming to ensure privacy to users, the app cannot control what sort of information is divulged through its system. The anonymity granted by this application makes it ripe for potential use as a cyberbullying tool. School administrators at Lake Forest High School in Illinois claim that students are using the app to verbally abuse one another. The principal released the following statement in a message to parents:

“I am writing to inform you of a mobile app that is harmful to students and to the positive school culture of Lake Forest High School. I am also writing to ask your support in addressing this serious issue. One of the hallmarks of Lake Forest High School is our supportive environment and our commitment to the well-being of one another. Collectively, we have an opportunity and responsibility to ensure to maintain our positive school climate.”

Of course, one would be hard-pressed to have ethical qualms with this principal’s efforts to foster a positive environment at his school. And it would be difficult to make a legitimate case arguing that his efforts to suppress the use of this anonymous texting application are in any way malicious, illegal or unconstitutional, as some might argue. After all, what happens on school grounds is ultimately his administration’s responsibility. In light of such claims, Yik Yak recently announced that the company would disable service in the Chicago area. However, I would ask: What defines a positive school environment? Why is the concept of anonymity inherently negative?

The idea that one can act without accountability can be traced back to the writings of Thomas Hobbes. The state of anonymity, such as the one employed by these mobile apps, can be viewed as a new kind of state of nature, such as the one identified by Hobbes. He described his state of nature as “the war of all against all,” and indeed it is not a pleasant way of life. It also happens to describe rather aptly the strings of anonymous commentary on the Internet; a cursory glance at the YouTube comments of any popular video will tell you that much. What was Hobbes’ solution to the state of nature? Simply put, he theorizes that humanity instituted a social contract to regulate our interactions with one another. Likewise, the notion of digital citizenship very well could be a viable means of helping children understand the consequences of their actions in this digital age.

This issue of anonymity is firmly rooted in our cultural understanding of how we communicate with one another. Like the idea of childhood, the way people communicate can also be thought of as a construct. It is a construct that is constantly in flux with the ever changing nature of technology. Consider for a moment the fact that before the advent of caller ID, every single phone call was essentially ‘anonymous,’ in the exact same way these texts are. Was it a social crisis that kids were able to prank call one another with no practical threat of exposure? Certainly not. In the early days of the telephone, there wasn’t much one could do about a prank call or a heavy breather. Likewise, there was no way for a caller to pre-emptively identify him or herself. But as the social norm of caller ID became widely accepted, the notion of anonymity became taboo in this regard. “Unknown caller” turned into a menacing term. This advancement in technology manifested an ethical imperative. What was once an innovative supplement to communication technology is now the standard. Practically every cellphone employs caller ID technology. Over time, advancements in communication technology will further change our ethical landscape in ways that may be difficult to imagine.

The problem of anonymity — if it can be construed as a problem — is only going to increase as technology advances and methods of encryption exponentially abound. That said, I don’t believe we are giving children enough credit. Humans are the most adaptable species on the planet. And we are most adaptable in our childhood years. Ultimately, we are failing the youth by limiting their access to technology, just as a parent who bans literature from the house is failing their child in integral ways. An ethical framework is only useful insofar as its ability to serve its function; for example, to regulate society. Once that framework becomes ineffective for this purpose, it’s time to rethink its inner workings.

Under the shadowy veil of anonymity, there is no personal accountability. But this is neither good nor bad. The methods employed by bullies, be they cyber or otherwise, are hardly relevant to the matter at hand. Is it any more harmful that an anonymous user calls a little girl fat, rather than, say, a mean little boy staring her right in the face? The effect is assumed to be the same. So what’s the morally relevant distinction? Is it because in one scenario we have the clear ability to place blame, while in the other, blame is nebulous? The issue is personal accountability.

But when a person displays behavior similar to that of a bully, that behavior is likely caused by more deep-seated issues. Perhaps it’s a history of trauma that makes the bully more apt to lash out. Or maybe it’s a particular combination of genetic codes that makes the bully predisposed to aggressive behavior. It very well could be a combination of both, but the point is that personal accountability is not precisely relevant. In other words, it’s not the bully’s fault. The bully is a child. He or she has little agency in such behavior. The child’s behavior is a symptom of a larger problem. Instead of playing the blame game, it seems more prudent to address the root causes of what we perceive as bullying. What drives people to cruelty? This is the question we should be asking.

A culture of anonymity demands that material be judged based on the merit of its content, rather than the identity of the speaker. There is certainly a draw to that idea, but perhaps it is a paradigm too abstract for schoolchildren. Or perhaps that is what school administrators want you to think. Because when content is judged based on its own merit, rather than an appeal to some authority, the content itself must be compelling. And when challenged, school administrators often don’t have much to say other than, “Because I said so” or “Because that’s the way it is.” Instead of opening the hearts and minds of students, administrators are so quick to give in to the mediocrity of the situation and defer shallowly to their own authority. Under the guise of a “positive school environment,” they displace accountability in the same way as anonymous texting does. That is not a message we should perpetuate to formative minds.

The concept of anonymity is inherently radical because our entire society is based on the notion of self (i.e., that our actions are our own, caused by our conscious decisions). But is it all that clear? We presume one ought to be held entirely accountable for one’s own words and actions. Society is hinged upon that kind of personal responsibility. But children are more perceptive than we would like to think. They can and regularly do recognize hypocrisy. We propagate this notion of accountability to our young in an act of willful ignorance, all the while turning a blind eye to clear injustices in the adult world. This is the dissonance that helps to create bullies. What is a child to believe in the face of that kind of hypocrisy?

It seems that the focus here is on the technology itself, rather than the students’ behavior. This is perhaps because the prevalence of a nefarious new app is far more easily addressed than the issue of bullying as a whole. So why not face bullying head on, rather than fixate on the fleeting trends of mobile apps? Instead of addressing the fundamental issue, school administrators have simply attempted to institute a prohibition of sorts on this new application. This is nothing more than laziness and intellectual cowardice. And it seems that in some cases, businesses are complying with this mandate. But we all know how well prohibition works with respect to social control. Administrators even freely admit that they cannot control what students do on their own phone networks. We can use the guise of anonymity for good just as it is used for evil. Instead of discouraging the app’s use among students, what if administrators encouraged a more thoughtful discourse through the app itself? Indeed, isn’t this a prime opportunity to teach kids an important life lesson? We should be telling them to call out thoughtless cruelty and refuse to reciprocate in the same manner.

Instead of shunning this new mobile app because of its potential for wrongdoing, perhaps school administrators ought to ask students a more thought-provoking question: What is to be gained from anonymity? With every new advancement in technology, there is an opportunity to learn something new about human nature. Sure, one can throw out a needlessly cruel message from across the hall and it will be a drop in the ocean of cruel messages spouted since the beginning of humanity. It’s a lesson students would quickly learn once exposed to this new avenue of communication technology — that one can find ways to transcend the noise. And therein lies the key to survival in the 21st century.


David Stockdale

David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch.  Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at dstock3@gmail.com, and his URL is http://davidstockdale.tumblr.com/.

Effects of Graphic Photo Use in Social Media


In social media, we are barraged daily by hundreds if not thousands of photographic images. Facebook and Twitter feeds show not only lunch plates and newborn nephews, but “selfies” taken to display a new tattoo, change of hair color or, in some cases, scars from a recent surgery. Instagram, a social medium owned by Facebook, serves more specifically as a photographic stream and therefore potentially documents personal experiences in an even more graphically representative way.

Users of social media choose to post graphic images for different reasons. Drawing attention to an illness or social issue is a very different motivation from posting an image because of its sexually explicit nature, yet can these two choices have the same outcome of potentially being banned from social networking sites? How has social media changed policy determinations over time to reflect changing moral systems and how do these policies affect users?

Unfriended For Showing Scars

A recent Salon article, “Unfriended for Showing Her Scars: A ‘breast cancer preventer’ opens up,” tells the story of Beth Whaanga, who posted seminude photos of her body after a total bilateral mastectomy and total hysterectomy.

“Within hours of the photos going up, they’d been reported to Facebook. Encouragingly, Facebook has said it will not remove the images, but Whaanga meanwhile says she has been unfriended more than a hundred times. Over a hundred people looked at her and walked away.”

Although Facebook did not ban the woman’s photo, it was her own social network that chose to unfriend her as a result of the images. But she found support as well: Her “Under the Red Dress” Facebook page has nearly 60,000 likes and her message “your scars aren’t ugly; they mean you’re alive” resonates on her website.

How Facebook Handles Graphic Content

Only recently has Facebook embraced graphic scar photos. The company stirred controversy with its initial ban of photographer David Jay’s The Scar Project (slogan: “breast cancer is not a pink ribbon”). NBC reported that a change.org petition prompted the account’s reinstatement, noting that Facebook spokeswoman Alison Schumer said Facebook has "long allowed mastectomy photos to be shared on Facebook, as well as educational and scientific photos of the human body and photos of women breastfeeding."

"We only review or remove photos after they have been reported to us by people who see the images in their News Feeds or otherwise discover them," Schumer told NBC. "On occasion, we may remove a photo showing mastectomy scarring either by mistake, as our teams review millions of pieces of content daily, or because a photo has violated our terms for other reasons."

Self-harming scars are another matter entirely. Unfortunately, a number of websites and blogs glorify teen self-harm. Facebook expressly discourages any self-harm content, stating in its policy:

“Facebook takes threats of self-harm very seriously. We remove any promotion or encouragement of self-mutilation, eating disorders or hard drug abuse. We also work with suicide prevention agencies around the world to provide assistance for people in distress.”

Facebook also initially banned the account of a woman using a different kind of graphic image to tell a story. When D. M. Murdock posted a disturbing image of African girls undergoing “virginity testing” in order to raise awareness for child abuse, her Facebook account was disabled.

“I posted the uncensored, shocking photo on Facebook because it is important to see the utter indignity these poor girls must suffer – this horrible abuse is now being done in the West,” Murdock wrote. “How can we battle it, if we can’t see what it is?”

Facebook eventually reversed the permanent ban after a social media campaign and petition called for the account’s reinstatement.

Facebook’s community standards are clear on its policy of dealing with graphic images:

“Facebook has long been a place where people turn to share their experiences and raise awareness about issues important to them. Sometimes, those experiences and issues involve graphic content that is of public interest or concern, such as human rights abuses or acts of terrorism. In many instances, when people share this type of content, it is to condemn it. However, graphic images shared for sadistic effect or to celebrate or glorify violence have no place on our site.

“When people share any content, we expect that they will share in a responsible manner. That includes choosing carefully the audience for the content. For graphic videos, people should warn their audience about the nature of the content in the video so that their audience can make an informed choice about whether to watch it.”

Are there times when trigger warnings are necessary for graphic image content on social media sites? On Twitter, the hashtag #triggerwarning is used to warn users of content that readers might find sensitive. Author Noah Berlatsky addresses the issue in his piece “Does This Post Need a Trigger Warning?”

“I can see the virtues in the arguments both for and against trigger warnings — and partially as a result, I feel like both may be framed in overly absolutist terms. To me, it seems like it might be better to think about trigger warnings not as a moral imperative but rather as a community norm. Warnings exist, after all, in dialogue with reader expectations. On some forums, people may expect to be warned about difficult content.”

Trigger warnings are a good idea on social media because of the public way in which content is shared. When an author is writing on a sensitive topic such as suicide, eating disorders, or violence, it’s certainly considered professional and courteous to warn readers in advance that they might find the material sensitive.

When Images Are Intentionally Graphic

Overly sexual images posted on Facebook and its subsidiary Instagram are screened and deleted quickly by the social networking sites; Twitter allows more pornographic images.

These are examples of graphic imagery being used to tell a story and misinterpreted as inappropriate by social media sites. But what about the effects of graphic images that are posted with every intention of being inappropriate?

Bans on porn vary by social media site. Instagram and its parent Facebook are very clearly on the side of family-friendly: Facebook lowered the minimum age for holding an account from 14 to 13.

Facebook’s nudity and pornography policy is zero-tolerance; its standards note “a strict policy against the sharing of pornographic content and any explicitly sexual content where a minor is involved. We also impose limitations on the display of nudity. We aspire to respect people’s right to share content of personal importance, whether those are photos of a sculpture like Michelangelo's David or family photos of a child breastfeeding.”

Video social networking site Vine and its parent company Twitter have always taken a more relaxed approach to adult content, though things heated up recently when both social media sites became embroiled in a teen pornography controversy that prompted a complete policy change.

CNN asked in a report, “Does Twitter’s Vine Have a Porn Problem?”

“On Twitter's end, the anything-goes aspect of Vine jibes with the site's overall philosophy. Compared to Facebook, which believes social sharing is best when tied to a user's true identity and real-world networks, Twitter allows its users to register under fake names and has fought governments and law-enforcement agencies seeking user information. As such, Twitter has taken a more hands-off approach on adult content. It's not hard to hunt down hash tags its users are employing to share adult content on a daily basis. (#TwitterAfterDark becomes a trending topic on the site nearly every day — clicker beware).”

CNN’s “porn problem” question was answered not long after, when an anonymous teen posted a video of himself having sex with a Hot Pocket snack wrap. In its follow-up piece, “Twitter Bans Porn Videos on Vine,” CNN reported that Apple (which holds the key to the download of an app) had forced Vine to change its minimum age from 12 to 17, ultimately causing Vine’s parent company, Twitter, to ban porn on the video site. The company’s announcement stated, “We’re making an update to our Rules and Terms of Service to prohibit explicit sexual content. For more than 99 percent of our users, this doesn’t really change anything. For the rest: we don’t have a problem with explicit sexual content on the Internet –– we just prefer not to be the source of it.”

The changes at the major social media sites raise the question: Who are the Internet moral police? Whose job is it to review the millions of pieces of content that hit the Web day after day and try to make determinations about what is appropriate? Policy changes after petitions and forced age minimums show that the sites are listening to public outcry resulting from a wide diversity in moral standards.

Whether the effect of posting a graphic image to a social media site is positive (creating an impetus for social change) or negative (for example, causing a young person to be denied a job because of his or her Facebook profile photos), it’s clear that posting polarizing images can have an impact beyond the social media profile of an individual.

 

Mary T McCarthy

Mary McCarthy is Senior Editor at SpliceToday.com and the creator of pajamasandcoffee.com. She has been a professional writer for over 20 years for newspapers, magazines, and the Internet. She teaches classes at The Writer’s Center in Bethesda, Maryland and guest-lectures at the University of Maryland’s Philip Merrill College of Journalism. Her first novel The Scarlet Letter Society debuted this year and her second novel releases in 2015.

Ethical Pros and Cons of the Darknet

 

In a remote and dangerous region of the Internet dwells another lesser-known net, a lawless digital no-man's-land with a shady reputation and an ominous name: The Darknet.

Here on the virtually anything-goes Darknet, with its guaranteed anonymity, criminals and scammers of one stripe or another, anti-government rebels, revolutionaries, terrorists and other varieties of outlaw can hawk their goods and services, communicate with visitors and customers to their sites and with each other, and operate anonymously and thus presumably with impunity.

But the Darknet, paradoxically, is also like a Dr. Jekyll and Mr. Hyde, with its share of good qualities as well, including its use by law enforcement agencies to conceal their own legal activities and as a medium for writing and reading what might otherwise be censored in authoritarian states. So the Darknet is not entirely evil and beyond redemption.

But first, let's take a look at the darkest side of the Darknet.

Major crimes committed with the assistance of the Darknet include the sale of guns of every variety and caliber, no license required, no questions asked. Buyers favor assault weapons such as the AK-47, but new and stolen handguns also sell briskly.

A notorious Darknet site called Silk Road was alleged to be a virtual supermarket of narcotics, where traffickers and users could buy their drugs of choice in small amounts or wholesale lots. The FBI took down the site, but soon afterward a site called Silk Road 2.0 appeared and reportedly resumed business as usual.

Counterfeit U.S. currency is also sold on the Darknet – especially 20 and 100 dollar bills, peddled at prices discounted from their face value.

Also much in demand from Darknet entrepreneurs are phony passports, birth certificates, forged or bogus documents and stolen credit cards.

Prostitutes advertise openly on the Darknet, usually with photographs, although some of the photos may not be pictures of the person advertising. But to whom can a swindled customer complain?

Another category of criminal enterprise found on the Darknet is the professional assassin. Murder-for-hire services are boldly advertised, and although there are no reliable figures on the number of Darknet-related contract killings, there is reputedly no shortage of customers.

You might ironically say that some of the Darknet's self-proclaimed assassins are unethical because they take a customer's money and never deliver the promised fatal result.

Other Darknet "hit men" are police conducting sting operations, many of which have been successful in preventing homicides and apprehending potential murderers.

Asian customers of the Darknet have a predilection for animal parts of certain endangered species, which are readily available for a price. Annual sales in this illegal market have been estimated at $20 billion.

Powdered rhinoceros horn is a favorite of Chinese and Vietnamese buyers, although it is popular in many other countries as well. Rhino horn is mistakenly reputed to have mystical, psychoactive and various medicinal properties. A single rhino horn can sell for as much as $500,000 and a kilo of the horn in powdered form sells for as much as $90,000.

Other animal parts offered for sale include elephant tusks, which are pure ivory and sell for high prices on the black market. Tiger parts reportedly also sell well, as do exotic animals, protected by law.

Criminal operators on the Darknet protect their identity through encryption – every Darknet dweller, as mentioned, is masked in anonymity and nobody knows anybody, unless a name is deliberately revealed. Outsiders or third parties cannot access communications between two parties. Sellers and buyers of Darknet contraband and illegal services are therefore extremely difficult to trace.

Additional secrecy is provided by hidden servers and IPs and by URLs whose sites often disappear randomly and unpredictably.

With all these security measures it's easy to see how criminal activity on the Darknet is rampant and for the most part unpunished.

But the same elements that protect felonious activities on the Darknet can also be used for positive purposes.

Political dissidents, for example, who are denied freedom of speech by their dictatorial governments may publish anti-government editorials or messages on rogue sites and leave nary a trace of their identities. There is no government-imposed censorship on the Darknet and so anyone can say anything without fear of retribution.

Also cloaked anonymously in the Darknet's shadows are whistleblowers who can tell news sources their tales of public or private sector misconduct, mismanagement, malfeasance or worse and be assured that they won’t be tagged as the snitch. Information leakers, likewise, can disclose proprietary or classified information and their identities would never be known.

Darknet "hacktivist" vigilantes also roam the territory in pursuit of villains and villainous sites. Recently a hacktivist operation took down a child porn site – frontier justice cyber-style, you might say. The destructive hacking of the site was perhaps unethical and maybe illegal, because there was no trial by jury.

It's not difficult to get to the Darknet. Google “TOR Project” and the following link will appear: https://www.torproject.org/index.html.en.

The TOR Project site – TOR is short for The Onion Router – is home to a network that allows anonymous browsing of both the mainstream Internet and Darknet. TOR also hosts a directory of Darknet sites, most of which have the suffix .onion as the address extension.
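For the technically curious, the routing just described can be sketched in a few lines. This is a hypothetical illustration, not part of the TOR Project's documentation: it assumes a Tor client is already running on its default local SOCKS port (9050), and the helper function and example addresses are invented for the sketch.

```python
from urllib.parse import urlparse

# A running Tor client exposes a local SOCKS5 proxy (port 9050 by default
# for the daemon, 9150 for the Tor Browser bundle). Pointing an HTTP client
# at it routes traffic through the Tor network; the "socks5h" scheme asks
# the proxy to resolve hostnames as well, which is required for .onion
# addresses, since they cannot be resolved by ordinary DNS.
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def is_onion(url: str) -> bool:
    """Return True if the URL points at a Tor hidden service (.onion)."""
    host = urlparse(url).hostname or ""
    return host.endswith(".onion")

# Hypothetical usage with the requests library (needs the requests[socks]
# extra and a running Tor client; not executed here):
# import requests
# r = requests.get("https://www.torproject.org/", proxies=TOR_PROXIES)

print(is_onion("http://exampleabcdef.onion/"))   # a (made-up) hidden service
print(is_onion("https://www.torproject.org/"))   # an ordinary clearnet site
```

The point of the `socks5h` detail is the one the article hints at: name resolution itself happens inside the Tor network, so neither the destination nor the lookup leaks to the local ISP.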

Be forewarned, however: visiting the Darknet is definitely hazardous, and all due security precautions should be taken. Don't share personal information, and don't reuse passwords from elsewhere on the Internet. It's also not wise to click on links and, of course, never give out credit card or Social Security numbers.

The U.S. government just recently announced that it would give up oversight of the Internet in 2015.

But after Bill Clinton urged the administration not to give up Internet control, after 35 Republican senators sent a letter to that effect to the President, and after Congressional hearings, the administration changed its mind and will retain oversight of the Net.

The U.S. currently controls The Internet Corporation for Assigned Names and Numbers, also known as ICANN, which assigns domain names and web addresses and keeps Net operations running smoothly, openly and free of domestic and foreign political pressure.

If the planned relinquishment of Internet oversight were implemented, the U.S. would hand over control to a group of global companies, nonprofits and academics. Opponents of the move predicted that repressive governments such as Russia, China and North Korea – there are many others as well – would increase their authoritarian control of the Internet and step up their already strict censorship.

Without U.S. control, countries with strong restrictions on the Internet would not permit Darknet access. But where the Internet would be open and free of restraints, the Darknet – if it continued to exist – would likely flourish. Why? Because crime is a market-driven enterprise and people will want to speak, write and read freely, without censorship of any kind. This is the great paradox of the Darknet.

 

Marc Davis

Marc Davis is a veteran journalist and published novelist. His reporting and writing has been published in numerous print and online publications including AOL, The Chicago Tribune, Forbes Online Media, The Journal of the American Bar Association, and many others. His latest novel, Bottom Line, was published in 2013.

Through the Google Glass: The Ethics of Wearable Computers

 

 

Maybe you’ve seen them around town – those bold Glass Explorers whose easily recognizable high-tech eyewear distinguishes them from the rest of us.

While most would just pass by, there is a segment of the population that wonders and may even worry about what that Glass wearer is doing with his or her device. Are they recording us? Are they taking a picture? Can they see through our clothing like some sort of modern-day version of a comic book’s X-ray glasses?

Concerns about what wearable computers may or may not see have sparked debate, pages of online commentary and even a few notable physical confrontations, particularly along the West Coast in cities including Seattle, San Diego and San Francisco. While Google Glass has been around for about two years, it seems the conversation around it will go on long into the future.

For newbies, Google Glass is eyewear that includes a wearable computer. The Glass unit is attached to what appears to be a basic eyeglass frame (although Google recently upgraded these frames through a partnership with Luxottica, the owner of the Ray-Ban brand). Glass can be attached to either side of the frame. Wearers can then communicate with the Internet via voice commands, and the requested information is shown on an “optical head-mounted display,” or OHMD.

A Google division developed Glass in hopes of creating a device that could gain widespread popularity. The company made its early prototypes public in 2012. Its beta testers, known as “Explorers,” were able to purchase a unit in 2013.

Glass allows the user to do everything from taking pictures with its camera and shooting videos to receiving directions and checking email. Early users praised the product in the media; Time magazine even named it one of its “Best Inventions of the Year 2012.”

With any new technology, there are always tensions over how it should be used. Google Glass has been among the more controversial introductions, something observers say may be the result of its limited availability and, in some cases, jealousy over who was among the first “Explorers” to try it out. The next “open” public offering for purchasing Glass came April 15, and the available units sold out quickly, Google said via its social media sites. (The bundle costs $1,500 plus tax and includes Glass, a charger, a pouch, a mono earbud and your choice of a shade or a frame at no additional charge.)

But, Glass proponents and detractors agree, with smartphones, wearable computers and who knows what else coming down the pipeline, the discussion about whether it is acceptable to take pictures or video of people without their consent is a worthy one. As the general public gains access to such devices, the potential implications surely will grow, along with apps, websites and the like to show off the good, the bad and the ugly they capture.

And things will change – rapidly. In the past year, the company has released nine software updates, 42 Glassware apps, iOS support, prescription frames and more, all largely shaped by feedback from its Explorers.

Although there are suggested “rules” on how to use a Glass device, how each individual approaches the technology will determine both its future and the personal safety of the person using it. In other words, while Google gently recommends that you don’t act like a “Glasshole,” the wearer is the ultimate judge of how and when to use the eyewear.

According to Google’s website, “Respect others, and, if they have questions about Glass, don’t get snappy. Be polite and explain what Glass does and remember, a quick demo can go a long way. In places where cellphone cameras aren’t allowed, the same rules will apply to Glass. If you’re asked to turn your phone off, turn Glass off as well. Breaking the rules or being rude will not get businesses excited about Glass and will ruin it for other Explorers.”

In March, Google also issued “Top 10 Google Glass myths,” a list that attempted to clarify Glass’s effects on the wider world. Privacy and ethical issues were among its topics, including concerns that “Glass is the perfect surveillance device.”

The article noted: “If a company sought to design a secret spy device, they could do a better job than Glass! Let’s be honest: if someone wants to secretly record you, there are much, much better cameras out there than one you wear conspicuously on your face and that lights up every time you give a voice command, or press a button.”

According to current users, a Glass wearer must speak a command (such as, “OK, Glass,” as demonstrated in Google’s online tutorials) or touch the device to start taking pictures or recording video. It is fairly obvious to both the user and the person being photographed because the Glass wearer has to look directly at someone to take a photo or video of them.

Much of the public commentary started in February after Sarah Slocum, a San Francisco-based tech enthusiast and writer, said via social media and media reports that she was attacked verbally and physically for wearing Google Glass in a bar. A second incident took place in mid-April when 20-year-old journalist Kyle Russell said he, too, was attacked as he walked to a San Francisco train station. Russell told news outlets that he believed the woman who snatched his Glass off of his face was bent on destroying it. The device was rendered useless after the incident, he said.

Much of what you hear from Glass users about their run-ins with the general public is negative. “Worried” seems to be the most common word they use when discussing the reactions everyday people have to their unique eyewear.

Ari B. Adler is an avid Glass Explorer, receiving his eyewear in December 2013. Adler is the Press Secretary to Michigan Speaker of the House Jase Bolger and Director of Communications for the Michigan House Republican Caucus. He advocates for its use in essays on Michigan’s news sites and shares his personal experiences on his blog site.

While he notes that he wishes Glass could do more, or do it better, Adler said he is smitten with the technology and hopes that more people will join him – as soon as they get over their presumptions about how it works.

“I am in a unique position because of my job in the Legislature, so you can imagine how people wondered what I was recording or not and I got a lot of questions at first – based on public misinformation – about live streaming everything all the time, which of course is ridiculous,” Adler said in an interview with me.

Author Brenda Cooper, who lives in the Seattle area, said nearly all of her interactions with people about her Glass and status as a Glass Explorer have been positive.

“Most people are curious. A few are not, although when I show it to them or explain it to them, they generally stop being worried,” Cooper said.

Cooper noted that she thinks about where she chooses to wear her Glass. Her goal is to find places that enhance the interactions she seeks rather than harm them. So she makes a point of wearing it to tech events, science fiction writers’ groups or with other wearable enthusiasts. She also puts it on when she’s walking her dogs or going out to public parks.

However, she avoids wearing Glass in theaters, particularly performance or movie houses. Business meetings are another no-no. She also prefers to leave Glass at home when dining in restaurants, unless the other diner is a Glass Explorer as well.

Her personal rules mirror those that Google itself recommends. She said she has basically “no interest in pictures of random strangers,” and she believes few other Glass Explorers do either. “If I get one on accident, I just delete it,” she added.

“I always ask before taking a picture that’s clearly a picture of an individual. I might take group shots of parks or events which have people in them, and then I don’t ask since it would be just too difficult and invasive to bother people to ask,” Cooper said. “Pretty much I treat it the same way that I treat a cellphone – and it would be far easier to take surreptitious pictures of strangers with a phone.”

People’s concerns about privacy aren’t really rational, Cooper says. Glass cannot be left on without completely draining its battery, she noted, and she ponders what she would do with all of that footage generally.

“We’re surrounded by cameras in stores and on streets and in people’s hands. There are cameras in so many places that most of us in urban environments are on camera more often than not when we’re out of our houses,” Cooper said. “I think it’s a bit of the fear of the unknown. Glass is the first truly different user interface to come along in a while, and I think that new things frighten some people.”

Whether Glass is an elitist item with its $1,500 price tag remains debatable, Cooper added. But its relatively high price may be part of the problem. Plus, having that distinctive eyepiece on your face combined with people’s irritation with the 24/7 world of connectivity may play into the fears.

“People are naturally wary of things that are exclusive,” Cooper said. “Google is huge now, and I think the worry about big corporations causing damage affects some people’s view of Glass. … [Google] did a nice thing with the Glass Road Show over the last few weeks. Opening Glass to more people with the open sale [in April] was good. The more Google signals positive things about supporting Glass, the better.”

Limiting the kind of applications Glass uses or personal limitations on certain functions such as facial recognition also will help, Cooper believes.

“For example, I want facial recognition, but only on a limited basis, like something that will work for my social-media contacts. I have no interest in having information about strangers or in strangers having information about me – I’d find both rather creepy to be honest,” Cooper said.

Perhaps Google and its early adopters were too aggressive in trying to show off how “cool” they were to have the first versions of the product. Maybe the rest of the world was too eager to show how little we collectively cared. How the haves and have-nots get along in the months and years to come is likely to determine whether Glass can become as ubiquitous as its developers hoped.

 

Karen Dybis

Karen Dybis is a Detroit-based freelance writer who has blogged for Time magazine, worked the business desk for The Detroit News and jumped on breaking stories for publications including City’s Best, Corp! magazine and Agence France-Presse newswire.

Ethics of Ride-Sharing

 

The Internet has upended a whole range of businesses and business models. The music industry is still struggling to deal with the fact that online the price of listening to a song has effectively plummeted to $0. Meanwhile, newspapers and magazines are trying to figure out how to cope with rapidly decreasing ad revenue and competition from oodles of free content online. Brick-and-mortar bookstores have collapsed as Amazon gets more books to more customers more inexpensively.

And now, surprisingly, the next sector that looks like it might be done in by digital is the taxicab industry.

At first, it seems like cabs would be relatively safe from online commerce. After all, cabs are out here in the real world; you can't catch a ride from a digital file. But despite that barrier, several new companies, such as Uber and Lyft, have figured out a way to use digital technology to create a new model for getting customers from place to place. The companies are based around digital apps that allow clients to request a ride. The driver nearest to the client is then notified and goes to pick up the client.

This is an innovative use of technology and could certainly be convenient for some people in some situations. And the ride-sharing companies have another, overwhelming advantage: They aren't regulated.

Driving is so much a part of life in the U.S. that people tend not to think about how dangerous it is — but that doesn't change the fact that it's really quite dangerous. In 2012, there were 33,561 traffic fatalities in the U.S. In comparison, there were only 153 airplane fatalities from 2002 to 2012 combined.

Because cars are dangerous, they are heavily regulated in terms of safety — seatbelts, airbags, insurance. Commercial vehicles like taxis are even more closely regulated. As Scott McCandless, a taxi driver in Atlanta, explains:

"Taxicabs are required to display the meter price, vehicle and company identification and contact phone number on both sides of the vehicle to protect the public and the consumer. They are required to display a top sign, usually marked ‘Taxi,’ to announce that it’s a public vehicle for hire, in order to prevent discrimination. They must carry special, expensive, vehicle-for-hire insurance, and be regularly inspected to protect public safety. They also must display the driver’s permit and the taximeter in plain sight to protect the consumer."

Uber and Lyft can avoid all of that. As McCandless notes, they are on call only for people with cellphones, credit cards and access to the app, which may exclude people with lower incomes or fewer resources. They do not have to be wheelchair-accessible either. If ride-sharing services drive taxis out of business, who will serve people with disabilities?

Even more disturbing is the question of insurance. UberX, the portion of Uber's service which relies on ride-sharing, carries a commercial insurance policy with $1 million of coverage per incident. However, in most cases, the policy is only applied if the insurance of individual drivers does not cover the accident. And further, the insurance does not apply unless drivers are driving to pick up a client or already have a client in the car. This is a problem because using a car for commercial purposes often negates a driver’s personal insurance. Thus, when an UberX driver hit a young girl between fares on New Year's Eve 2013, neither the driver's insurance nor Uber's seemed to apply (the family is now suing Uber). Similarly, a San Francisco UberX driver named Bassim Elbatniji told Reuters that his insurance refused to pay when his car was involved in an accident in September 2013. His passengers are suing both him and Uber.

In large part because they have lower (and often inadequate) insurance, and do not have other costs associated with taxi regulations, ride-sharing companies are cheaper. Cincinnati.com reports that UberX offers a $2 base fare and $1.40 per mile; taxi companies in Cincinnati offer a $4 base fee and $2 per mile. Obviously, if UberX had to have the kind of insurance coverage that cabs do, it couldn't charge such low fares and it wouldn't have the same competitive advantage. William Rouse, the general manager of Los Angeles Yellow Cab, says the lack of regulation allows ride-sharing services to charge fares 30 percent below those of cabs.
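As a rough sanity check on those numbers, the sketch below compares the two fare schedules at a few trip lengths. It uses only the Cincinnati figures quoted above; the function names and sample distances are my own.

```python
def uberx_fare(miles: float) -> float:
    """Cincinnati UberX pricing cited above: $2 base fare, $1.40 per mile."""
    return 2.00 + 1.40 * miles

def taxi_fare(miles: float) -> float:
    """Cincinnati taxi pricing cited above: $4 base fee, $2 per mile."""
    return 4.00 + 2.00 * miles

for miles in (2, 5, 10):
    u, t = uberx_fare(miles), taxi_fare(miles)
    pct = (t - u) / t * 100  # how much cheaper the UberX trip is
    print(f"{miles:>2} mi: UberX ${u:5.2f} vs. taxi ${t:5.2f} "
          f"({pct:.0f}% cheaper)")
```

At these rates UberX is cheaper at every distance, since both its base fare and its per-mile rate are lower, and the gap on these sample trips (roughly 33 to 40 percent) lands in the neighborhood of Rouse's 30 percent figure.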

An opinion piece by Mike O'Brien and Rich Stolz in the Seattle Times suggests substantially increasing the number of taxi licenses and adding 300 licenses for app-based ride-sharing services. They also recommend "uniform safety and liability standards, such as vehicle safety inspections, driver training requirements and liability insurance requirements." In other words, ride-sharing services would have to meet the same standards as taxis. Meanwhile, in Houston, the city council has slowly moved to consider rule changes. Ride-sharing companies have moved quickly into the market, however, and cab companies are now suing in federal court, arguing that the companies are defying city ordinances and thereby taking money from the law-abiding taxi companies.

Ride-share services argue that regulating the industry will destroy it. They say they rely on semi-professional drivers who would not be able to afford expensive insurance or safety inspections. These considerations are serious ones. New business models and new services are good for consumers, and it's clear that there is a large demand for the convenience and reliability of ride-sharing services. The tight regulation of the number of cabs in many cities and the large entrance price in many places (up to $65,000 in Atlanta) have obviously restricted growth and hurt consumers and cab drivers alike. Taxis could in many cases stand less regulation.

But the answer isn't to do away with regulation altogether. Commercial driving services are too dangerous to operate outside the law. Ride-sharing services should not be allowed to keep rates low by transferring their insurance costs to drivers, passengers and the hospitals and emergency personnel who have to deal with injuries and accidents. Ride-shares need to have the same safety and insurance regulations as taxis. If that means that they have to figure out new ways to be profitable — well, driving is a risky business.

 

Noah Berlatsky

Noah Berlatsky edits the comics and culture website the Hooded Utilitarian and is a correspondent for the Atlantic. He is working on a book about the original Wonder Woman comics.

Whistleblowing Ethics

 

The term whistleblowing is thought to be derived from the police tradition of many decades ago of blowing their whistles to summon more police when they witnessed a crime being committed. Those were the days when a police beat was patrolled on foot. Now whistleblowing means calling attention to, or disclosing, wrongdoing in any organization — public or private.

Hailed as both a patriot and a traitor, Edward Snowden will long live in the annals of notorious government whistleblowers. Many advocates of privacy and civil rights regard him as heroic and courageous. Government officials, large segments of the media and a substantial portion of public opinion denounce him as the whistleblower who most damaged his country's national security.

Snowden, who worked as a contractor for the National Security Agency, downloaded and stole countless classified documents from the intelligence agency's computers and leaked thousands of them to the media.

U.S. intelligence agencies and Pentagon officials claimed the leaks would cause untold damage to the nation's security and require millions of dollars and years to repair.

Snowden says he was driven mainly by conscience to disclose the NSA's secrets. He was interviewed and quoted at length in the May 2014 issue of Vanity Fair, apparently using the magazine to tell his side of the story and to justify his actions.

"When you are in a position of privileged access you see things that may be disturbing," Snowden said. "Over time that awareness of wrongdoing sort of builds up."

The wrongdoing Snowden was referring to was the NSA's surreptitious and illegal surveillance of American citizens, including their phone calls, emails and Internet use.

Snowden also revealed that the NSA was spying on top-level officials of foreign governments, among them German Chancellor Angela Merkel.

"I took an oath to support and defend the Constitution, and I saw the Constitution was violated on a massive scale," Snowden said. That constitutional violation, according to Snowden, was the NSA's domestic spying.

Snowden's actions immediately ignited a debate between the imperatives of national security and a U.S. citizen's right to privacy.

"There's a limit to the amount of incivility and inequality and inhumanity that each individual can tolerate; I crossed that line," Snowden said.

The Whistleblower Protection Act of 1989 protects federal employees who disclose wrongdoing, fraud, waste, mismanagement or other malfeasance. No retaliatory action may be taken against such whistleblowers. Additional whistleblower protections have since been added, including a Presidential Policy Directive issued by President Barack Obama in 2012 protecting government employees of intelligence agencies.

But, of course, the law does not protect employees of the federal government who leak secret documents. Despite Snowden's claim to be defending the U.S. Constitution, he has been charged by the U.S. Department of Justice with espionage. If he is tried and convicted of espionage, he would face a lengthy sentence in a federal prison.

One highly placed military intelligence officer reportedly thinks Snowden may be a spy for some foreign country, most likely Russia or China.

Snowden's leak is not unprecedented. There have been other government employee whistleblowers. A previous massive disclosure of government secrets occurred in 2010 when a website called WikiLeaks ran classified documents stolen by U.S. Army Pvt. Bradley Manning. The site, operated by Australian Julian Assange, was notorious for posting government secrets. Manning is currently serving a 35-year sentence for espionage. Assange lives in London's Ecuador Embassy, where he sought political asylum to avoid extradition to Sweden where he faces a sexual assault charge.

But not since Daniel Ellsberg leaked the so-called Pentagon Papers in 1971 has there been a more controversial unauthorized disclosure of secret government documents.

Ellsberg was a RAND Corporation analyst who had access to a secret U.S. government report of some 7,000 pages detailing American relations with Vietnam, formerly French Indochina, from 1945 to 1967. Ellsberg, an opponent of the Vietnam War, was charged with 12 different felonies, all of which were dismissed in 1973.

Both Snowden and Ellsberg were seemingly motivated by ethical impulses and apparently ignored the potential consequences of their actions. Other equally well-meaning people, many presumably also ethically concerned, decried these violations of the law. A dilemma for public sector whistleblowers is that sometimes the disclosure of what they perceive as wrongdoing may be a criminal act, e.g., revealing classified documents, proprietary information or clandestine activities.

Private sector whistleblowers in publicly traded companies are also protected. The Sarbanes-Oxley Act of 2002 and the Private Sector Whistleblower Protection Streamlining Act of 2012 both prohibit any retaliatory action by an employer against a whistleblower employee for disclosing wrongdoing.

Among the more newsworthy private sector whistleblowers is Sherron Watkins, a former vice president of corporate development at the now-defunct Houston energy firm Enron who made national headlines in 2001 and 2002. Watkins was an internal whistleblower who wrote a memo to Enron's CEO, Ken Lay, warning of accounting fraud at the firm. She later testified publicly before the U.S. Senate and the House of Representatives.

In 2002, Time magazine named Watkins one of its three "Persons of the Year," along with Cynthia Cooper of WorldCom and Colleen Rowley of the FBI, tagging the trio "The Whistleblowers" in an issue of the publication devoted to that theme. Cooper was vice president of internal audit at the firm and led an investigation, which revealed that WorldCom had perpetrated the largest accounting fraud in U.S. corporate history. Rowley was an FBI special agent and disclosed endemic problems at the bureau and in the intelligence community.

To guarantee the anonymity of whistleblowers and tipsters, many news publications maintain websites or other methods whereby information about wrongdoing may be revealed without attribution to a specific source.

Among several not-for-profit organizations protecting whistleblowers, the Government Accountability Project (GAP) is one of the largest. Founded in 1977, the GAP is a non-partisan public-interest group which litigates whistleblower cases, helps expose wrongdoing to the public and actively promotes government and corporate accountability, according to its website. The organization says it has helped more than 5,000 whistleblowers since it was founded.

The GAP lists four ways to blow the whistle:

- "Reporting wrongdoing or a violation of the law to the proper authorities, such as a supervisor, a hotline or an Inspector General.

- Refusing to participate in workplace wrongdoing.

- Testifying in a legal proceeding.

- Leaking evidence of wrongdoing to the media."

But potential whistleblowers face a major ethical dilemma: loyalty to their employers, or to their clients if they are outside vendors, service providers or consultants serving government, business, academia, the press or other institutions.

Ethicists and people of conscience, however, recognize a higher loyalty – the imperative to call attention to wrongdoing for purposes of a higher good. A potential problem is the definition of higher good; people of conscience may disagree on what that means.

When several reporters for The Washington Post and Britain's Guardian newspaper were co-recipients of the Pulitzer Prize this year for their articles based on Snowden's leaked information, Snowden claimed the prize vindicated his actions.

Snowden is now living in Russia under a temporary but renewable grant of asylum, a fugitive from American justice. Ironically, he has said that he does not want to live in a country that spies on its citizens. Yet the Russian government is notorious for domestic surveillance.

Snowden still retains hundreds of thousands of top-secret NSA documents stored on laptop computers and thumb drives. U.S. intelligence agencies are worried that Snowden will continue to leak secrets to the world's media, further damaging U.S. national security.

Meanwhile, the ethical debate continues over national security, a citizen's right to privacy and where an American citizen's loyalties properly belong.

 

Marc Davis

Marc Davis is a veteran journalist and published novelist. His reporting and writing has been published in numerous print and online publications including AOL, The Chicago Tribune, Forbes Online Media, The Journal of the American Bar Association, and many others. His latest novel, Bottom Line, was published in 2013.

Is sting tweeting ethical?

 

Live-tweeting has become a staple of journalism and reporting. People post running tweets from lectures, from sports events, and from natural disasters. There has even been live-tweeting from police stings or raids, especially on sex workers — a practice that raises disturbing ethical dilemmas.

On November 6, 2011, New York Times reporter Nicholas Kristof posted a series of explosive tweets to his account.

"Joining raid on brothel in Cambodia that imprisons young girls," his first tweet announced. He then detailed the successful raid ("Girls are rescued, but still very scared. Youngest looks about 13, trafficked from Vietnam"), and the tense aftermath ("Large crowd gathered outside brothel. Police seem wary of brothel owners or military friends staging attack.")

Kristof participated in the raid along with Somaly Mam, a Cambodian trafficking survivor and anti-prostitution activist who has recently come under scrutiny for allegedly paying young women to lie about having been trafficked and raped, and for other ethical breaches. The recent questions about Mam's work underscore Kristof's problematic use of Twitter to turn a brothel raid into a public spectacle. In Playing the Whore, Melissa Gira Grant explains that “it goes without saying that [Kristof] published all of this without obtaining [the women's] consent."

In addition, Human Rights Watch has reported that in Cambodia "police and other authorities have unlawfully locked up sex workers, beaten and sexually abused them, and looted their money and other possessions." Kristof enthusiastically reported that "rapes are over," but in fact there is no reason to think that sex workers "rescued" in Cambodia are safe. They could well be in even more danger.

As Laura Agustín, author of Sex at the Margins: Migration, Labour Markets and the Rescue Industry, said in an article on her website:

"The real disorder in Kristof’s blithe chirping about brothels closing is the absence of responsibility towards the people working in them: where did they go? how will they live? do they have a roof over their heads now? How can he not understand that this is just how trafficking can happen, in his own sense of the word?" [emphasis Agustín's]

The comfortable narrative of the “good guys” swooping in to make a heroic rescue is fatally flawed. The authorities in this situation are not necessarily trustworthy, nor are they necessarily working to help the women being "rescued." Kristof's Twitter account of the raid simplified the plight of sex workers in order to provide a more sensational and upbeat story.

Live-tweeting has also been adopted by law-enforcement agencies. For example, Birmingham police live-tweeted a drug raid in 2011. Even so, the greatest publicity seems to focus on live-tweeting of actions taken against sex workers. In early May 2014, the Prince George's County Police Department in Maryland announced that it would live-tweet a prostitution sting. The PGPD issued a press release stating: "From the ads to the arrests, we’ll show you how the PGPD is battling the oldest profession." Though they later clarified that they would be targeting johns rather than prostitutes, a publicity photo showed police with a woman in handcuffs. The PGPD did not respond to my request to clarify whether the woman had been arrested for prostitution, or whether she had given consent to have her photo used.

There was much criticism about the prospective sting, and the PGPD ultimately decided not to live-tweet the operation, saying that doing so might endanger the undercover agents involved. The PGPD said that they had announced the live-tweeting as a way to scare off johns, and when they made no arrests, they said that the tactic had worked.

Sienna Baskin, the Managing Co-Director of the Sex Workers Project at the Urban Justice Center, explained to me the problems she sees with live-tweeting, and with stings.

"I believe that it is unethical for a variety of reasons. Prostitution stings are usually orchestrated by undercover police officers who pose as potential clients of sex workers to “catch” them in the act of prostitution. Subjecting targeted prostitution suspects not only to arrest, but to publication of details that may jeopardize their privacy on the basis of misrepresentation is unethical, especially when such stings often result in the suspect putting him or herself in sexualized situations. The goal of these publicized and tweeted stings seems to be publicity for the law enforcement agency, and therefore the human rights and interests of the targeted sex workers are not considered."

Baskin adds that "the public tends to be titillated by any stories involving sex work, due to the highly stigmatized nature of sex work and of sexuality itself in our culture." People naturally find stories about sex workers simultaneously exciting and entertaining. As a result, journalists and police have an incentive to tweet about these topics in order to draw attention and potentially promote or market themselves. Just as Somaly Mam appears to have created sensational stories to encourage donations for her organization, so Kristof and the PGPD may reap career benefits if they are seen as fighting the good fight. Thus the PGPD claimed to want to prosecute johns and rescue women in the sex trade, but advertised the sting with a picture of a woman in shackles. Similarly, Kristof's desire for sensational images and copy seems to predominate over the ultimate safety of the women in danger.

Agustín argues that the main issue with the live-tweeting of raids and stings is not necessarily the tweeting, but the actions of the police. When police tweet compromising photos, they are not doing anything different in kind from their standard operating procedure. She wrote me:

"Photos of arrested prostitutes have long been routinely posted online by police and reporters. To talk about the ethics of tweeting them and treating criminals as fair game you'd need to put it in context. There is no 'confidentiality' for many people either online or on police bulletin boards."

Agustín's point is well-taken. The ethical dilemmas that arise when tweeting about prostitution raids seem linked to the general ethical problem of treating sex workers as criminals. The excitement and sensationalism of live-tweeting is partly why reports of policing and patrolling sex work are framed as titillating. It becomes about the enjoyment or satisfaction of a breathless public, rather than about the safety of the women involved. Live-tweeting prostitution raids seems to be more a symptom than a cause, so ending live-tweeting in and of itself wouldn't create a more ethical approach to sex work. But if we had a more ethical understanding of sex work, we surely wouldn't have Nicholas Kristof, the PGPD, or anyone else live-tweeting raids.

 

Noah Berlatsky

Noah Berlatsky edits the comics and culture website the Hooded Utilitarian and is a correspondent for the Atlantic. He is working on a book about the original Wonder Woman comics.

The Ethics of Cell Phone Photography

 

High-powered celebrities contend daily with paparazzi dogging their steps and filming their every move. But now even the Average Joes of the world need to be on their toes, lest they find their faces plastered on a mobile blog, a Facebook page, or even displayed as part of an art exhibit.

Let’s face it, “street photography,” the appellation that describes those gritty, artful portraits of the human condition as it plays out in the public eye, is a far cry from surreptitious snaps of the overweight, underdressed or simply unfortunate in the hugely popular People of Wal-Mart site. These unflattering shots are made temptingly easy by the discreet nature of the camera phone. In fact, the smart phone’s small size and universal appeal make it simple to take snaps of almost everything, everywhere.

In the past, even a small personal camera would have been obvious to bystanders, but today’s cell phone cameras let people take a picture while pretending to text, make a phone call or check their email. The small footprint of the cell phone allows covert photos to be taken “up the skirt” or “down the shirt” without a subject’s knowledge, making previously sacrosanct spaces like locker rooms, public restrooms, and gyms a photographic voyeur’s delight. In addition, where past photos of this nature might only be shared amongst a small group, today’s Internet allows global distribution of these embarrassing, intrusive shots.

Indeed, cell phone photography has gotten quite a bad rap lately. Sixty-two percent of schools currently ban cell phone usage in the classroom due to concerns about cheating, cyberbullying and sexting. Courts are moving to ban the devices as well, removing cell phones from courtrooms in order to protect witnesses, victims, juries and even the accused from photographic intrusion that can put them at risk.

Questionable cell phone camera usage has become an issue for many medical service providers and their patients. For example, one hospital clinician in New Haven, Conn. took a picture of the fatal gunshot wound of a young victim, who later died, and shared it digitally with another staff member. From there, the photo found its way to numerous other individuals. The grieving mother is considering a lawsuit against the hospital for violating her son’s patient confidentiality rights and several employees involved in the scandal have been terminated.

Other institutions that deal with life-and-death situations such as fire departments and emergency services providers face similar problems. The Chesapeake Fire Department in Chesapeake, Va. conducted a study regarding the use of cell phones and, particularly, cell phone cameras in the public service workplace. After compiling their results, the department recommended strict guidelines for cell phone camera usage that placed a high priority on preserving the privacy of other employees and clients to avoid legal entanglements.

More recently, an Arkansas gynecologist came under fire for taking cell phone pictures of a client’s nether regions during medical examinations in his office. After his arrest, officials seized his cell phone and discovered “numerous images of nude females that appear to have been taken in a medical office during medical examinations.” The doctor has since been charged with five counts of video voyeurism, a Class D felony.

The Video Voyeurism Prevention Act was passed by Congress in 2004 to deal with the growing threat of privacy invasion through mobile technology. Specific wording in the law provides protection from video, camera or other images being taken and/or broadcast of individuals if they are in an area in which they have a “reasonable expectation of privacy,” such as a gym, a department store dressing room, an emergency room, or a public restroom. Violators can expect up to $100,000 in fines and/or up to one year in prison. Many states have followed suit by enacting laws of their own that deal with digital-age privacy violations.

But where does that leave the People of Wal-Mart? Modern tort law gives us four models of invasion of privacy.

- Intrusion of Solitude: this comprises physical or electronic intrusion into the private quarters of an individual including trespass, secret surveillance and misrepresentation;

- Public Disclosure of Private Facts: this alludes to the broadcast of truthful information, which a reasonable person would find objectionable;

- False Light: this refers to the publication of facts that put a person in a false light even if the facts themselves are not defamatory; and

- Appropriation: the unauthorized use of a person’s name or likeness to obtain some benefit.

Initially, it might seem hard to find fault with the site. The people pictured are in a public place with no reasonable expectation of privacy, and there are no private facts being divulged. However, each of the photos is captioned in an amusing but often derogatory way, which can amount to defamation of character and thus libel. In addition, I would argue that the owner of People of Wal-Mart is obtaining quantifiable benefits from the unauthorized use of these people’s images to drive traffic to the site, which features copious affiliate advertising from which the site owner profits. Posting these photos in order to build site traffic may amount to appropriation, but so far none of the “victims” have filed suit.

So legally, it’s a stretch. But ethically? Not so much. One particularly troubling People of Wal-Mart photo shows an elderly gentleman with an incontinence problem. We don’t know if the gentleman in question was aware there was an issue with his clothing, but we can be reasonably certain he didn’t realize his photo was going to be broadcast on the Internet for his children, grandchildren, friends and family to see. What if that was your father or grandfather? Or what if an eccentric or even mentally troubled friend or family member is the “Featured Creature” for the day? Certainly, the website is at best dehumanizing to its subjects—the captions under each picture make that crystal clear. But even without the captions, the intent to ridicule is present in the very act of disseminating the photos.

The same ethical questions should be asked when we photograph people in difficult situations: the homeless, the poverty-stricken, the physically challenged. Although they may be in a very public place with no seeming recourse to a privacy defense, they should receive protection from the standpoint of ethics. Snapping covert pictures of the ridiculous or unfortunate and broadcasting them on a worldwide forum like the Internet is tantamount to defamation of character. Our digital world ensures that every picture has a wide audience, subjecting the person photographed to widespread disdain, mockery or scorn, potentially resulting in embarrassment at the very least and possibly even the loss of a job or a relationship. The cell phone, with its easily concealed camera and low profile, allows people to not only satisfy morbid curiosity, but to share it with the world. Just last year, a known homeless man was found dead in downtown Houston with his lower body unclothed. According to police, not only did people not stop and help, but some even took pictures with their cell phones. One woman who was interviewed noted, “That’s not nice. Because that was somebody’s son. Somebody’s brother.” She could not have been more correct. Such unkind memorialization of someone’s misfortune is just plain ugly.

These scurrilous impromptu photo shoots are not victimless crimes—nor are they “art.” There is a real intent to demean, debase or disparage the individuals who are photographed, and digital media makes it easier, faster and more covert. In the past, the erstwhile subject would at least be aware that his photo was being snapped. Cameras were large, obtrusive and even loud as shutters captured an image. Digital phones allow camera shots to be taken furtively and silently while the subject remains largely oblivious, and our digital interconnectivity allows photos to “go viral” instantly, giving victims of this newfound street photography little recourse to protect their dignity.

While the legal debate over digital privacy invasion continues, cell phone owners have the freedom to take pictures and videos of anyone venturing out in public. Let’s hope that most will have the sound judgment to understand the impact of such bits of information on the lives of their subjects and confine their photography to people whom they know, or at least whom they’ve asked for permission.

 

Nikki Williams

Bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

Codes of Ethics for Online Reviewing

 

The Internet has made reviewers of us all. Every positive or negative consumer experience can be shared immediately. The law of averages suggests that in the long run these reviews, when taken together, will provide an accurate reflection of consumer experiences. However, that does not absolve individual reviewers of certain ethical standards and obligations.

With 85 percent of consumers reading Yelp, reviewers have an obligation to be honest, disclosing any bias or conflict of interest. Online reviewers may not be bound by the Association of Food Journalists’ code of ethics, which states that writers must use their real names, fact-check their info, and visit a restaurant multiple times—or even the Food Blog Code of Ethics. Still, they should adhere to the following ethical standards:

  1. Disclose clearlyif you received payment, freebies, or other compensation
  2. Don’t praise a business if you’re personally connected

The FTC requires bloggers to disclose any compensation. To not do so is breaking the law, but interpretations of that requirement vary. One amateur fashion blogger I follow indicates clothes were gifts with a small “c/o [brand]” at the end of the post in question. But “c/o” might not be understood by casual readers to mean, “They sent me this, so I’m giving them free publicity.” I know a blogger’s gotta eat, but reviewers should go out of their way to be transparent. Even if you technically aren’t breaking any laws, being shady about free stuff is a good way to ruin your reputation and lose readers.

Yelp reviewers should adhere to the same standard of disclosure, but sometimes the lines are blurry. Yelp Elite members, for instance, regularly are invited to restaurants and bars for free tastings. I’m one such member, but I’ve never gone because it felt underhanded to me – “There’s no such thing as a free lunch” and all that. At my first event last month, I asked more experienced Yelpers if we were, indeed, expected to write glowing reviews in return for free mini-pies and cocktails (even though that had never been explicitly stated), and the sentiment was yes. A simple “I ate for free, thanks to a Yelp event” in a review would probably suffice, but it still made me uncomfortable.

Whole Foods’ CEO, John Mackey, caught a lot of flak in 2007 when it was revealed that he’d been extolling Whole Foods on Internet forums for the past eight years under the name “Rahodeb” (an anagram of his wife Deborah’s name). The New York Times reported that Mackey wrote more than 1,100 posts on Yahoo Finance’s online bulletin board, many gushing about Whole Foods’ stock prices. (He even complimented his own appearance when another commenter insulted it, writing, “I like Mackey’s haircut. I think he looks cute!”)

An NBC broadcast at the time mentioned the Securities Exchange Act of 1934, section 10 of which prohibits “fraud, manipulation, or insider trading.” Mackey’s actions certainly seem to have been manipulative. As former SEC chair Harvey Pitt told NBC, “There’s nothing per se illegal about [Mackey’s actions], but it’s very clear to me, if you’re not willing to do it under your own name, you really have to ask why you’re doing it at all.”

Mackey later defended himself by saying, “I never intended any of those postings to be identified with me.” Well, obviously. Yet the truth came out, making Mackey look dishonest, untrustworthy, and morally suspect. Not only was it unethical, but it also tainted the Whole Foods brand. Translation: If your sister’s vintage boutique needs more customers, don’t write her a fake Yelp review.

  3. Similarly, don’t smear a brand or restaurant out of spite
  4. Realize the ramifications your review could have

Mackey also dissed Whole Foods’ then-competitor Wild Oats in 2005, writing this on a Yahoo stock message board as Rahodeb:

Would Whole Foods buy OATS [Wild Oats’ stock symbol]? Almost surely not at current prices. What would they gain? OATS locations are too small...[Wild Oats management] clearly doesn’t know what it is doing...OATS has no value and no future.

Only two years later, Whole Foods was buying Wild Oats for $565 million in a merger challenged by the FTC. Was Mackey trying to drive people away from Wild Oats so his company could buy it at a cheaper price? Only Mackey knows, but it doesn’t make him look very good.

On a personal level, a former friend of mine was bitter toward a former employer after she quit. “I’d write them a bad Yelp review, but they’d know it was me,” she said before giving me a meaningful look. “YOU should write one about how they’re so awful!” she said, only half joking, even though I’d never used the company’s services. (I didn’t.)

I can’t be the only one this has happened to. In fact, a look at any business’ filtered Yelp reviews – the ones you don’t see unless you go hunting for them and enter a Captcha (a challenge-response test used to determine whether the user is human) – shows that fake bad reviews (likely by competitors and their families) are all too common. If you’ve got beef with a company, personal or professional, don’t let it color your actual experience there. No inventing cockroaches to scare diners away from your rival café! Sure, write a negative review if you received legitimately poor service or products, but take it up with management if it’s a bigger issue.

At the risk of sounding overdramatic, these are real people here; real families’ livelihoods are at stake. In an earlier piece for the Center for Digital Ethics, Kristen Kuchar noted that the addition of a single star in a Yelp rating could potentially boost business by 5 to 9 percent, according to Harvard Business School. Should one irate customer have the power to get a waiter fired or put a bistro out of business? Your words can live on even if you delete your review, and you never know the impact they could have on someone down the road.

The consumer mindset means we think opening our wallets entitles us to the royal treatment, but remember that everyone has bad days. If at all possible, visit a business more than once to get a more fully rounded view of the experience there. If nothing else, it’ll make your review more helpful to others.

 

Holly Richmond

Holly Richmond is a Portland writer. Learn more at hollyrichmond.com.

 

Social Media Strategies for Complaining

 

It’s a universal experience: You need to contact a business to dispute a bill, to ask a question or to grumble about a faulty product or poor service. So you dial a 1-800 number and listen to a recorded message: press “1” for this, press “2” for that. You wait on hold and try to tune out the smooth jazz background music as minutes tick by. Finally, someone picks up. But you’re transferred, disconnected or told to leave a message. The rigmarole goes on.

It doesn’t always happen that way. But frustrated consumers remember when it does, and they choose to take faster, more public routes to reach businesses: They log on to social networks like Twitter and Facebook.

Cable guy doesn’t show up? Google the company’s Twitter handle or Facebook page and type away.

There are some wild success stories: One woman spent a year trying to get a charge—a whopping $1,126 for a 2-inch bandage—removed from a hospital bill over the phone, as reported in a feature by WBUR, Boston’s NPR station. Then she posted her ordeal on the hospital’s Facebook page. Voila! Her account was credited by the next morning.

Some industries, airlines in particular, have earned reputations for very fast digital customer service. American Airlines representatives replied to complaints on Twitter in an average of 12 minutes, sending more than 1,000 tweets on a typical day, according to a study by travel intelligence company Skift. Microsoft’s video game brand Xbox holds the “Most Responsive Brand on Twitter” Guinness World Record for answering thousands of questions a day in an average time of 2 minutes and 42 seconds.

An average response time across industries is closer to 11 hours, according to The Sprout Social Index, a December 2013 report that analyzed social media trends. Still, it’s faster than privately emailing customer service, as proven in a BBC experiment.

In addition to speed, accessing customer support digitally is convenient. You can send a quick 140-character Tweet from anywhere. It beats cradling a phone on your shoulder while you wait on hold, shushing everyone around you for fear of missing the moment a service agent finally picks up.

But the most alluring reason for lodging a complaint on Twitter or Facebook is that social media is public. Jaded consumers can argue that traditional approaches for accessing customer service are easy for companies to shove aside. One pesky customer means little in the long run. But a complaint posted online for the world to see can seriously hurt a brand’s public image.

Consider the famous “United Breaks Guitars” example. United Airlines baggage handlers damaged a musician’s expensive instrument. For months, customer service representatives refused his claims for compensation. So he recorded a music video about the experience and posted it on YouTube. It went viral, drawing 150,000 views within a day and 5 million in a month. The media picked up on the story and United Airlines had a public relations embarrassment on its hands.

Those without the talent or means to protest via music video need not worry. On Twitter, all of your followers will see your message, unless you choose to make the conversation private. And so will the company and anyone who searches its Twitter handle—an important point for those without many followers.

There are strategies to increase the message’s exposure: “Be sure to include the most shocking or interesting bit of information,” recommends social media expert Lauren Dugan in an article on website AllTwitter. To get more notice, she suggests tagging other accounts in the tweet, like “prominent journalists, consumer advocacy groups, [and] local politicians.”

These tactics sound aggressive, even threatening: Address our grievances quickly and generously, or else. On the one hand, many customers believe they have a right to good service and deserve fair treatment. When they can’t get it in-person or over the phone, it seems justifiable to turn to a sphere where companies will feel obligated to respond.

But will this digital strategy render the old ways obsolete? Will it become a permanent societal expectation: To have complaints heard and resolved, consumers must share them in a public space? Will people unwilling to sacrifice privacy—those uncomfortable broadcasting their medical bills on Facebook, for instance—be out of luck?

On the flip side, consumers now have an easy public venue to blow things out of proportion and to dishonestly represent a brand, product or business. How should a company appropriately refute a false claim that’s been posted on a social network? It can delete the post, ignore it, respectfully disprove it in a public or private message or, in extreme scenarios, seek legal action.

Despite the risks, there are definitely positives surrounding the digital customer service trend for companies. A business can use social media for customer support to boost its reputation: to show that it cares about its patrons and to prove that it can adapt to technological innovation. And it can send individuals and large audiences important messages fast. Salesforce outlines seven types of customer support tweets, including the “We’re really sorry,” “Bad stuff is happening” and “Here’s a quick fix” tweets.

Consumers should remember that their social media messages to companies don’t always have to be complaints. But let’s face reality: People are more inclined to contact customer service with a criticism than with praise. When that’s the case, consumers don’t have to phrase their sentiments with hostility.

Remember the old saying, “You catch more flies with honey than with vinegar.”

It’s important to note that many brands don’t respond to every complaint posted on social networks. In fact, 80 percent of consumer inquiries on social networks go unanswered by companies, according to The Sprout Social Index.

For many customers, getting a direct response is not necessarily the point. Rather, they want their friends and followers to see it. Often, they want to warn others to avoid a company or product, much like in a review on Amazon or Yelp, but just targeting people they know. Sometimes complainers just want to commiserate, to start a dialogue about shared experiences.

 

Who Are You Online?

 

Often a person’s online identity is created at the time of (or even before) their birth. Parents’ eagerness to share the good news leads to Facebook statuses that include the baby’s name, Pinterest boards dedicated to photos of a child, or Instagram photos hashtagged with a child’s nickname. This imprint on the Internet can be a permanent timeline, can be very difficult to undo, and can unintentionally risk a child’s privacy.

In a Parents.com article, Jo Webber, Ph.D., notes the permanent nature of publishing information on the Internet that could be shared by anyone.

"Parents should talk to their kids and get them to understand that once [privacy is] gone, it's gone. You don't know who's getting that information on the other end," Dr. Webber says.

This is equally true for parents. "Whatever you put out there on social media about your kids, think about whether you'd be happy if everybody had that information," Dr. Webber advises.

Kids and Facebook

Parents hold the keys to the Internet for their children. The introduction of web-connected devices, monitoring of social media apps and granting of permission for participating in social networking online are all foundations of a child’s digital life over which parents have control. Facebook requires that children reach 13 years of age before opening an account, but parents often allow kids to get Facebook accounts at an earlier age.

CNN reported that a 2011 Consumer Reports survey found 7.5 million people younger than 13 use Facebook; nearly a third of 11-year-olds and more than half of 12-year-olds do so with their parents’ knowledge.

"Whether we like it or not, millions of children are using Facebook, and since there doesn't seem to be a universally effective way to get them off the service, the best and safest strategy would be to provide younger children with a safe, secure and private experience that allows them to interact with verified friends and family members without having to lie about their age," Larry Magid writes at Forbes.com.

Facebook applied in 2012 and recently (June 2014) obtained a patent that would allow kids under 13 to join the site with parental permission. The Guardian reports that:

“The parent would first have to verify their own identity, followed by their relationship with the child, before allowing the creation of a child's account. Parents would then have parental control tools to restrict access to certain content, friends and third-party applications...Child accounts would also have strict privacy controls and permissions, allowing parents to approve certain actions.”

Internet Laws Protecting Children’s Privacy

There are several laws designed to protect the online identities of children. The Children’s Online Privacy Protection Act (COPPA) is a federal law that prohibits U.S. companies from collecting personal information from children under 13 years of age. An amendment, the Do Not Track Kids Act, is currently pending and would extend protections to teens between the ages of 13 and 15. The legislation also provides for an “eraser button” that would grant teens and their parents more control over online information gathering. Facebook’s new policy allowing kids under 13 years old to join the site would have to comply with COPPA before implementation.

Teens and Social Media

It’s no secret that teens participate in social media regularly, often bouncing between Facebook, Twitter, Instagram, Kik, Snapchat and other apps multiple times a day. While it’s usually adults who report social media burnout, is it possible that even teens are getting tired of scrolling through endless feeds?

The Electronic Privacy Information Center (epic.org) notes that teens actually desire more online privacy:

According to the report, only 11% of teens currently share "a lot about themselves online" - a 7% decrease from the same age group last year. By contrast, 17% of young adults aged 19 to 24 and 27% of adults aged 25 to 34 currently share "a lot about themselves online." The report also indicates that "about 18% of teens share content on social media at least once a day, including status updates, photos, pins, or articles, compared with 28% of 19 to 24-year-olds and 35% of 25 to 34-year-olds."

Effects of Social Media Profiles

Kids who think social media can’t impact their lives in meaningful ways are sadly mistaken; college acceptance and job security are often directly on the line. Forbes posted an article about what college admissions officers don’t like seeing on social media profiles, reporting a rise in Internet searches by those officers:

“While the percentage of admissions officers who took to Google (27%) and checked Facebook (26%) as part of the applicant review process increased slightly (20% for Google and 26% for Facebook in 2011) from last year, the percentage that said they discovered something that negatively impacted an applicant’s chances of getting into the school nearly tripled – from 12% last year to 35% this year. Offenses cited included essay plagiarism, vulgarities in blogs, alcohol consumption in photos, things that made them “wonder,” and “illegal activities.” In 2008, when Kaplan began tracking this trend, only one in 10 admissions officers reported checking applicants’ social networking pages.”

This CNN article lists ten real-life people who lost their jobs as a direct result of social media postings. What a person says or does in the real world can get him or her fired, and so can actions on social media. In the CNET article “Facebookers, beware: that silly update can cost you a job,” we learn:

“According to a new report, turning down young job candidates because of what they post on social media has become commonplace. The report, by On Device Research, states that 1 in 10 people between ages 16 and 34 have been turned down for a new job because of photos or comments on Facebook, Twitter, Pinterest, and other social networking sites.

‘If getting a job wasn't hard enough in this tough economic climate, young people are getting rejected from employment because of their social media profiles and they are not concerned about it,’ On Device Research's marketing manager Sarah Quinn said in a statement.”

Best Practices for Establishing an Online Presence

At socialmediatoday.com, consultant Joellyn Sargent lists a few tips to help people remember the importance and permanence of their online profiles:

- “People are watching (That creepy guy at the mall? Yep, he’s online and he can read your Twitter stream.)

- The Internet never forgets… people thinking about hiring you can pull up all those old messages you forgot about and WOW…won’t they be surprised?

- What about right now? Would you stand up in front of a million people today and do that sexy dance or act like an idiot or talk about how you drank too much when you weren’t old enough to drink at all?

- It’s not a secret… Maybe your mom and dad don’t know you are on Twitter. You went behind their back and created that account, so no one will ever know except the 1579 friends you’ve collected on Facebook (including the ones you’ve never met). How many of those people are who they say they are? You can be anyone you want to be online, right? Do you really know your “friends”?

- You need to be careful online. The new “street smart” is “social smarts.” There’s trouble online waiting for you if you’re careless. And you might not see it coming. Protect your privacy online. Be careful what you post. Think twice. Would you want your grandma to see that? Then it probably shouldn’t be online.”

 


Mary T McCarthy

Mary McCarthy is Senior Editor at SpliceToday.com and the creator of pajamasandcoffee.com. She has been a professional writer for over 20 years for newspapers, magazines, and the Internet. She teaches classes at The Writer’s Center in Bethesda, Maryland and guest-lectures at the University of Maryland’s Philip Merrill College of Journalism. Her first novel The Scarlet Letter Society debuted this year and her second novel releases in 2015.

 

Is Lulu ethical?

 

If you believe biology controls a portion of our thoughts and actions, then it makes sense that women are attracted to apps, websites and chat rooms that allow them to share their stories and commonalities. After all, my gender is known for its superb communication skills.

Women’s general desire to reveal their experiences is what makes an app, known as Lulu, such an interesting idea. According to its founders Alexandra Chong and Alison Schwartz, it is a piece of social media designed specifically for women. The app, for the most part, serves as a virtual space where women can share “girl talk,” or private conversations about the issues that interest them.

That is one description of Lulu. If you read sites such as The Daily Beast and BuzzFeed, and even established media companies such as Forbes, Lulu is something closer to demagoguery. These web-based voices decry its sexist format, which allows its female-only constituents to “rate” the men they know or have dated. These anonymous ratings are shallow attacks on the male gender, critics say, and only serve to further separate the sexes. Lulu, in other words, is just as crass as crass can be.

So why has Lulu gained so much attention? It’s all about numbers, pure and simple. It was among the first “ratings” apps, and it is the best known. In a little over a year since the creation of Lulu, the app has gained millions of users, including one in four U.S. college girls and more than one million guys, according to Lulu officials. Such a huge following makes Lulu somewhat like LinkedIn for professionals or Snapchat for teens – it is the go-to program of its kind.

The Lulu app is an ideal conversation starter in regards to what has happened to the Internet, online dating and love in the 21st century. An anonymous place for women to describe current or former lovers could be seen as the best use of the World Wide Web, providing a safe haven for females to talk openly about how they were treated in a relationship.

On the negative side, Lulu provides an ugly glimpse into the world of Internet trolls – those people who create controversy wherever their cursor roams. It shows only the dark side of human relationships, one might argue, catering to lust, anger and resentment instead of honest interaction. The postings on Lulu put men in the uncomfortable position of owning up to past behaviors, atoning for perceived wrongs alleged by any woman who decides to skewer them on the app or elsewhere.

Perhaps generations that saw the rise of birth control, the idolization of Bo Derek in the movie “10” and the bold conversations of sex therapist Dr. Ruth Westheimer are ready for such an app. Maybe rating one another on our cuddling abilities or sexual prowess is, indeed, appropriate for this day and age.

Yet in all of the criticism there seems to be a level of discomfort with Lulu’s basic structure. Even Millennials – the generation that not only feels entitled to see people’s personal thoughts and pictures on social media but demands to see them – are a bit wary of how bold and aggressive the Lulu rating system seems to be.

A Lulu spokeswoman notes that the app may raise eyebrows, but that it also has the best intentions toward its users and even the men who are rated. In 2014, Lulu added a component that allows men to “opt in” to the app or remove themselves from the site with no questions asked, giving them some control over the information shared there.

In its infancy, Lulu has focused on guys and relationships, “an incredibly important topic for women,” according to Deborah Singer, director of marketing and public relations for Lulu. The long-term goal, Singer noted, is to build Lulu into a larger platform where women could talk about a number of topics, perhaps starting with areas including beauty or health.

“Our founders realized that women control more than 80 percent of consumer spending and dominate social media, yet no one is building social products specifically for them,” Singer said. “They saw a huge opportunity to tap into the value of girl talk – the private conversations women have with each other.”

While the initial focus has been on guys and relationships, the basic app platform is flexible enough to not only give women room to share their thoughts and feelings, but to expand into other places as well, Singer said.

“Think of Lulu as a two-sided platform in which women are the privileged side. For girls, Lulu is a place 1) to share experiences through reviews and 2) to search reviews and get information from other women to make smarter decisions. For guys, Lulu is a place to learn what women want and get better,” Singer said.

That is part of the reason that Lulu added the option for men to sign up on the app as users and allow the women they know to rate and review them, Singer said. Interestingly, others have seen the potential in Lulu’s platform – and they’re the kind of companies that want access to the female consumer.

“This has been part of our vision from the beginning – it's the other side of the platform – and it's something that guys and brands have been asking for since we initially launched Lulu in February 2013,” Singer noted.

Those brands want rating sites where the women are thinking about the people, places and things they love – or even dislike so strongly that they can share their thoughts in a cohesive, intelligent and passionate way.

“Everyone wants to know what women think, and Lulu is one of the few places where women actually share what they think,” Singer said. “More than 52 percent of Lulu's female users create reviews to share their experiences, which is incredibly high compared to other social networks. Today, women share their experiences about guys, but in the future it could be products or services. In fact, we hear every week from Fortune 500 brands who want to partner with Lulu and find out what our audience thinks about their products.”

The desire to mix an app’s content with a brand’s reputation is why Lulu has staying power, Singer added. It is not a place where girls meet guys or the like. It is a new kind of space – one where users benefit just as much as the consumer-driven companies that are interested in partnering with Lulu.

“Stories about Lulu sometimes call us a dating app, which misses the point about Lulu's vision and what we're trying to achieve. We've started with dating and relationships because it’s an incredibly important topic for everyone, but we'll be expanding into all topics that women care about,” Singer said. “Lulu is all about unleashing the power of girl talk, which is very big and exciting. We're just at the beginning of our journey.”

That may be the app’s intention, but Lulu’s early reputation and controversial approach may not allow for such a transition to take place, argues Michael Bernacchi, a business and marketing professor at the University of Detroit-Mercy in Detroit.

Bernacchi has built his career on staying on top of current events and seeing trends even before the general public. His annual ratings of Super Bowl commercials receive a significant audience each year, and he is quoted frequently in local and national media. As such, he follows social media closely and sees how influential that medium has become.

Bernacchi said Lulu’s sensational start did exactly what its founders probably wanted – it made a big splash by being salacious and sexy. That kind of ribaldry will get you some eyes and “clicks” for the first few months (think of pop-music newcomer Katy Perry singing “I Kissed a Girl” to hype her first major record). Whether you can make it long term depends on how you make the transition and whether you’ve got the chops to handle it, he noted.

After all, first impressions tend to be the ones that last with consumers, Bernacchi said. To some extent, Lulu has already defined itself. The general public tends to have a long memory, and people remember just about everything about companies that start with scandal.

“That’s not to say that a re-cultivation or redefinition can’t be forthcoming and be successful. Having said that, that’s kind of a rough water to navigate,” Bernacchi added. “If I’m going to sell this app to somebody, I’d better know how they’re putting on their boots and how to deal with that. While it may be a very shallow look at this app, you can easily say on the other side, gosh, isn’t that how they have defined themselves? Isn’t that the perception that they’ve asked for?”

Bernacchi said he could see consumer companies and big brands sniffing around Lulu because of its size and household-name appeal. Whether they stick around after truly investigating the app, its users and the content on the site remains to be seen.

“It’s an interesting idea, creating a place where women could have discussions on a number of topics. A full-fledged discussion format could work,” Bernacchi said. “I’m sure there are a number of firms that want to buy into that to evaluate their products and services. But do they really know what that marketplace is about?”

Bernacchi noted that websites like Match.com and ChristianMingle.com handle dating situations with a little more subtlety. These dating sites go out of their way in television and other forms of advertising to discuss how they unite couples based on personality tests. The sites emphasize how they use analysis and other characteristics about who their users are to figure out who is right for you.

Lulu, whether the definition truly fits, is largely seen by its critics as shallow, using looks as a main determinant of people’s value. Acting in a shallow manner will turn off potential advertisers and business partners if the app stays true to its original form or evolves only slightly, Bernacchi said.

“It may be naïve of them to think they can change midstream,” Bernacchi added. “That movie with Bo Derek had a plot that few of us remember. All that the public knows is that it was about rating women by their looks. And that’s not an acceptable route for a movie or pretty much anything else to take these days. It’s one thing to have two or three guys talking together privately about how a woman looks. But it would be laughable for that movie to happen today. Our culture has moved in another direction.”

Any brand struggles to establish a strong identity, Bernacchi pointed out, regardless of whether it is an app, a retailer or a service company. For Lulu to shift from its original status as a “guy-rating app” may not be possible, he continued.

“One has to work very, very hard to be what you should be if that’s what they want to be. Maybe they saw the opportunity to become something else. That’s OK. But you better quickly get into this barge and turn it around. It’s not going to be easy to turn, I would suggest,” Bernacchi said.

“Reinvention may be a great idea,” Bernacchi added. “But in this marketplace, you get what you deserve. … It’s all about the brand, and they have to be very serious about what they want that brand to stand for, and I would suggest starting from ground one if they’re not too deep into it. If you don’t define yourself well, someone else will and you may not like it, especially if you’re trying to sell something.”

For the record, I signed up on Lulu to surf around the app and see how it worked as part of this essay. And here is what ruined it for me – when I used Facebook to share my personal information with it, one of the first profiles that popped up was of a guy named “Ryan” (I changed it to protect the innocent). Ryan is one of my Facebook friends – and I used to babysit him. Yep, I changed his diapers and everything. And I think of him like a little brother, so I’m pretty defensive when it comes to his reputation.

Seeing girls write cutesy “hashtag” comments about him and his sex life was a bit upsetting. I kind of wanted to stop reading and delete the app immediately from my operating system. Granted, I’m not single and I don’t look at him as a potential mate. But even a few guy friends of mine on the Lulu app got harsh reviews, and I wish I had looked away rather than found out how “#selfish” or “#clingy” they were. Would I come back to Lulu? Probably not. However, I’m also not the target demographic.

Lulu’s success isn’t in question; I think it has already succeeded to a large extent. Whether or not our society deems Lulu’s content credible is the real issue at hand.

 

Karen Dybis

Karen Dybis is a Detroit-based freelance writer who has blogged for Time magazine, worked the business desk for The Detroit News and jumped on breaking stories for publications including City’s Best, Corp! magazine and Agence France-Presse newswire.

The End of Anonymous Emails?

 

Anonymity on the Internet first became an issue in the late 1980s, when newsgroups were created to discuss sensitive or taboo topics (e.g., alt.sex.fetish). Valid email addresses were required for posting on most sites; only a few allowed fictitious email accounts to keep participants’ identities safe while they freely discussed controversial opinions. By the time the 1990s were in full swing, newsgroup servers that required valid email addresses for participation were seen as an easy target and were overrun by spammers. To thwart the spammers and protect the privacy of their participants, many online venues changed their rules to allow invalid emails and pseudonyms for posting.

Not surprisingly, the advent of nameless posting caused a huge online growth spurt, with a tremendous number of people embracing their newfound “anonymous” freedom of expression. While most used their new anonymity wisely and only to protect their own identities, some chose to use this incognito status to vilify or harass personal acquaintances, public figures and anyone else who came into their line of sight. Commonly known as “trolls,” these individuals continue to delight in creating an atmosphere of hate and discontent while hiding behind anonymous email addresses. Activities can range from infrequent off-color remarks to full-blown daily bullying with far-reaching and sometimes tragic consequences. Just this year, a young girl committed suicide after being cyberbullied by anonymous posters on the site www.ask.fm.

That being said, there are numerous reasons people choose to use anonymous email addresses. Some acquire false emails to foil spammers and Internet hackers, or simply to protect their identities when searching for potentially sensitive or embarrassing information. Other individuals desire the privacy an untraceable address confers for darker reasons. One anonymous email site proclaims that its service is the perfect way to catch a cheating spouse; find out if your friends are “real” friends; give warnings to people; play an email joke on your friends; and evade detection if your private email is banned by the recipient. Some of these sites do note that if you use their services for illegal activities (death threats, abuse, slander, etc.), they will publish your IP address. An Internet Protocol (IP) address is a numerical label assigned to every device participating in a computer network that uses the Internet Protocol for communication, and it allows anyone who has it to trace information back to the device from which it came. In addition, many anonymous email providers disclose that sending an email anonymously may open you up to a fraud or libel offense, even if you didn’t mean to commit one.
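To make the “numerical label” point concrete, here is a minimal sketch using Python’s standard ipaddress module; the addresses shown are ordinary illustrative examples, not tied to any email service:

```python
import ipaddress

# An IP address is just a number; the familiar dotted form is for humans.
addr = ipaddress.ip_address("192.168.1.10")
print(int(addr))         # the underlying 32-bit integer: 3232235786
print(addr.is_private)   # True: reserved for local networks, not publicly routable

# A publicly routable address, by contrast:
public = ipaddress.ip_address("8.8.8.8")
print(public.is_private)                              # False
print(public in ipaddress.ip_network("8.8.8.0/24"))   # True: membership test
```

Because every packet a device sends carries a source address like these, anyone with access to a server’s logs (or a court order compelling that access) can tie online activity back to the device or network that originated it.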

Law enforcement agencies generally encourage anonymous tipsters, but anyone who’s even a little Internet savvy knows that these days nothing online is truly anonymous. While an anonymous email to the average Joe might not be immediately traceable, any well-informed individual can create or purchase software that traces your IP address. And, of course, a court could force it to be divulged.

In 2009, the Manhattan Supreme Court saw a case involving a New York model, Liskula Cohen, who brought a libel suit against someone anonymously posting comments on a blog hosted by Google’s blogger.com. Some comments were defamatory in nature, and Ms. Cohen sought a court order to reveal the names of the individuals so she could pursue legal action. Ms. Cohen won when Google was forced to reveal the identity of the pernicious blogger, one Rosemary Port, a Fashion Institute of Technology student who called Ms. Cohen a “skank” and an “old hag” online. Not to be outdone, Ms. Port sued Google to the tune of $15 million, charging that Google “breached its fiduciary duty to protect her expectation of anonymity.” Ms. Cohen ended up dropping her defamation suit after Ms. Port was unmasked, but Ms. Port’s lawsuit fizzled as well, since Google was simply acting on a court order.

During the case, Port’s lawyer, Salvatore Strazzullo, claimed that there is an inherent right in the First Amendment to speak anonymously and that should apply to the Internet. He further insisted that blogs are a “modern day forum for conveying personal opinions, including invective and ranting, and shouldn’t be regarded as fact.”

In further response to the rulings in this case, Daniel Solove, a law professor at the George Washington University School of Law, observed that the rules for Internet anonymity are still being formed. In general, a plaintiff must meet certain guidelines before forcing someone to reveal their online identity, but as shown by the outcome in Cohen v. Port, any good lawyer can manipulate those guidelines.

Should this type of anonymity be allowed? The senior editor of Slate, Emily Bazelon, weighs in on the side of full disclosure, stating that, “We so err on the side of 'Oh, free speech, everywhere, everywhere, let people defame each other and not have any accountability for it.' And I think in free societies, that is generally a big mistake. And yes, you can make small exceptions for people who truly feel at risk, like victims of domestic violence are an example, but most of the time it is much healthier discourse when people have to own up to what they are saying.”

In contrast, the Electronic Frontier Foundation points out that anonymity is absolutely critical to plenty of online users: medical patients seeking advice on coping with potentially embarrassing conditions such as sexually transmitted infections or irritable bowel syndrome, for example. The site also claims anonymity is important to businesses that want genuine feedback from customers; people seeking divorce advice; LGBT youth who need help coming out to their parents; individuals seeking mental health support; and job seekers who don’t want their searches to compromise their current employment, among others.

It follows that disallowing anonymous email addresses grants spammers and hackers greater access to your online persona and files. Typically, spammers crawl the web for the @ sign to harvest email addresses. If your email address has been posted online, chances are a spammer will find it eventually. Spammers also use a technique known as “social engineering” to trick your friends and co-workers into giving them access to their computers. Social engineering usually comes in the form of an email from a known entity with an attachment that, once opened, downloads malware or spyware onto the computer to retrieve their contacts. Besides using programs that “guess” email addresses, spammers also “harvest” addresses by trading or buying lists from other spammers and by using harvesting bots to spider web pages, mailing list archives and other public forums. The latter is made especially easy when someone forwards a joke or other humorous email and recipients forward it on to their contact lists instead of using “BCC” (blind carbon copy); the cycle of forwarding leaves all of the forwarded addresses visible to anyone receiving the message.
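That harvesting step is simple enough to sketch. The following toy script is illustrative only (the sample page and the regular expression are invented, not any real spammer's tool), but it shows how little it takes to pull addresses out of raw page text, and why some people deliberately obfuscate their addresses when posting publicly:

```python
import re

# Naive pattern of the kind a harvesting bot might use: anything that
# looks like local-part@domain.tld in the raw text of a page.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest(page_text):
    """Return the unique addresses found in a page, in order of appearance."""
    seen, found = set(), []
    for addr in EMAIL_RE.findall(page_text):
        if addr not in seen:
            seen.add(addr)
            found.append(addr)
    return found

sample_page = """
<p>Contact: alice@example.com or our sales team
(sales@example.org). Obfuscated: bob [at] example.net</p>
"""

print(harvest(sample_page))  # prints ['alice@example.com', 'sales@example.org']
```

Because the naive pattern keys on the @ sign, the obfuscated "bob [at] example.net" slips through, which is exactly why some people spell out "at" when they must post an address in a public forum.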

To protect yourself from spammers, you can install anti-spam software; set up spam filters on your email; avoid opening or responding to suspicious emails; and even create a tough-to-guess email address by using underscores, special characters and numbers. However, the best protection by far is to set up a disposable (anonymous) email address for use when signing up for online services and newsletters, posting in chat rooms, or posting on blogs. After activating the spam filter on your anonymous account for double protection against spammers, you can simply forward mail coming through it to your primary email account.

These days, people concerned about email privacy issues also use tools like the Tor Project, Tails, FreeNet, HotSpotShield and more to batten down the hatches on their individual Internet privacy. All of these projects offer more secure channels for the privacy-minded, although now it seems like any attempt at achieving Internet privacy may turn Big Brother’s eye your way. For example, Tor was originally developed by the U.S. Navy to assist advocates of democracy in authoritarian states. Since the project has gone mainstream, it has been monitored by another Department of Defense agency, the NSA.

Unfortunately, just clicking the links in the paragraph above might mark you as a target for surveillance by the NSA. A recent article in Computerworld exposes the truth behind the NSA tracking program XKeyscore, which is being used to track individuals who show an interest in Internet privacy services. According to German sources, “the XKeyscore rules reveal that the NSA tracks all connections to a server that hosts part of an anonymous email service at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in Cambridge, Massachusetts.” XKeyscore also records details about visits to Linux Journal, a popular publication for users of the Linux operating system that bills itself as “the Original Magazine of the Linux Community,” and flags it as an “extremist forum.” Linux is a computer operating system based on free and open-source software development and distribution: the underlying source code may be used, modified, and distributed, commercially or non-commercially, by anyone, and as of last year 95 percent of the 500 fastest supercomputers in the world ran on it.

Popular security pundit Bruce Schneier further notes that XKeyscore is able to do its surveillance in real time, lending chilling credence to former CIA system administrator Edward Snowden’s statement about being able to spy on any American in real time from his desk.

So what’s the future of anonymous emailing? Certainly, anonymity in the hands of the unscrupulous can be disruptive, hate-producing, and even fatal. Anonymity can threaten our nation’s security and provide a safe haven for evildoers of all types. Yet, the Supreme Court has repeatedly upheld the right to anonymous free speech as protected by the First Amendment. An oft-repeated quote from the 1995 Supreme Court ruling in McIntyre v. Ohio Elections Commission reads: “Allowing dissenters to shield their identities frees them to express critical minority views…Anonymity is a shield from the tyranny of the majority…It thus exemplifies the purpose behind the Bill of Rights and the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society.”

Taking away our power to shield our online existences from intrusion by unprincipled individuals or questionable agencies hampers our ability to protect our most valuable asset: our identities. Although rules should be in place to prevent and punish outright cyberbullying and other disruptive or harmful practices, I believe our right to safeguard our personal information from the schemes of nefarious online lurkers, governmental or otherwise, should be upheld.

 

Nikki Williams

Nikki Williams is a bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com.

The Ethics of High-Frequency Stock Trading

 

In the world of business and finance, every enterprise seeks a competitive edge to use against its rivals in the quest for profits.

Not all competitive advantages are legal, however. A hedge fund portfolio manager who profitably trades equities on insider knowledge, for example, has broken the law: insider knowledge is a competitive advantage, but using it to trade is illegal.

What if that same hedge fund trader uses a computer for trading that is smarter and spectacularly faster than others? Computers with these capabilities can automatically execute lightning-fast, profitable stock trades in a process called high-frequency trading (HFT).

Mid-size and small hedge funds, financial institutions, brokerage firms, individual investors and other traders and speculators without access to these expensive, super-fast computers are at a serious disadvantage when they trade shares on a stock exchange where high-frequency trading is regularly conducted.

High-frequency stock trading gives the trader an advantage: knowledge of the minuscule difference, or spread, between a stock's bid and ask prices. When a stock can be bought for perhaps one cent less than the prevailing bid, the computer automatically executes trades in massive volume within milliseconds, reselling the shares for as little as a penny-per-share profit.

Although a mere penny may be involved, the profits can be immense. When trades are made all day, all week throughout the year, the profits add up.

The high-speed computers use algorithms to make the automatic trades. An algorithm is a set of instructions programmed into a computer that makes the computer perform certain actions when specific predetermined circumstances occur.

The computer monitors various stock markets, and when a disparity or spread in the bid-ask price of a stock is "seen," the computer instantly executes a trade, buying the stock at the low price and selling it at the high price. The trade is executed at nearly the speed of light, far faster than a mere human being could do it on the trading floor of an exchange.
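The core of such an algorithm can be sketched in a few lines. This is a purely illustrative toy: the venue names and quotes are invented, prices are in whole cents to avoid rounding error, and a real system adds colocated hardware, direct exchange feeds and risk checks far beyond this.

```python
# Toy sketch of spread arbitrage: if one venue's ask is below another
# venue's bid, buy at the first and sell at the second. Quotes are
# (bid, ask) in cents; all numbers are invented.

def find_arbitrage(quotes):
    """Return (buy_venue, sell_venue, profit_in_cents_per_share) or None."""
    best = None
    for buy_venue, (_, ask) in quotes.items():
        for sell_venue, (bid, _) in quotes.items():
            if sell_venue == buy_venue:
                continue
            profit = bid - ask  # sell at their bid, having bought at the ask
            if profit > 0 and (best is None or profit > best[2]):
                best = (buy_venue, sell_venue, profit)
    return best

quotes = {
    "NYSE":   (1000, 1001),  # $10.00 bid, $10.01 ask
    "Nasdaq": (1002, 1003),
    "BATS":   (1000, 1002),
}

print(find_arbitrage(quotes))  # prints ('NYSE', 'Nasdaq', 1)
```

Multiplied across millions of shares and thousands of trades a day, that single cent is where the profits described above come from; speed matters because the opportunity vanishes the instant anyone else trades on it.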

According to estimates, the share of trading volume on U.S. exchanges executed by high-frequency computers ranges from 50 percent to 73 percent. Market research firm IBISWorld has estimated that high-frequency trading is a $29 billion industry.

Major stock exchanges, such as the New York Stock Exchange, Nasdaq and BATS, on which high-frequency trading takes place, are among the influential proponents of the practice.

High-frequency trading is not illegal, but is it ethical? There are persuasive arguments on both sides of this issue, recently brought to widespread public knowledge by Michael Lewis’s book, Flash Boys: A Wall Street Revolt, and a U.S. Senate hearing on the issue.

Opponents of high-frequency trading, including Lewis, claim that it puts smaller investors at a major disadvantage. Recognizing that the market is rigged against them, the smaller investor loses trust in the market.

Thus, trading volume on exchanges may decline and cause price volatility to increase and liquidity to decrease. Stocks that frequently bounce up and down in price are difficult to sell and their value cannot be accurately determined.

In Flash Boys, Lewis argues that this rigging gives HFT a corrosive influence on the functioning of markets as a whole.

According to Lewis and other market experts, high-frequency trading is similar to a stock trading procedure called "front running," which is prohibited by the Securities and Exchange Commission.

Front running is defined as entering into an equity trade with advance knowledge of a block transaction that will influence the price of the underlying security, in order to capitalize on the trade. Traders are prohibited from acting on nonpublic information to trade ahead of customers lacking that knowledge. The definition of front running closely parallels the definition of insider trading.

Another area in which detractors of high-frequency trading say it's used to the alleged disadvantage of retail investors is so-called "dark pools."

Dark pools are trading venues in which stocks are bought and sold off the traditional exchanges through high-frequency computers, concealing the trades of participants from the public. Trading anonymously, without the obligation to report trades publicly, gives such traders a major advantage.

Large volume traders who trade anonymously do not have to worry about smaller traders buying the same stocks and therefore driving up the price.

Among large companies allegedly using or having used dark pools, according to published reports (MoneyMorning.com and others), are The Goldman Sachs Group, Morgan Stanley and the U.K.'s Barclays bank.

Recently, the attorney general of New York filed a lawsuit against Barclays, alleging that the company lied about how high-frequency traders were given preferential treatment in the company’s trading unit. The SEC is currently conducting an investigation of dark pools to determine if the institutions that use them treat all investors fairly and accurately disclose details of their operations.

Although there is no conclusive proof, some market experts suspect high-frequency trading is the reason for the temporary Dow Jones industrial average nosedive of 600 points on May 6, 2010. Dubbed "the flash crash," the sudden drastic decline in the Dow was short-lived and the market returned to its pre-crash levels minutes after it fell.

Advocates of high frequency trading say it provides the necessary liquidity essential to the efficient and reliable functioning of stock markets. Without assured liquidity – the ability of traders and investors to sell their holdings to buyers – markets lose credibility and trading volume declines.

With liquidity comes market efficiency and lower trading costs, say HFT proponents. HFT also narrows the spread between bid and ask prices, and when buy or sell orders of large blocks of stock are divided into smaller trades, it can dampen the impact those orders would otherwise have on share prices, avoiding large movements either up or down caused by large-volume trades.

Statistical data shows that HFT produces profitable trades, perhaps the strongest argument in its favor, at least from the perspective of those who profit from it.

The Senate's Permanent Subcommittee on Investigations recently held hearings on high-frequency trading, its possible conflicts of interest in U.S. stock markets, and its effect on investor confidence.

Among those giving testimony were officials of stock exchanges, brokerage firms, stock market experts and institutional investors. After hearing the witnesses, the subcommittee will determine whether legislation or regulation limiting high-frequency trading, or outlawing it altogether, may be necessary. And so once more Wall Street practices come under government scrutiny, in what seems like an unending conflict between the two.

 

Marc Davis

Marc Davis is a veteran journalist and published novelist. His reporting and writing have been published in numerous print and online publications including AOL, The Chicago Tribune, Forbes Online Media, The Journal of the American Bar Association, and many others. His latest novel, Bottom Line, was published in 2013.

Research, Privacy and Facebook

 

"I observed a mature and initially poised businessman enter the laboratory smiling and confident. Within 20 minutes he was reduced to a twitching, stuttering wreck who was rapidly approaching a point of nervous collapse. He constantly pulled on his earlobe and twisted his hands. At one point he pushed his fist into his forehead and muttered, 'Oh, God, let's stop it.'"

An account of torture? Not quite. The quotation here is Stanley Milgram's disturbingly triumphal description of his famous 1963 experiment on the nature of evil. Milgram, a Yale psychologist, was trying to test theories that suggested atrocities like the Holocaust were possible because people tend to obey the commands of authority figures, even if those commands are evil. Milgram had lab-coated assistants order research participants to administer what they thought were dangerous electric shocks. In fact the shocks were fake; the recipients were actors, who pretended to be distressed. Milgram was startled to discover that the participants, told they were participating in important research, would turn the "shocks" up and up until the actors screamed in distress.

Milgram proposed that his experiments, which showed people were willing to inflict pain when ordered to do so, demonstrated how the Nazis had convinced people to commit atrocities. He felt he had discovered a moral truth, but in doing so he himself arguably violated ethical practices. He had lied to his research subjects in order to put them in a situation where they experienced, on his own account, severe emotional distress, even pushing them to the point of "nervous collapse." As a result of Milgram's experiment and ones like it, ethical standards for experiments have been considerably tightened; institutional review boards (IRBs) have to sign off on proposals, and will reject experiments that seem likely to traumatize participants or that use undue deception. Milgram's experiment would not pass such review, and could not be conducted today—or so the scientific community likes to think, anyway.

Today, social media has created new ways for researchers to follow in Milgram's ethically dubious footsteps. Recently, it was revealed that Facebook had manipulated the News Feed of some of its users — the News Feed being the main scroll of updates and messages that you see when you log into the site. The Atlantic’s Robinson Meyer describes the situation:

For one week in January 2012, data scientists skewed what almost 700,000 Facebook users saw when they logged into its service. Some people were shown content with a preponderance of happy and positive words; some were shown content analyzed as sadder than average. And when the week was over, these manipulated users were more likely to post either especially positive or negative words themselves.
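The mechanics of that manipulation can be illustrated with a toy word-count filter. This is only a sketch of the general idea, not the study's actual method (the researchers used the LIWC word-counting software); the word lists and posts below are invented.

```python
# Toy sketch of word-count feed skewing: score each post by counting
# "positive" and "negative" words, then show a user only the posts on
# one side of neutral.

POSITIVE = {"happy", "great", "love", "fun"}
NEGATIVE = {"sad", "awful", "lonely", "angry"}

def sentiment(post):
    """Positive-word count minus negative-word count."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def skewed_feed(posts, mood):
    """mood=+1 keeps positive-leaning posts; mood=-1 keeps negative-leaning ones."""
    return [p for p in posts if sentiment(p) * mood > 0]

posts = [
    "what a great day, love this weather",
    "feeling sad and lonely tonight",
    "meeting notes attached",
]

print(skewed_feed(posts, mood=+1))  # prints ['what a great day, love this weather']
```

Even a filter this crude changes what a reader sees; the ethical questions at issue here concern exactly this kind of silent curation, performed at the scale of hundreds of thousands of users.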

Meyer says the experiment (published here) is legal given Facebook's terms of service, which give the company broad rights to utilize user data for research purposes. However, the experiment prompted a major backlash, as users (predictably) objected to Facebook surreptitiously manipulating their emotional states in the name of science. Ilana Gershon, a social media researcher and professor of anthropology at Indiana University, told me that Facebook users were right to be angry:

"I think Facebook scientists did do something ethically wrong by experimenting on its users with the intention of producing scientific results without having any oversight on their experiment and without ever giving the users the option to opt out of the experiment. And I also think it is significant that to this day, the users studied still don't know if they were part of this experiment or not."

Social media expert and researcher danah boyd expands this critique, arguing that the problem is not merely the manipulation of data for research purposes, but the fact that Facebook routinely manipulates data without user consent, input, or knowledge. Boyd explains:

Facebook actively alters the content you see. Most people focus on the practice of marketing, but most of what Facebook’s algorithms do involve curating content to provide you with what they think you want to see. Facebook algorithmically determines which of your friends’ posts you see. They don’t do this for marketing reasons. They do this because they want you to want to come back to the site day after day. They want you to be happy. They don’t want you to be overwhelmed. Their everyday algorithms are meant to manipulate your emotions. What factors go into this? We don’t know.

Basically, Facebook is always experimenting on you, trying to keep you happy and interested by filtering the information you see without ever telling you that they're filtering the information. Boyd queries, "what would it mean that you’re more likely to see announcements from your friends when they are celebrating a new child or a fun night on the town, but less likely to see their posts when they’re offering depressive missives or angsting over a relationship in shambles?" Should sad or unhappy things be kept from you because they're likely to upset you? Who gets to decide that? An algorithm?

Gershon commented, "I find it fascinating that this experiment, the substance of it, is to treat people as manipulable machines – that the experiment suggests that people can figure out an algorithmic way to manipulate others' emotions." The problem is not just Facebook's research methods, but the vision of human nature behind those research methods. Facebook's business model, and the business model of social media in general, views people not as human beings, but as inputs to be exploited and controlled.

There is a clear parallel between Facebook's manipulative business model and Milgram's experiments. In the name of researching how people can be manipulated, Milgram used and exploited people, plugging them into his thesis, and carefully recording their emotional distress. He did conduct follow-up interviews to make sure that there was no long-term psychological damage or distress, and similarly it seems unlikely that the Facebook experiment would result in long-term damage. But even if there is no easily quantifiable harm, there remains a question of whether it's acceptable to treat people as things or lab rats, whether in the name of research into great moral truths or in the name of marketing.

The promise of social media is that it will connect us to each other, and make it possible to share more closely with folks we love and care about. The threat, it seems, is that users of social media turn into a consumable content mill for everyone else; millions of people-engines grinding out insight, uplift, outrage and demographic data at the synchronized command of algorithmic monetization. Facebook's experiment suggests that it sees users as something to be used — which is unsettling, even if they are using us to inflict pleasure on each other instead of pain.

 

Noah Berlatsky

Noah Berlatsky edits the comics and culture website the Hooded Utilitarian and is a correspondent for the Atlantic. He is working on a book about the original Wonder Woman comics.

 

The New, Improved Digital Watchlist: Herald or Bad Omen?

 

The rapid evolution of data-gathering technology has spurred debate on many fronts. The question of whether the age of ‘Big Data’ also means the end of privacy as we know it has been raised time and time again since the revelations made by Edward Snowden about the National Security Agency. But it’s a question worth asking, repeatedly if need be, in order to parse out the answers. Our capacity to gather and interpret data is increasing at an astounding rate. As rapidly as technology has evolved in the past two decades, one must wonder whether our ethical systems have the ability to adapt commensurately. Indeed, the issue of privacy has been first and foremost. But another, perhaps more foreboding, debate involves the meaning of words. The Guardian recently reported that the U.S. government can now brand you a terrorist based on a single Facebook or Twitter post. What is “reasonable suspicion,” and how can we come to a consensus on such a term? How do we define probable cause in an age where increasingly complex streams of information are readily available?

Through a network called ICREACH, the NSA supplies information on private communications to 23 government agencies. ICREACH is a search engine that operates much like Google, though it was created specifically for the intelligence community; over 1,000 analysts have access to the database. It tracks known and suspected terrorists, both foreign and domestic. The FBI shares its Terrorist Screening Database, the master list, with at least 22 foreign governments, in addition to numerous federal agencies, state and local authorities, and private contractors. The biometric data of targets, such as fingerprints and DNA, is routinely gathered, along with receipts, bank statements, business cards, and health information.

The NSA has a long, convoluted history of espionage. During the Vietnam War, the NSA routinely spied on opponents of the conflict, including Muhammad Ali, Dr. Martin Luther King Jr., and numerous U.S. journalists. The agency has also been known to conduct economic espionage. But the practice of data gathering on such a massive scale is a game changer on many levels. It has generated much more specific profiles of the people targeted. Moreover, the sprawl of these programs, in addition to the sheer amount of data gathered, is unprecedented in world history. Never before has a network of U.S. government agencies had access to such a plethora of information about its own citizens. ICREACH is built to share upwards of 850 billion communications, including phone calls, cellphone locations, and emails. ICREACH also has the capability to assign over 30 types of metadata to these communications: for instance, the time and date an email was sent, who it was from, and to whom the message was sent. However, the program explicitly cannot record the content of the message.
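The metadata/content distinction can be made concrete with a small sketch. This assumes nothing about the NSA's actual systems; it simply shows, with an invented message and Python's standard email module, how sender, recipient and date can be read while the body is never touched:

```python
from email import message_from_string

# An invented email. The headers are the "metadata"; everything after
# the blank line is the "content."
raw = """\
From: alice@example.com
To: bob@example.org
Date: Mon, 01 Sep 2014 09:30:00 -0400
Subject: lunch?

Want to grab lunch at noon?
"""

msg = message_from_string(raw)

# Fields of the kind ICREACH is described as indexing: who, to whom,
# and when -- but not the body text.
metadata = {
    "from": msg["From"],
    "to": msg["To"],
    "date": msg["Date"],
}

print(metadata)
```

Even without the body, fields like these, accumulated across 850 billion records, reveal who talks to whom and when, which is why metadata collection remains contested even though message content is excluded.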

In recent disclosures, documents supplied by Snowden revealed that through a program called X-KEYSCORE, the NSA now has the ability to track “nearly everything a typical user does on the internet.” The program feeds information to analysts through a vast database containing the browser histories, chat logs, and emails of millions. Under the Fourth Amendment of the United States Constitution, it is unlawful for a U.S. government entity to perform search and seizure against citizens without warrant or probable cause. So do programs like ICREACH and X-KEYSCORE violate the Fourth Amendment? Opinions in the courts have differed. For instance, U.S. District Court Judge Richard J. Leon, of Washington, D.C., ruled that collecting phone metadata most likely constitutes unlawful search and seizure. However, William H. Pauley III, U.S. District Court Judge for the Southern District of New York, ruled that the collection of such information does not explicitly violate the Fourth Amendment.

While advances in technology and data-gathering methods have made it easier for the government to profile targets, critics have scrutinized the potential for corruption and incompetence in this process. Of the 680,000 individuals on the master list, about 40 percent are reported to have no known affiliation with any terrorist group, yet they are still branded as suspected terrorists. The mere fact that one can be placed on a watchlist based on a single Twitter post seems to indicate that the standards are somewhat arbitrary. This is a disturbing insight, given the gravity of being placed on a terrorist watchlist. The consequences can range from minor inconveniences and decreased employment prospects to more serious ones, such as being separated from friends and family members for months at a time.

Air Force veteran Saddiq Long was barred from flying to the United States from Qatar to visit his mother, after having been placed on the FBI’s no-fly list. Finally, after a period of months, he was taken off the list and given permission to fly back home. However, he was subsequently placed on another list and barred from flying from Oklahoma City back to his home in Qatar. Long had to travel to Mexico in order to make his return flight. Mistaken identity is a common recurring issue. Yahya Wehelie went to Yemen to study Arabic in 2008. He had previously been charged with reckless driving and marijuana possession; worried that he was headed down a dangerous path, his parents wished for him to study overseas. While in Yemen, he studied computer science at Lebanese International University for about two years and got married. In 2010, when Wehelie tried to return home, he was restricted from flying to the United States, having been placed on a no-fly list due to his casual affiliation with Sharif Mobley, a suspected member of al-Qaeda. Even when Wehelie offered to fly home in handcuffs, accompanied by U.S. Air Marshals, the offer was rebuffed, and he was stranded in Cairo. Wehelie was questioned multiple times, all the while insisting on his innocence. After two long months, he was finally allowed to fly home. His name, however, was not taken off the no-fly list; Wehelie was merely given a waiver.

Critics of the NSA argue that there is no due process involved in this effort to determine potential threats. There is no legal recourse for getting one's name off a watchlist. In the case of Wehelie, it was a matter of proving he posed no security threat—an impossible task in light of the circumstances—and moreover, the very notion runs against the foundations of our justice system, which dictates that one be presumed innocent until proven guilty. Arjun Sethi, columnist for The Guardian and legislative counsel for the American Civil Liberties Union, criticized the “vague criteria” by which both men were placed on terrorist watchlists. There is no clear process for who gets added to and removed from terrorist watchlists; presumably, it is subject to the whims of government agents. Furthermore, Sethi argues that a watchlist “saturated with innocent people” draws efforts and attention away from real threats.

While these cases provide an unsettling context for the government’s push to enhance data-gathering tools, NSA officials have offered a number of reasons why the methods used to gather information for programs like ICREACH and X-KEYSCORE are completely legal. Much of the information obtained by the NSA is considered public. For instance, a status update entered on Facebook is considered public, provided that the user’s settings are public. Some of the data obtained by the NSA becomes public because it is transmitted through unsecured channels. And while some of the information is technically private, a certain reading of specified provisions in the code of law technically allows for it to be obtained. For instance, when information is transmitted to a foreign service provider, such a transmittal provides the appropriate context for obtaining said information. Another justification commonly used by the NSA is that it is not immediately viewing the data it gathers; rather, it is merely storing it. The data is only accessed if proper legal context merits an investigation.

Are the NSA’s seemingly extreme efforts necessary in a dangerous world, one that requires the existence of such programs in order to sustain our nation’s security? Supporters of data-gathering programs emphasize the government’s increasing need to keep pace with the ever-changing threats of our time. It’s abundantly obvious that enemies present themselves on digital fronts now more than ever. Terrorist groups commonly use social media tools to spread their message, and that very fact necessitates the monitoring of social media on some level. One might argue that the employment of high-tech data-gathering tools serves the greater good of society: the ability to predict large-scale global events, for instance, is increasingly becoming a possibility, and the claim is that many events currently outside the spectrum of government knowledge could be prevented, or at least met with enhanced preparation.

Interestingly, it appears as though government officials are wary of labeling anyone suspicious based solely on social media posts—at the very least, wary enough to invest time and effort into figuring out a way to make their software more precise. In June, the U.S. Secret Service put out an open request for software to assist in monitoring social media channels. Among the capabilities requested was an algorithm that detects sarcasm in Facebook and Twitter posts. Tools with the ability to recognize sarcasm may work towards a more rational, thoughtful process for recognizing threats. It’s important to note, though, that the development of this software builds towards a larger purpose: to provide the capability to monitor sites like Twitter in real time, the ultimate goal being the automation of this process. Moreover, in an age where such data-gathering processes are increasingly automated, are we taking the humanity out of what should be an essentially human process? After all, we’re talking about a person’s life.

The original title of this piece was “How a Facebook Post Can Land You on a Terrorist Watchlist,” but of course the problem with that title is that it isn’t quite clear how one gets placed on a terrorist watchlist. Soon we may be living in a world where an algorithm can detect suspicious content through social media activity, a world where programs routinely track and monitor individuals throughout their lives, analyzing every Facebook comment, every status update. What is it all for? Is it up to the mere discretion of NSA agents and analysts to figure out what constitutes ‘reasonable suspicion’ in this slippery digital landscape? Currently, there is exactly zero in the way of transparency. The ongoing debate is paradoxically both a means and an end to the issue at hand. There is no endpoint in this process. With the constant and rapid evolution of technological capabilities, standards must constantly be re-evaluated, both for the sake of our liberty and our security. But the government must develop the faculties to handle such rapid evolution in an orderly manner—a tall order for a Congress with both dismal approval ratings and a mind-numbingly unproductive legislative record to match. One thing is clear: the need for reform is critical. The NSA justifies its data-gathering efforts through broad interpretations of laws that are perhaps not equipped to serve the public well in the digital age.

 

David Stockdale

David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch.  Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at dstock3@gmail.com, and his URL is http://davidstockdale.tumblr.com/.

The Cruelty of Looking

 

Editor’s note: In the wake of the scandal surrounding the hacking of nude celebrity photos, the CDEP will feature a series of essays analyzing the actions of the various actors involved. Noah Berlatsky starts off this mini-series by considering the ethical responsibility of those who search for and view the hacked photographs. Later installments will discuss the culpability of the cloud services on which the images were stored as well as the responsibility of the celebrities themselves to keep these types of photos secure. 
Bastiaan Vanacker

Stealing property is illegal and unethical — breaking into someone's house and stealing their iPod is wrong. And buying an iPod that you know is stolen is also clearly unethical, and illegal. But what about simply looking at a stolen iPod online? Does looking, by itself, have moral implications?

Swap "iPod" in the preceding paragraph for "hacked nude photos," and the issue becomes murkier, and more pointed. At the end of last August, the notorious internet forum 4chan published around 200 pictures of nude celebrities, including Jill Scott, Kate Upton and (receiving the most publicity) Jennifer Lawrence. The images were quickly posted to a Reddit message board as well.

Again, there's no question that the theft of these images is a crime; the perpetrators, if caught, could face federal charges including wire fraud and violations of the Electronic Communications Privacy Act. People who searched for and looked at the photos, though, generally won't face legal repercussions. But does that mean their actions are morally acceptable?

It's worth noting that there is one case in which viewers of the nude pictures could face prosecution. One of the celebrities whose photos were stolen was a minor at the time the photos were taken; that means the original hackers, and anyone who spread the photos of the celebrity as a minor, could be prosecuted for distribution of child pornography. Moreover, according to the Justice Department: "Federal law prohibits the production, distribution, reception, and possession of an image of child pornography." Simply looking at child pornography is not outlawed, but "possession" is — which means if someone downloaded that image of the nude underage celebrity to their computer, that person could face federal charges.

According to the DOJ, the rationale for outlawing child pornography is, first, that images of child pornography memorialize the sexual abuse of children and provide an incentive to perpetuate such abuse by creating a demand for depictions of it. Looking at the images thus actually encourages the sexual abuse of children. That rationale doesn't apply in the case of the celebrity nudes, where no one was harmed in the actual production of the photos. However, the DOJ also says that:

Victims of child pornography suffer not just from the sexual abuse inflicted upon them to produce child pornography, but also from knowing that their images can be traded and viewed by others worldwide. Once an image is on the Internet, it is irretrievable and can continue to circulate forever. The permanent record of a child's sexual abuse can alter his or her life forever. Many victims of child pornography suffer from feelings of helplessness, fear, humiliation, and lack of control given that their images are available for others to view in perpetuity.

Child pornography is immoral and illegal in part because it involves ongoing violation and humiliation. The circulation of the child pornography also constitutes abuse.

That's an argument that does seem to apply to the distribution, and even to the viewing, of these stolen celebrity nudes. As with child pornography, the abuse suffered by the celebrities is tied to the fact that the pictures are circulated. The celebrities took these photos for private use; they didn't want them to be public. It seems quite likely that having their images stolen and sent around the Internet could also lead to "feelings of helplessness, fear, humiliation, and lack of control." The distribution of the images compounds the crime, and is part of the crime, which means that looking at the pictures is itself part of the perpetration of the crime as well.

Jessica Valenti at the Atlantic raises the disturbing possibility that for some, the fact that the celebrities in the photos will be humiliated is part of the appeal. “There’s a reason why the public tends to revel in hacked or stolen nude pictures," she says. "It’s because they were taken without consent. Because the women in them (and it’s almost always women who are humiliated this way) did not want those shots to be shared." Looking at the pictures becomes a way to participate in the violation and humiliation of the women pictured.

Playboy writer Sara Benincasa reports that many people expressed sympathy for the people whose images were stolen, and were angry on their behalf. Others, though, seemed to enjoy the humiliation and the violation, and were eager to compound it. At worst, she says, some people boasted about masturbating to the images. Others argued that the celebrities were at fault for taking the pictures in the first place, or that celebrities like Kate Upton or Jennifer Lawrence, who often pose in bikinis or provocative photos, have already sexualized and objectified themselves and therefore cannot reasonably complain about only slightly more revealing photos.

Blaming the victims in this way doesn't just excuse the viewers of the photos; it casts them as righteous punishers. The celebrities have sinned, and the viewer gets to chastise them by gazing upon the evidence of their iniquity. The logic seems all but openly sadistic; the enjoyment stems not only from violating the celebrities, but from doing so in the righteous conviction that the humiliation is well deserved. In that sense, the photos can be equated with revenge porn, in which nude pictures of women are posted online (often by ex-boyfriends) with the express desire to exact retribution through humiliation and embarrassment. For some viewers, then, viewing the photos seems to be a way to punish celebrities for being celebrities, for having sex lives out of the public view, or simply for being women. As Bekah Wells, a victim of revenge porn, says:

"When someone shifts the blame to me, do you know what I say? I say, congratulations, because that's exactly what the perpetrator wants you to think. He wants you to think I am a dumb whore who makes poor decisions."

Free speech laws make revenge porn difficult to prosecute. Child pornography, on the other hand, has been vigorously prosecuted. The serious social concern about harm to children presents a strong argument for trumping free speech and criminalizing a relatively passive act like mere possession, though it can also result in situations where parents are criminalized for taking pictures of their kids. That problem would be compounded if we applied the same logic to the case at hand: we don't want to end up de facto criminalizing sexting between adults. But while legal recourse against viewers of the stolen nude photos may be impossible, and even undesirable in terms of both police resources and civil liberties, it seems clear that viewing or not viewing in this case is a moral choice. If you look at these photos, you're choosing to participate in someone's humiliation and violation without their consent because doing so gives you pleasure. In this case, the act of looking is unethical, and cruel as well.

 

Noah Berlatsky

Noah Berlatsky edits the comics and culture website the Hooded Utilitarian and is a correspondent for the Atlantic. He is working on a book about the original Wonder Woman comics.

The Naked Truth: Cloud Ethics and Personal Responsibility

 

Editor’s note: In the wake of the scandal surrounding the hacking of nude celebrity photos, the CDEP will feature a series of essays analyzing the actions of the various actors involved. Last week, Noah Berlatsky considered the ethical responsibility of those who search for and view the hacked photographs. This week Nikki Williams argues that leak victims misjudged the expectation of privacy that exists in the cloud.

Bastiaan Vanacker

The recent bounty of naked pictures hacked from the accounts of high-profile public figures has celebrities and the general public outraged. In a scandal tagged as one of the largest ever of its kind, the latest celebrity hacking fiasco has seen the likes of Kate Upton, Jennifer Lawrence, Ariana Grande, and Kirsten Dunst plastered across social media in little or no clothing. Just days after the photos were publicly circulated, Apple CEO Tim Cook responded by revealing that Apple would beef up its iCloud security measures by adding optional two-factor authentication for iCloud accounts. Users who choose this option will receive a passcode by SMS on their devices after they enter their username and password, and that passcode will be required before access to their iCloud account is granted. In addition, Simply Secure, a new organization backed by Google and Dropbox, promised to work hard to develop technology that will tighten security measures across platforms and devices, but that hasn't done much to lessen the ire of the celebrities involved. Recently, a spokesperson for Jennifer Lawrence stated: “This is a flagrant violation of privacy. The authorities have been contacted and will prosecute anyone who posts the stolen photos of Jennifer Lawrence.”
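
In outline, SMS-based two-factor authentication of the kind Apple announced works by having the server generate a short-lived one-time code, text it to the user's phone, and accept it only within a narrow window. A minimal sketch of the server-side logic in Python; the six-digit code length, five-minute expiry, and function names are illustrative assumptions, not details of Apple's system:

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # assumed: the code expires after five minutes

def issue_code() -> tuple[str, float]:
    """Generate a 6-digit one-time passcode and record when it was issued."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    return code, time.time()

def verify_code(submitted: str, issued_code: str, issued_at: float) -> bool:
    """Accept the code only if it matches and has not expired."""
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(submitted, issued_code)

code, issued_at = issue_code()   # in a real deployment, the code is sent by SMS here
assert verify_code(code, code, issued_at)  # the user types it back within the window
```

A real deployment would also cap the number of attempts per code, since a six-digit code is guessable by brute force if retries are unlimited.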

As Ms. Lawrence's spokesperson implied, even though 4chan, the site where the compromising pictures first appeared, updated its copyrighted-materials policy in response to the event, anyone who lifted those pictures from the site while they were live still has copies and could still post them far and wide. Forever. Moreover, 4chan's “fix” is relatively ineffective: since material on 4chan is only live for a few hours to a few days, the original post may already be deleted by the time someone notices the content and starts the process of having it removed.

So what's a celebrity victim to do? Calling someone a “victim” implies that they've been hurt (mentally, physically, economically, or otherwise) by a destructive or injurious action. But it just doesn't seem reasonable to assign blame elsewhere when that injurious action was made possible by the subject's own negligence.

I don't think anyone would disagree that we should be able to rely on some type of privacy protection for our personal online accounts. However, one has to wonder at the wisdom of putting at risk the kinds of files and photographs that could cause such far-reaching damage. After all, “the cloud” is nothing more than a very large, shared space. In fact, the internet as a whole is an area that can presumably be accessed by anyone with the intelligence or perseverance to break through protections composed of code. What people fail to consider is that the internet is terra nova when it comes to privacy laws. Other bastions of privacy, such as banks, brokerages, even the trusty safe found in many homes, have been around for ages. The establishments that run or design them have decades of experience with scurrilous thievery that has helped them perfect anti-theft measures, and even these are not completely reliable.

Farhad Manjoo, a writer for the New York Times, seems to have given a great deal of thought to the question of online security. He writes, “What should smartphone makers do about nude selfies? Should they encourage us all to point our phones away from our unclothed bodies — or should they instead decide that naked selfies are inevitable, and add features to their products that reduce the chance that these photos could get hacked?” Lest you think he's just plucking ideas out of thin air, Manjoo suggests your phone should warn you when you take an indelicate photo. As an example, he suggests: “It looks like you've taken a sensitive photo. Would you like to encrypt it and require an extra password to view it?”
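
Manjoo's "extra password" idea amounts to deriving an encryption key from a passphrase instead of storing the passphrase itself. A minimal sketch of that key-derivation step in Python; the salt size, iteration count, and function names are my own illustrative choices, and a real photo vault would feed the derived key into an authenticated cipher such as AES-GCM rather than stopping here:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # assumed work factor; higher is slower but harder to brute-force

def protect(password: str) -> tuple[bytes, bytes]:
    """Derive a key from the extra password; the app stores only the salt and key."""
    salt = os.urandom(16)  # random salt so identical passwords yield different keys
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def unlock(password: str, salt: bytes, key: bytes) -> bool:
    """Re-derive the key from the submitted password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)
```

The point of the slow, salted derivation is that even if a hacker copies the stored salt and key, guessing the password back out of them is expensive.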

I am not sure I like the idea of Apple, or any other large technology provider, monitoring my behavior and I hope Mr. Manjoo doesn’t think that we need technology companies to step in and protect us from ourselves. Let’s face it, people who take nude photos of themselves are usually not intending to keep them completely private. While I am sure that Ms. Lawrence and Ms. Dunst would have preferred to choose the distribution list for these uncensored pixels, I think that anyone who potentially would be upset by the hacking of such pictures would do well to store them somewhere other than what is, essentially, a shared area.

If you sext, take naked selfies, or leave your credit card lying on a counter at the airport, there is a pretty decent chance that consequences will follow. Yet one Forbes writer feels that thinking ahead to probable outcomes is impractical, and equates asking adults to preserve their privacy by not taking nude selfies with the right-wing Christian practice of counseling abstinence. She claims that abstaining from taking and storing risqué photos is not a practical way to keep people from potential harm, and that prophylaxis is the appropriate cure.

I agree to some extent. I don't have a personal problem with people taking nude selfies or having unprotected sex; it isn't my business. And prevention really is helpful: installing software that encrypts your photos or bolsters your firewall is akin to using condoms to prevent an STD or birth control to prevent pregnancy. However, just as the best condoms and birth control are not 100% effective, computer security measures shouldn't be considered completely trustworthy. So if you truly care about keeping your naked self off social media, you simply won't take those pictures at all. If you are okay with the gamble, then you should be willing to accept the consequences, unfortunate as they may be.

Does that mean I think Kate Upton deserves having her intimate pictures made public? Nope. I think it’s a shame and I hope that Mr. Meneses and others that helped retrieve or spread the photos are prosecuted to the fullest extent of the law. I also hope that Apple and other large technology entities will use this opportunity to develop more secure interfaces to protect our personal information in this new, more invasive age. However, I also think that we need to consider what we store online. Since I am at the age where I am more at risk for a lawsuit for retinal burn should a nude selfie of me surface online, I don’t indulge. But I do protect my online presence as best I can and I am sensible enough to understand that there is no such thing as being fully hack-proof. For this same reason, I don’t bank online, although I do use my credit cards with impunity. Even so, I am mindful that any and all of the sites that have encrypted my credit card information may be hacked at any time, so I keep a close watch on my card balances and understand fully the consequences of the gamble I am taking.

Apparently, considering consequences is a debatable notion. Throughout this scandal the celebrities involved have seen an outpouring of support from their peers and fan base. They've also seen a fair number of individuals express astonishment at the imprudent practice of storing nude selfies in the cloud. Although the latter opinion is a common theme in the public comment threads of articles on the issue, columnists like Nick Bilton of the New York Times and celebrities such as Ricky Gervais were much maligned for giving voice to it. One article classified people who want nude-selfie takers to claim responsibility as “victim-shamers.” The author suggests that implying the celebrities involved should have been more careful is equivalent to suggesting that a woman wearing a revealing outfit is inviting danger. What's more, she claims that while the leaking of the photos was horrible, asserting that the people involved acted irresponsibly is worse. She notes, “The response to these pictures is terrifying. It is a perfect, encapsulated reminder that your body can be used as a weapon against you. That slut-shaming is so prevalent and accepted in this culture that you could lose your job, or your boyfriend, or your credibility if a photo you once took was stolen from you — and then you will be the one blamed for it.”

While the topic of personal responsibility versus relying on cloud security is being hotly debated online, the fact remains that every security system has a weak point. Clever hackers often spend a lifetime figuring out a way through the “latest and greatest” defenses. So it doesn’t matter whether or not you feel that it’s “fair” for you to have to put an effort into protecting your own privacy; if someone wants what you have badly enough, they’ll find a way to get it. That leaves you with only one surefire defense: not having anything up for grabs.

 

Nikki Williams

Nikki Williams is a bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com.

 

Can We Blame the Cloud?

 

Editor’s note: In the wake of the scandal surrounding the hacking of nude celebrity photos, the CDEP features a series of essays analyzing the actions of the various actors involved. Two weeks ago, Noah Berlatsky considered the ethical responsibility of those who search for and view the hacked photographs. Last week, Nikki Williams argued that leak victims misjudged the expectation of privacy that exists in the cloud. This week, Mary McCarthy addresses the role of Apple in this scandal.

Bastiaan Vanacker

Starting August 31, 2014, the celebrity nude photo scandal dominated news headlines after a post on 4chan leaked a large cache of private pictures of celebrities. The photos were quickly shared on social media sites, most notably Twitter. In one of the early reports on the scandal, Gawker reported, “Posters on 4chan and Reddit claimed that the celebrities were hacked through their iCloud accounts, though that hasn't been verified, and the method is unclear.” Details were sketchy when the story first broke. Although the photos were removed rather quickly after threats of lawsuits, the viral cycle of social media sharing had already done the damage and the photos were spread widely on the Internet.

Initially, “the Cloud” was blamed for the invasion of Hollywood privacy; the finger-pointing at Apple was a knee-jerk Orwellian response, an Eggers-esque convenience for the masses who clearly don't understand the technology involved. In fact, no one was as quick to blame the hackers as they were to blame the devices. But is Apple really responsible for the damage caused by the hackings? PC Magazine reported that security experts theorized the celebrities were hacked while accessing an open Wi-Fi network at the Emmy Awards, making their usernames and passwords more vulnerable to attack.

Apple, for its part, didn't waste time responding. In a statement three days after the photos were published, the company said:

“We wanted to provide an update to our investigation into the theft of photos of certain celebrities. When we learned of the theft, we were outraged and immediately mobilized Apple’s engineers to discover the source… we have discovered that certain celebrity accounts were compromised by a very targeted attack on user names, passwords and security questions, a practice that has become all too common on the Internet. None of the cases we have investigated has resulted from any breach in any of Apple’s systems including iCloud or Find my iPhone. We are continuing to work with law enforcement to help identify the criminals involved.”

But the scandal came at a bad time for Apple. With the new iPhone 6 scheduled for a splashy release only a week later, Apple couldn't take the chance that customers would cancel pre-orders for expensive new devices. Just a day before the September 9, 2014 product release announcement, and only ten days after the celebrity photo scandal, Apple added a layer of iCloud security to its devices.

According to Macrumors, Apple had begun sending out email alerts whenever personal iCloud services were accessed from an Internet browser, so users could be notified when someone unauthorized tried to access their accounts. It seemed clear that Apple was scrambling to shore up not only the security of its iCloud services but also the public's confidence in them.

In another public relations effort resulting from the celebrity photo hacks, Apple CEO Tim Cook gave an interview, published in the Wall Street Journal on September 5, 2014, to tout the new security measures. He was clearly in defense mode in agreeing to the interview, since Apple typically releases all its own news. Walking the line between blaming celebrities and ensuring Apple wasn't blamed, Cook made it clear that the breach resulted from hackers obtaining user IDs and passwords, not from any security failure on the part of Apple and its servers.

Although Apple apparently wasn't directly to blame for the leaks, it responded by strengthening, and even adding a layer of, security for its customers. While Cook noted that Apple added Touch ID fingerprint technology when the iPhone 5S came out, he emphasized that the company continues to improve security, requiring multiple layers of sign-in authorization. Cook stressed that the key issue the company planned to address revolved more around personal or “human” measures than purely technological ones. He admitted the company should be held accountable for making information available to customers so they are aware of how easy it can be for hackers to attack accounts that are not properly protected by excellent passwords.

“When I step back from this terrible scenario that happened and say what more could we have done, I think about the awareness piece,” he said. “I think we have a responsibility to ratchet that up. That's not really an engineering thing.”

Apple emphasized to a number of media outlets that the company continues to work with law enforcement agencies to identify the hackers and their methods for obtaining the private data. A new round of naked celebrity photos was released on September 21, 2014 in what had evolved into an ongoing FBI investigation after the first breach. There has been little news reported regarding the discovery of the hackers who initiated the large-scale hacking, with no major follow-up articles on case progress or results, and there doesn’t seem to be a clear end to the release of additional photos.

Apple is correct in acknowledging that it has a responsibility not just to protect celebrity nudes, but to distribute information so that all customers understand the importance of password protection. That may be the bright side of the hackings: public recognition that “the Cloud” needs to do more to protect privacy. The public has a right to know that the devices on which it spends hundreds and sometimes thousands of dollars in a given year are safe from savvy digital predators. Children use these devices. Adults should be able to use them however they wish without fear of privacy invasion. It's reminiscent of the Spider-Man philosophy: “With great power comes great responsibility.” Apple can and should do everything in its power to ensure the privacy of its loyal customers.

 

Mary T McCarthy

Mary McCarthy is Senior Editor at SpliceToday.com and the creator of pajamasandcoffee.com. She has been a professional writer for over 20 years for newspapers, magazines, and the Internet. She teaches classes at The Writer’s Center in Bethesda, Maryland and guest-lectures at the University of Maryland’s Philip Merrill College of Journalism. Her first novel The Scarlet Letter Society debuted this year and her second novel releases in 2015.

Progress or Problem: The Complex Ethics of mHealth

 

Many nations are currently facing the question of how to provide affordable, efficient access to health care. Low- and middle-income countries (LMICs), with their many rural and hard-to-reach communities, are at the forefront of concern for world health officials, as they continue to suffer from a host of life-threatening conditions. In fact, although the World Bank notes that the number of deaths of children under five worldwide fell from 12 million in 1990 to 6.6 million in 2012, the numbers remain excessively high in LMICs.

Some of the most prevalent conditions are preventable with education, decent standards of living, and a sturdy health system. While standards of living and health care access have seen only minimal improvements in LMICs over the last decades, mobile phone coverage in these areas has grown exponentially, rising to nearly equal the population count in many countries. It makes sense that a health technology that is able to capitalize on widespread connectivity has proliferated quickly. In a short time, it has gained support from health care professionals and institutions concerned with reducing global poverty, as well as the public at large.

This rapidly emerging technology is known as mHealth (mobile health), and it comprises solutions not just for developing countries, but also for highly developed countries like the United States that are beginning to struggle with the economic and logistical issues of providing equitable health care for all citizens. mHealth technologies include a variety of tools, from software (“apps”) to wearable sensors and portable electronic devices. Although varied in nature, they share a common purpose: to assist with health intervention, management, and treatment through a caregiver's or patient's personal mobile device.

A look at the stakeholders in the mHealth process is essential in order to answer the ethical questions raised by this technology. To begin with, developers of medical software have a vested interest in keeping abreast of advances that streamline processes and lower costs. mHealth's strongest selling point is its innate ability to systemize and simplify through on-the-go patient services. However, mHealth app creators are not covered by the Health Insurance Portability and Accountability Act (HIPAA) and lack incentive to provide robust security for patient information. When Kevin Johnson, a self-described "ethical hacker" and CEO of the network security consulting firm Secure Ideas, reviewed a patient care record app, he discovered a fatal flaw: since nothing was encrypted, it was simple for Johnson to get a close look at all personal information for any patient using the app. Upon notification, the app's creator refused to correct the issue, on the grounds that it wasn't required to be HIPAA compliant.

Johnson points out that other apps residing on your personal device can “steal” information from your health app unless permission blocks are encoded in the app's design. Some very commonly installed apps, such as several versions of “Flashlight,” can access information resident on your device. They can modify or delete the contents of your USB storage, receive data from the internet, take pictures and videos, test access to protected storage, and change system settings. Furthermore, health apps that use text messages to remind patients to take medication can expose patient data to anyone within sight of the mobile device when the text comes through, leaving patients at risk of discrimination. Until software developers have an incentive to provide integrated and exhaustive security, your personal health information may be at risk.

Pharmaceutical companies are another sector with a clear incentive to get in on the ground floor of this emerging interface. A white paper published by Booz & Company noted that digital technology can increase patient motivation and commitment through a patient engagement platform (PEP), and that “the right application of digital technology can also create new opportunities for pharma companies to revamp their business model.” One of the PEPs outlined in the paper includes items such as a wireless blood pressure monitor, passive and active tracking devices, recommendation engines, electronic health records, and clinical decision tools.

However, there is a potential for conflicting interests when a medical professional uses an app developed by the pharmaceutical company to determine the best plan of treatment for a patient. We mustn't forget that the drug company's primary goal is to sell its drugs. In addition, some companies, like Pfizer and Sanofi-Aventis, have had to issue app recalls after algorithms in the app resulted in incorrect markers for disease activity. Unfortunately, the recall only removes the app from app stores to prevent new downloads—it doesn’t remove the app from devices on which it is currently resident. Pfizer did send doctors letters advising against further use of the app, but there is no way to tell how many doctors received and acted upon the notification.

These failures carry a potential for injurious impact on the individual because of the possibility of incorrect treatment. These harms can include the administration of unneeded medications or performance of non-essential procedures as well as mental anguish due to the emotional trauma of a faulty result. Failing FDA intervention and subsequent regulation, medical professionals should not allow diagnoses through health apps to outweigh or override other options. In all cases, a failsafe in the form of alternative testing and traditional patient interviews should be used to prevent errors.

Health insurers are also major players in the mHealth community. Kaiser Permanente has over 9 million members, making it one of the largest not-for-profit managed care and health care providers in the United States. In an effort to realize the benefits of mHealth technology, Kaiser recently created free apps for Android and iOS that connect members to a mobile-optimized website, so they can access their personal health information from any location at any time. In 2011, Kaiser's mHealth apps provided online results for more than 68 million lab tests. The apps also allow patients to access their diagnostic information, email their doctors, schedule appointments, and refill prescriptions 24 hours a day, seven days a week. To help patients feel safe, Kaiser has implemented stringent security measures, including user authentication, automatic log-out after a period of inactivity, and secure internet connections. It offers further protection by keeping personal health information on Kaiser Permanente's secure servers rather than on mobile devices. When a patient logs into her account, however, a home screen shows allergies, a health summary, immunizations, ongoing health conditions, and past visit information at one quick glance. One glance can certainly be revealing, and potentially damaging, in the short time before an inactivity sensor would log the user off.
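
Automatic log-out of the kind Kaiser describes is, at bottom, an inactivity timer: each user action refreshes a timestamp, and any request arriving after the quiet period expires invalidates the session. A minimal sketch in Python, with the ten-minute limit an assumed value rather than Kaiser's actual setting:

```python
import time

INACTIVITY_LIMIT_SECONDS = 600  # assumed ten-minute limit

class Session:
    """Tracks the last user action and expires after a quiet period."""

    def __init__(self) -> None:
        self.last_activity = time.monotonic()

    def touch(self) -> None:
        """Call on every user action to keep the session alive."""
        self.last_activity = time.monotonic()

    def is_active(self) -> bool:
        """True while the inactivity window has not yet elapsed."""
        return time.monotonic() - self.last_activity < INACTIVITY_LIMIT_SECONDS
```

The server checks `is_active()` on every request and forces a fresh login once it returns false, which is what limits the window in which a glance at an unattended screen, or a stolen device, can expose the home-screen summary.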

With outpatient care comprising more than 40 percent of hospital revenue, it should come as no surprise that medical care providers are a pivotal piece of the mHealth puzzle. mHealth devices can relieve the patient overload in medical offices by taking over some of the simpler office procedures: blood pressure checks, heart monitoring, weight management, lifestyle tracking, and even glucose monitoring. They also allow doctors to “go mobile” so they can more effectively provide care in outpatient or roaming clinics and hard-to-reach rural areas. When it comes to mobile patient information, however, security is only as good as the person using the device. A study published in the Journal of Medical Internet Research showed some troubling tendencies among doctors who used mobile devices. Of the 98 percent of study respondents who owned mobile phones, 86 percent used their personal devices for patient-related communications during clinical rotations, and one-third had no security feature in place. Sixty-eight percent agreed that there was a privacy risk when information was shared between colleagues via personal device, and yet 22 percent of 96 respondents still used their devices to transmit patient data. Naturally, using mobile devices to track health care information can put patients' information at risk, especially if the device is also a personal device or is shared among physicians and clinics.

Beyond security issues, doctors and clinicians using mHealth devices face difficulties stemming from the medium itself. Mobile device screens can be small, making details of x-rays and other tests difficult to discern with accuracy. Reliability is also a concern: spotty internet connectivity, weak signal strength, and data lost to device crashes can all put doctors in the field at a disadvantage.

The final, and most important, component of the mHealth arena is the patient. mHealth can improve patients’ motivation to comply with a physician’s prescribed protocol through reminders, rapid feedback, and connection to an in-network social group. In LMICs, mHealth can bring health and hygiene education, on-the-spot assistance and coaching for birthing situations, and access to complex diagnostic tools. mHealth has also been praised for giving hospital patients privacy and time to rest while reporting their vitals seamlessly to clinicians in another room, and for allowing chronically ill patients the luxury of undergoing their treatment regimen at home rather than in a hospital. While these features seem enticing, it is important to note that, in the case of device failure, a patient with no physical monitoring may be at greater risk for adverse events.

While some patients feel empowered by the idea of mHealth, others see greater risk in sharing their personal data in an online environment. A study reported in the Journal of Medical Internet Research indicated that, in a sample group of low-education, English-speaking consumers, younger patients liked the idea of patient portals that would put current medical information, lab results, appointment scheduling, and health proxy abilities at their fingertips. Older adults were more concerned about privacy and less satisfied with receiving lab results from a patient portal.

mHealth has been touted as a panacea for crowded doctors’ offices and the time-stretched physician’s tendency to rush patients through an appointment. But perhaps instead of adding value to the health care scenario, mHealth spreads the available providers’ time even thinner. Is it ethical for a doctor to create a care plan for a patient he may never have met? Surely, in the case of LMICs, some care is preferable to none at all. In the developed world, however, mHealth may be like water over the already eroded rock of physician accountability, and without oversight protocols in place, it may put further strain on doctor-patient relationships.

As use of mHealth grows, national governments, medical and health care organizations, patients, professional organizations, and consumer groups will need to carefully monitor all aspects of the system to ensure privacy is maintained and patients’ rights are preserved. Fail-safes and testing stipulations should be included to decrease the likelihood of failure due to “buggy” apps, faulty design, and user error. More importantly, strict oversight will help prevent a boon for LMICs from becoming the bane of world health providers and consumers alike.

 

Nikki Williams

Bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

The Most Ethical Way to Stream Music: Not At All

 

Fifteen years ago, one word changed music forever. All together now: Napster.

It was the summer of 1999. Teenagers Shawn Fanning and Sean Parker, along with Shawn’s uncle John Fanning, launched the peer-to-peer file-sharing service so people could swap mp3s. Napster wasn’t the first P2P service, but it was the first music-focused one.

Napster was the first tidal wave to erode the concept of paying for music. Suddenly, you didn’t need to buy an actual CD anymore but could download the songs you wanted. For free. Is it stealing if what you’re taking is intangible?

“Heck yes,” said Metallica, among the first to sue Napster when the band’s unreleased songs leaked to a radio station. Or as Metallica drummer Lars Ulrich said, “If we are going to sell our music on the internet, in whatever way we so choose, we cannot do that if the guy next door is giving it away for free.” By the time Napster went bankrupt in 2002, the damage to the music industry was irrevocable. “The ease of being able to obtain music for free instantly has diminished its value,” wrote Pitchfork Review editor Jessica Hopper in a recent Napster retrospective.

There will always be those who defend Napster and getting music for free (or what the RIAA considers copyright infringement). “If you’ve bought an album on vinyl, why should you have to pay for it in digital format?” goes one argument. “Most music royalties only fill the pockets of record executives, not the actual artists,” goes another. The latter isn’t factually wrong – musicians see only about 10 cents of every dollar from an iTunes purchase, according to The New York Times.

Still other proponents of free music say that services like Napster give under-the-radar indie bands greater exposure, which could theoretically help them tour and make money off of merchandise. After all, writers like Richard Menta credited Napster with helping Radiohead’s album Kid A debut at No. 1 on the Billboard charts when it was released.

But many of us believe musicians should be compensated for their music. Fairly compensated. In the digital age, what does that mean?

Music services like Spotify, Pandora, Rdio, and Deezer license music from artists and pay them per listen. Is one better than the others? Do any of them pay musicians what they used to earn? (No.) If you love music, want musicians to be paid, and want your money to go to a company employing a fair and ethical business model, which streaming service should you pick? It’s a tricky question, because it’s hard to track exactly how much artists get.

“Instead of buying your music, you pay a subscription fee that is in some way filtered down from Spotify to record label to artist, based on some opaque algorithm of pay-per-play, which is based on some opaque deal struck between the label and Spotify, and then the label's opaque individual contract with each of its artists,” wrote Paul Miller on The Verge.

Other factors include fluctuations in the streaming site’s revenue and whether there are other people in the band to split royalties with. So this is hardly the definitive guide to ethical music streaming, but hopefully it can give you more to think about. The short version is if you like it, buy it; don’t stream it. More on that below.

The Best: Nokia, Google Play, and Xbox Music

Some hipsters may find it hard to believe, but Zune (now Xbox Music), the wildly unpopular iTunes rip-off from Microsoft, might be one of the most ethical choices for streaming music, at least by payment figures.

David Lowery, a guitarist and songwriter formerly of the bands Camper Van Beethoven and Cracker, compared the numbers on The Trichordist, his website devoted to protecting the rights of artists on the internet. According to his calculations in February 2014, Xbox Music was in the top three, paying artists about three cents ($0.03) per stream. Google Play is even better, at $0.0457 per stream. The best is Nokia, at just over seven cents ($0.07) a stream.

As The Trichordist explains, music-streaming platforms without a free version generally pay musicians more because they have more revenue. Although supporting Microsoft, Nokia, and Google Play may give you pause – they’re a far cry from mom-and-pop record stores – they might actually get more pennies to your favorite band. The key word is pennies.

Pretty Bad: Spotify and Pandora

Launched in 2008 and brought to the U.S. in July 2011, Spotify streams music with a “freemium” model. If the ads interrupting your music every few songs annoy you enough, you can pony up $10 per month for a premium membership. Each song a premium user plays gets the artist about half a cent, as of 2013. But if you’re a free user, each song you stream can give as little as one-twentieth of a cent ($0.0005) to the musician. Theoretically, the more users who go premium, the more money artists can make. But until then, musicians are making almost nothing.

Consider cellist Zoë Keating, who caught the attention of The New York Times: “After her songs had been played more than 1.5 million times on Pandora over six months, she earned $1,652.74. On Spotify, 131,000 plays last year netted just $547.71, or an average of 0.42 cent a play.” Scott Timberg of Salon wrote this summer that Keating earns six times as much from iTunes song sales as she does from her music streaming.
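The per-play averages quoted above are straightforward division; here is a quick sanity check on the article’s figures (Python used purely for illustration):

```python
# Per-play payout = total earnings / number of plays (figures from the article).
pandora_per_play = 1652.74 / 1_500_000  # six months of Pandora plays
spotify_per_play = 547.71 / 131_000     # a year of Spotify plays

print(f"Pandora: {pandora_per_play * 100:.2f} cents per play")  # ~0.11 cents
print(f"Spotify: {spotify_per_play * 100:.2f} cents per play")  # ~0.42 cents
```

At roughly a tenth of a cent per Pandora play, the 0.42-cent Spotify figure the Times quotes is, remarkably, the better of the two rates.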

Spotify especially hurts jazz and classical musicians, who are often on indie labels without favorable Spotify deals and are unlikely to see the massive play counts of pop stars. Jazz pianist Jason Moran likens Spotify to the Walmart of the music industry, driving smaller parties into extinction. Yulun Wang, who co-founded the jazz record label Pi Recordings, opted out of Spotify entirely. “What we found when we got out of Spotify...was that our sales went up; they absolutely jumped,” he told Salon in July.

But if you’re looking for an alternative to Spotify, don’t choose Pandora. Not only does Pandora not let you hear the exact song you want (you get something similar; it’s more like internet radio), but the company supported the Internet Radio Fairness Act, a decidedly unfair bill that The Trichordist says has “nothing to do with internet radio or fairness and everything to do with screwing artists out of 85 percent of their royalties.”

If you aren’t convinced Pandora’s all that bad, one of Lowery’s headlines says it all: “My Song Got Played On Pandora 1 Million Times and All I Got Was $16.89.”

Bottom Line

Unfortunately, for now it looks like “streaming music ethically” is an oxymoron. As Gryffin Media points out, listeners would have to stream an artist’s songs 36,866 times a month for him or her just to afford ramen noodles. David Lowery therefore argues that we need to start looking at the human cost of all the free stuff we can find online.
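Combining Gryffin Media’s 36,866-streams-a-month figure with the per-stream rates quoted earlier makes the scale of the problem concrete. This is a rough sketch: the rates are the article’s, and real payouts vary by label contract and revenue splits.

```python
# Per-stream rates quoted earlier in the article, in dollars per stream.
rates = {
    "Nokia": 0.07,
    "Google Play": 0.0457,
    "Xbox Music": 0.03,
    "Spotify (premium play)": 0.005,
    "Spotify (free play)": 0.0005,
}

streams_per_month = 36_866  # Gryffin Media's "ramen noodle" threshold

for service, rate in rates.items():
    print(f"{service}: ${streams_per_month * rate:,.2f} per month")
```

Even at Spotify’s premium rate, that stream count yields under $200 a month – and far less if those plays come from free users.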

Idealists might look to Lyynks Music, a startup that “hopes to replace unethical and extremely artist-unfriendly problems associated with Spotify.” Launched in late 2013, the site says musicians can set their own prices for song streams, but it doesn’t appear that major musicians are actually on there. Today, perhaps faced with the difficulty of taking down Spotify, Lyynks seems more focused on helping up-and-coming artists find an audience. Lyynks doesn’t say how much it pays artists either, stating only vaguely: “The share given to the artist exceeds the amount we have seen in other content distribution models.”

As a listener, I’m left wishing one streaming service would rise to the top and differentiate itself with ethical practices. (Nokia is a somewhat underwhelming winner for artist payment.) As Paul Miller wrote on The Verge, “What I really want is some sort of ‘free range’ sticker slapped on my music consumption, so that I know the artist was ethically treated in this transaction.” For now, it looks like that free-range sticker is looking beyond streaming services. Sure, use them to discover new music, but then buy music directly from artists whenever possible and support them on tour. It’s not a fun, flashy answer, but there it is.

 

Holly Richmond

Holly Richmond is a Portland writer. Learn more at hollyrichmond.com.

Social Media Scanning: The Next Generation Privacy Threat?

 

Advertisers have always tried to get into the minds of the masses in order to sell more products. Demographic studies, focus groups, phone surveys, and observation were the primary tools prior to the advent of internet technology. But in the late 1990s, the boom of online shopping on the World Wide Web changed the landscape of marketing psychology by giving advertisers a new ability to track consumer habits through page visits, search fields, and online comments. In the past, consumers would agree to be tracked in exchange for a prize or discounts on shopping. Today, some retailers still offer loyalty card discounts to give their marketing folks an in-depth look at your shopping habits. But that’s a far cry from the way the newest web-reliant data collection technologies can peer into your personal habits to build a profile.

Although consumer information discovery has always had its detractors, a newly developed data technique, social media monitoring, troubles many consumers and watchdog groups alike. Typically, social media monitoring is described as collecting or “scraping” available information about Internet users from sites like Facebook, Twitter, YouTube, Tumblr, discussion boards, blogs and more. Companies are keen to access this type of material because it offers a snapshot of the consumer when he is unaware of scrutiny, offering the most authentic customer opinions. Cheaper and less cumbersome than research methods like surveys and focus groups, social media monitoring has become a panacea to companies in today’s highly competitive economic environment.

Unlike older methods of monitoring such as loyalty cards, social media scanning catches many consumers off-guard. According to a survey by Consumer Action, half of the individuals polled thought that such tracking was illegal, and more than a third were unaware of the extent of online tracking. Given those numbers, it’s probably safe to say that most people are not expecting the level of data depth that Ditto Labs’ software, the newest entry into the online tracking world, can offer marketers. Ditto Labs’ unique software analyzes photos from social sites using background detection and geocoding information. The program can identify the photo location and can also analyze faces, assigning each a “facial mood score” (FMS) to determine the emotion being portrayed in the shot. In addition, logo detection discerns whether there are any branded items in the picture. With the ability to scan for up to 2,500 items in every photo, Ditto Labs’ product can give marketers a detailed outline of a person’s daily habits, inclinations, and travels.

But does this actually impact your privacy? Scanning your private photos for clues about what you're eating or drinking, what activities you like and whom you hang out with certainly seems invasive. But the fact is, even code from Facebook’s “Like” button and Twitter’s “Tweet” can associate your identity with websites you’ve visited, giving marketers a look at your online habits. In truth, many data collectors view social media posts as public and not subject to privacy considerations. They further contend that if the data collected is for a group of individuals that are not personally identifiable then there is no invasion of privacy, even though it may “feel” like it to many people. Their primary argument is that the information collected is anonymous. As defined by the Federal Trade Commission (FTC), "anonymous" in the online data collection world means that there was no access to PII (personally identifiable information). Unfortunately, the new software by Ditto, as well as other emerging technologies, can harvest PII. Personal information is being collected and shared, regardless of what data collectors claim. For example, Facebook gives information on users who click a “Like” button to advertisers, LinkedIn sells information about everything you click on to other users of the site, and firms who purchase user information from Twitter sell it to analytics firms.

A definite ethical problem also arises when companies spy on social media sites to tailor pricing to individual customers using geolocation. In the social media world, one’s location can be quite easy to track with apps like Foursquare, check-ins on Facebook, and the location feature on Twitter. Ditto Labs’ software can take geolocation on the fly, finding you as you post pictures from your grandma’s house at Thanksgiving, the Riviera on vacation, or your hotel on your latest business trip. The Wall Street Journal found that many large retailers, such as Home Depot, Staples, Rosetta Stone, and Discover Financial Services, offered discounted or inflated prices depending on customers’ online habits and physical location. This locational profiling results in what economists term “price discrimination” and gives retailers an arguably unfair advantage, allowing them to discern customer loyalties and willingness to spend before offering a price. For example, a consumer located in a large metropolitan area where prices are relatively high will be more likely to buy an online item at an inflated price than someone from a geographic area with a lower cost of living. Online companies can tailor their prices to each consumer based on his or her social media-derived location.

Another issue is that companies that collect data through social media can combine that data with court records, income data, and tax records on individual consumers to build an identifiable profile. This industry, called data brokering, is growing by leaps and bounds and is rife with duplicity. An FTC investigation prompted an appeal for transparency and accountability after investigators found that “data brokers collect and store billions of data elements covering nearly every U.S. consumer.”

Even more concerning were the unanticipated uses of collected data. The FTC noted that, although the category of “biker enthusiasts” might enable a motorcycle manufacturer to offer discounts to bikers, it can also be used by life insurers to flag consumers who exhibit risky behavior. Data of this type is considered a gray area for groups who gather data from social media sites using software like Ditto’s current offering.

Software of this sort, with its ability to peer into the everyday activities of ordinary people, opens the door to some significant privacy issues. Perhaps most notable is the question of whether government or law enforcement agencies are using this amped-up scanning software to find or track persons of interest. Based on my research, the answer is a resounding “yes.” One social media analysis firm, Snaptrends, presents evidence that these types of scanning systems are already in action. The Guilford County Sheriff’s Office in North Carolina patrols social accounts to locate parties and has used this information to charge over 230 teens with alcohol- and drug-related offenses. It’s a booming market; various companies sell social media scanning software directly to law enforcement agencies. For example, one product, BlueJay by BrightEvents, is used by many police departments to scan tweets for location and download instant photographic evidence during a disturbance. Even more sophisticated scanning products, like Ditto’s, could give these agencies an insidious and possibly deleterious influence on casual social interaction. Besides the obvious problem with this kind of “stalking” by the authorities, there is the question of what effect this online observation will have on an individual’s perceived freedom of speech. As a practical matter, it would make it easier for a jesting conversation to be misconstrued, to the detriment of both the online participants and the authorities.

Much of the "creepiness" of social media scanning software like Ditto’s derives from evidence that it identifies individuals, rather than groups of subjects, in stunning detail. An email to Ditto asking whether its software aggregates or individualizes information resulted in a response centered on the fact that the company exclusively analyzes public information on social media sites. Ditto offered a link to an article to further explain its stance, but the article also fell short of a direct answer. The company’s position, which appears to be that anything in the public realm is fair game, was echoed in the words of its chief marketing officer, Mary Tarczynski. She notes that Ditto is "looking at public photos, so anyone who wants to keep their information more private certainly has ways to share things just with their friends without [the photos] being available to Ditto.” In other words, batten down the privacy filters or prepare to be "scraped."

For its customer Kraft Foods, Ditto Labs uses the software to detect patterns in behavior, such as what drink people like best with their macaroni and cheese dinner and where they are eating. Based on photo location and content, Ditto then categorizes photos into groups such as "foodies" and "sports fans." It’s not evident what happens with the data next, but it is clear that some organizations misuse the information they receive, regardless of their promise to protect it. For example, one marketing analytics startup, Piqora, was found in violation of Pinterest’s rules of image use for collecting competitors’ photos in a graphical dashboard and keeping them indefinitely. So perhaps the problem is not scanning software per se, but what marketing teams are doing with the information gleaned from it.

The long and short of it is, if you don’t want marketers analyzing your social media images, you might want to forgo posting on social media. Even if you are proactive and read up on your social media sites’ privacy policies, much of the relevant language is ambiguous or simply missing, making it difficult for users to understand. A recent Wall Street Journal article notes that third-party usage of posted photos is not clearly disclosed in many sites’ privacy policies. The policies also fail to address the practice of caching (storing images or posts), and many use nebulous language like "reasonable periods" to define how long developers can store photos. Even when rules are clear, some companies find a loophole or just ignore them. Companies can have any number of policies, rules, and regulations, but if they choose to violate or ignore them, your information will be up for grabs.

It is apparent that data obtained through scanning software is susceptible to misuse. Despite mobile marketers’ assertions of self-regulation through organizations such as the Digital Advertising Alliance, it is plain that once your information is out there, it can be used to assemble a personal profile and market to you aggressively, or even be sold to a third party. The FTC attempts consumer protection by recommending that individuals have access to collected data and be allowed to opt out of sharing. It also seeks to require companies to inform consumers that inferences are being drawn from raw data about them and to let them correct false data. For sensitive data, such as health information, the FTC affirms that there should be written consent before it is collected and shared. But these are only guidelines, not legally enforceable measures, and it is clear that consumers whose photos are audited by Ditto Labs are not aware of the scrutiny.

Until there are detailed laws regarding social media scanning, the pictures you post might give clues to your brand loyalties, health, food preferences, marital status, leisure activities, family status and more. Even with legislation in place, you can't rely on businesses to treat your information appropriately. Scanning software that can check your online posts for evidence of alcohol, cigarettes, prescription (or other) drugs, brand preferences and lifestyle choices can give a frighteningly candid picture of your lifestyle, perhaps to your detriment. Many marketers and businesses hide behind the current trend that makes anything in the public realm fair game. But the deeper issue is the expansion of what comprises public space. Legislative organizations need to scrutinize and, perhaps, redefine “public” to exclude those online social spaces that are used for intimate exchanges among invited users so individuals can feel free to be their authentic selves without fear of leaving traceable personal details.

 

Nikki Williams

Bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

 

Yik Yak’s Growing Pains

 

If Facebook is the king of social networks and Twitter the queen, then Yik Yak is Twitter’s immature younger sibling – ambitious, but with no hope for a seat at the throne.

Like Twitter users, Yik Yak users post and view very short messages: 200-character “Yaks” instead of 140-character “Tweets.” Users can reply to posts and even “upvote” or “downvote” posts they like or don’t like – a feature likely inspired by its cousin Reddit. Yik Yak, however, is a mobile app with no functionality as a website. But that’s not the only distinction from its relatives.

First, Yik Yak is location-based. Users write posts and reply to those from others within a 10-mile radius. They can also drop a pin elsewhere on a map and “peek” at Yaks within a 1.5-mile radius of the pin. However, they can only post to their own location.
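Yik Yak’s implementation is proprietary, but a radius filter like the one described is easy to sketch with the haversine great-circle distance; the function and field names below are illustrative only:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (latitude, longitude) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def nearby_posts(posts, lat, lon, radius_miles=10.0):
    """Posts within radius_miles of a point: 10 for the local feed, 1.5 for a 'peek'."""
    return [p for p in posts if haversine_miles(p["lat"], p["lon"], lat, lon) <= radius_miles]
```

A “peek” at a dropped pin is then just the same filter run with the pin’s coordinates and a 1.5-mile radius.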

Second, Yik Yak is anonymous. There are no usernames, passwords or profiles. People sign up with their cell phone numbers and provide no other personal information. Users don’t collect followers or friends – all their posts are automatically available to everyone.

Third, right now Yik Yak has a specific audience: college students. While many over the age of 25 have not even heard of the app – let alone used it – it has spread to more than 1,000 college campuses since its launch in November of 2013.

For that reason, the live feeds tend to revolve around sex, drinking, studying, and dorm life. Some are funny, others sad. Many are profane or mundane.

The Yik Yak website features a few samples, cute quips such as:

“Hey teacher. I'm not asking you to spoon feed me, but don't give me one chopstick and tell me to eat chili.”

“There's a kid in the library using Christmas Lights as an extension cord.”

“Spooning my boyfriend. Out of the container. It's ice cream.”

But for a more authentic assessment, download the app and look at a real feed. Some recent examples from the North Side of Chicago, likely penned by DePaul or Loyola University students:

“Am I the only girl who would love the idea of a three way…”

“I just want weed and someone to make me laugh.”

“Damn. I’m hungry.”

Yik Yak’s founders – two fraternity brothers who graduated from South Carolina’s Furman University in 2013 – have secured $11.5 million from investors this year to expand their brainchild to a wider audience. They told Forbes that they want to make it a source for serious, on-the-ground breaking news. But one of its key features – anonymity – stirs up ethical issues that will obstruct that goal.

In fact, anonymity has already hurt the app’s reputation by making it a venue for cyberbullying. Vicious gossip and cruel lies about specific people surfaced last spring, especially among young teen users, prompting outcries from parents, school administrators, and the media.

Though the app’s terms of use ban those under age 17, Yik Yak has little ability to enforce that rule on an individual basis. In response to protests, the company’s support team now builds geofences, virtual perimeters surrounding real geographic spaces, around high schools and middle schools, preventing the app from functioning on their property. That solution does not protect those of age from victimization.

Users can flag inappropriate posts for the support team to remove, but the app does not promise to respond to flags within a particular timeframe. A defamatory Yak sent at 10 p.m. on a Saturday night would likely have plenty of time to fuel gossip before tech support steps in.

The company has made good efforts to combat the bullying problem, but that hasn’t stopped users from turning to the app to spread unsavory content like sex tapes or from posting bomb and shooting threats. Call it “shouting fire in a crowded theater” 2.0. Yik Yak does not collect personal information, but in cases like these it has handed over – and will hand over – GPS coordinates and IP addresses to law enforcement to help track down people who use the app to break the law.

It’s important to note, however, that the majority of Yaks neither harass specific people nor provoke public alarm.

Two college students interviewed for this story summed up their perceptions of the app.

“People just post dumb stuff,” said a female senior at the University of Iowa over the phone. “It’s like a low-brow version of Twitter.”

What’s the point?

“There is no point,” she said.

A male sophomore at DePaul University said he’s “obsessed” with Yik Yak. He struggled to explain why.

“People post stupid stuff,” he said. “It’s funny.”

Does that entertainment value outweigh Yik Yak’s potential to harm?

Because it’s anonymous, users feel comfortable writing statements they would not say if their names were attached to the sentiments.

There are certainly situations where anonymity is necessary. Many newspapers give journalists permission to reference anonymous sources in a story in order to protect their sources’ jobs – or even, in extreme cases, their lives. There are everyday examples, too: Advice column readers send letters about personal problems and sign them “Lonely in Chicago” or “A Nosy Neighbor.” In both of these scenarios, the nameless remain so to protect themselves from harm – or at least embarrassment.

The same rationale could, of course, be the motive behind some Yaks. But in practice, users more often exploit anonymity to say something crass. They wouldn’t want all their Facebook friends – mom included – to know they hooked up with three guys last night, so they post it on Yik Yak.

Those who truly seek catharsis through this app have other outlets. They can mail in a PostSecret or talk to a trained mental health professional. And those who simply want to disseminate a clever quip should not feel uncomfortable associating their name with the message via Twitter or Facebook.

Ignoring the actual content of the messages, there are a couple of practical perks to anonymity. Signing up for the app is quick, first of all. There’s no need to fiddle around with combinations of letters and numbers to find an available username, and users don’t have to pause even a minute to type out personal information – name, date of birth, and so on. But more noteworthy than a few precious moments saved is the fact that Yik Yak can’t sell users’ personal data without their permission. The company doesn’t have that data in the first place.

What Yik Yak has that other prominent social networks do not is an easy, automatic location feature. (Twitter can filter by location, but it requires opting in and an advanced search.) On Yik Yak, users can quickly find out what’s going on where they are – or anywhere else they want to snoop.

Perhaps it’s not surprising then that the founders of the app want to use this location feature to grow their product. After all, people are already turning to social networks for news. Thirty percent of adults in the United States get news from Facebook, according to Pew Research analysis.

But Yik Yak may need to sacrifice some of its anonymity to fulfill ambitions of becoming a source for reputable news coverage. Users who value truth may demand to know who is providing the information they read because, naturally, people who attach their name to their words are more credible.

By forgoing anonymity, however, Yik Yak will likely lose the audience that has made it a contender in the social network family in the first place. Perhaps the solution is creating two versions of the app: one where users must provide their name and contact information and one where they don’t. The identified – let’s call them Yiks – can write about news from the frontlines while the nameless – Yaks – can stick with dorm room hookups.

Or maybe the easiest, and most lucrative, solution for Yik Yak is to sell its GPS technology to Twitter.

 

Codes of Ethics for Online Reviewers

The Internet has made reviewers of us all. Every positive or negative consumer experience can be shared immediately. The law of averages suggests that in the long run these reviews, when taken together, will provide an accurate reflection of consumer experiences. However, that does not absolve individual reviewers of certain ethical standards and obligations.

With 85 percent of consumers reading Yelp, reviewers have an obligation to be honest, disclosing any bias or conflict of interest. Online reviewers may not be bound by the Association of Food Journalists’ code of ethics, which states that writers must use their real names, fact-check their info, and visit a restaurant multiple times—or even the Food Blog Code of Ethics, but online reviewers should adhere to the following ethical standards:

  1. Disclose clearly if you received payment, freebies, or other compensation
  2. Don’t praise a business if you’re personally connected

The FTC requires bloggers to disclose any compensation. Not doing so is breaking the law, but interpretations of that requirement vary. One amateur fashion blogger I follow indicates clothes were gifts with a small “c/o [brand]” at the end of the post in question. But casual readers might not understand “c/o” to mean, “They sent me this, so I’m giving them free publicity.” I know a blogger’s gotta eat, but reviewers should go out of their way to be transparent. Even if you technically aren’t breaking any laws, being shady about free stuff is a good way to ruin your reputation and lose readers.

Yelp reviewers should adhere to the same standard of disclosure, but sometimes the lines are blurry. Yelp Elite members, for instance, regularly are invited to restaurants and bars for free tastings. I’m one such member, but I’d never gone because it felt underhanded to me – “There’s no such thing as a free lunch” and all that. At my first event last month, I asked more experienced Yelpers if we were, indeed, expected to write glowing reviews in return for free mini-pies and cocktails (even though that had never been explicitly stated), and the sentiment was yes. A simple “I ate for free, thanks to a Yelp event” in a review would probably suffice, but it still made me uncomfortable.

Whole Foods’ CEO, John Mackey, caught a lot of flak in 2007 when it was revealed that he’d been extolling Whole Foods on Internet forums for the past eight years under the name “Rahodeb” (an anagram of his wife Deborah’s name). The New York Times reported that Mackey wrote more than 1,100 posts on Yahoo Finance’s online bulletin board, many gushing about Whole Foods’ stock prices. (He even complimented his own appearance when another commenter insulted it, writing, “I like Mackey’s haircut. I think he looks cute!”)

An NBC broadcast at the time mentioned the Securities Exchange Act of 1934, section 10 of which prohibits “fraud, manipulation, or insider trading.” Mackey’s actions certainly seem to have been manipulative. As former SEC chair Harvey Pitt told NBC, “There’s nothing per se illegal about [Mackey’s actions], but it’s very clear to me, if you’re not willing to do it under your own name, you really have to ask why you’re doing it at all.”

Mackey later defended himself by saying, “I never intended any of those postings to be identified with me.” Well, obviously. Yet the truth came out, making Mackey look dishonest, untrustworthy, and morally suspect. Not only was it unethical, but it also tainted the Whole Foods brand. Translation: If your sister’s vintage boutique needs more customers, don’t write her a fake Yelp review.

  3. Similarly, don’t smear a brand or restaurant out of spite
  4. Realize the ramifications your review could have

Mackey also dissed Whole Foods’ then-competitor Wild Oats in 2005, writing this on a Yahoo stock message board as Rahodeb:

Would Whole Foods buy OATS [Wild Oats’ stock symbol]? Almost surely not at current prices. What would they gain? OATS locations are too small...[Wild Oats management] clearly doesn’t know what it is doing...OATS has no value and no future.

Only two years later, Whole Foods was buying Wild Oats for $565 million in a merger challenged by the FTC. Was Mackey trying to drive people away from Wild Oats so his company could buy it at a cheaper price? Only Mackey knows, but it doesn’t make him look very good.

On a personal level, a former friend of mine was bitter toward an employer after she quit. “I’d write them a bad Yelp review, but they’d know it was me,” she said before giving me a meaningful look. “YOU should write one about how they’re so awful!” she said, only half joking, even though I’d never used the company’s services. (I didn’t.)

I can’t be the only one this has happened to. In fact, a look at any business’ filtered Yelp reviews – the ones you don’t see unless you go hunting for them and enter a Captcha (a challenge-response test used to determine whether the user is human) – shows that fake bad reviews (likely by competitors and their families) are all too common. If you’ve got beef with a company, personal or professional, don’t let it color your actual experience there. No inventing cockroaches to scare diners away from your rival café! Sure, write a negative review if you received legitimately poor service or products, but take it up with management if it’s a bigger issue.

At the risk of sounding overdramatic, these are real people here; real families’ livelihoods are at stake. In an earlier piece for the Center for Digital Ethics, Kristen Kuchar noted that the addition of a single star in a Yelp rating could potentially boost business by 5 to 9 percent, according to Harvard Business School. Should one irate customer have the power to get a waiter fired or put a bistro out of business? Your words can live on even if you delete your review, and you never know the impact they could have on someone down the road.

The consumer mindset means we think opening our wallets entitles us to the royal treatment, but remember that everyone has bad days. If at all possible, visit a business more than once to get a more fully rounded view of the experience there. If nothing else, it’ll make your review more helpful to others.
Holly Richmond

Holly Richmond is a Portland writer. Learn more at hollyrichmond.com.