
“Come as You Are?” Moral Concerns with Facebook’s Identity Policies

 

According to Instagram, her name is “Lil Miss Hot Mess.” It is also her Twitter handle, her YouTube tag and her stage moniker. And it is the reason Facebook kicked her off its platform for a short time.

The San Francisco-based drag queen and friends protested Facebook’s decision, going to traditional media and online news sites to explain how their “names” were under investigation by the popular social network. As a result, there were meetings, Facebook posts by both the company and the protestors and lots of discussion across the blogosphere.

The question of how people should be identified on social media has become a hot topic in multiple communities, including transgender and Native American groups, both of which felt under fire when Facebook started questioning whether the names people posted on the site were “real.”

These questions – which are still under debate between Facebook and these crusaders – bring up an important issue when it comes to online identities and social media names. Should someone have to use the moniker listed on their Social Security card when they go online? Should alternative names be permissible on sites that claim to provide “safety” for their users, as Facebook does in its usage policies? Where are the lines between someone’s virtual self and the name that shows up in the address line of her utility bill?

The debate raises concerns not only for people who use stage names, but also for people whose very identities are in flux or may be changing for a variety of reasons. Their name might be one thing on their driver’s license, but their identity – their very sense of self-worth and connection with the rest of the world – may be something very different. Should social media sites, which are about creating community and communicating with the world around you, accommodate these sensibilities?

The story starts years ago when Facebook established its name policy. According to the site: “Facebook is a community where people use their authentic identities. We require people to provide the name they use in real life; that way, you always know who you're connecting with. This helps keep our community safe.” There is no further information on what “safe” means. Facebook is publicly traded, and it has to answer to investors. As such, it makes sense for the company to establish set policies. On its end, Facebook believes that it must seek out truthfulness in its membership, whose numbers help determine its advertising rates and the like. These are issues that investors would want to discuss, and these have been already disclosed in Facebook’s public filings.

But the naming debate heated up to the point of national discussion about seven months ago with Sister Roma, Lil Miss Hot Mess and other self-identified drag queens. They found themselves facing deactivation on Facebook because of the names on their profiles. While Facebook allows some naming leeway on Pages, the site requires individuals to use their “authentic identities” on their personal profiles. The issue became more complicated when a number of Native Americans stepped forward publicly to say that they, too, had been shut out of Facebook when their names triggered an alert via the site’s naming policy.

As for Lil Miss Hot Mess, she wrote that she truly felt the loss of Facebook as a place to share her thoughts and dreams.

“As much as I roll my eyes at our collective obsession with Facebook, watching my online identity vanish hit me hard. It’s true: You don’t know what you’ve got till it’s gone. To have years of photos, conversations and other traces of a life lived digitally suddenly disappear felt like an erasure of my very existence,” Lil Miss Hot Mess told Salon for a September 2014 essay. “If Facebook really wants to provide a safe place for authentic social interactions, it needs to revise its policies and procedures to let all users define realness — starting with our names — for ourselves.”

When pressed by these groups and other sources, Facebook offered up some context within a statement on its website, known as a “note” on Facebook, from Chris Cox, the company’s chief product officer. In his statement, Cox writes that the company hopes to avoid bullying, trolling and intolerance on its platform via its naming policy. Cox did not offer any real-life examples of this issue in that 2014 update; it would have been helpful from a user’s standpoint to read an anecdote as to how Facebook “protected” people because of their decisions.

According to Cox, Facebook believes in its overall policy for two reasons: “First, it's part of what made Facebook special in the first place, by differentiating the service from the rest of the internet where pseudonymity, anonymity, or often random names were the social norm,” Cox wrote. “Second, it's the primary mechanism we have to protect millions of people every day, all around the world, from real harm. The stories of mass impersonation, trolling, domestic abuse, and higher rates of bullying and intolerance are oftentimes the result of people hiding behind fake names, and it's both terrifying and sad. Our ability to successfully protect against them with this policy has borne out the reality that this policy, on balance, and when applied carefully, is a very powerful force for good.”

While Facebook did open discussions with Lil Miss Hot Mess and others, those meetings apparently have not ended the protestors’ concerns. Within the past few months, activists, including members of the drag-queen community, have campaigned against using Facebook for event sign-ups and at community events. They have taken to other websites to raise their continued concerns, noting that early agreements to review and resolve their issues have either not been implemented or seem to have been ignored by Facebook entirely. The issue – as noted by hashtags such as #MyNameIs and #NoPrideForFacebook – seems to be one that will continue over the long term.

As an individual who uses Facebook personally and professionally, I’d have to side with the activists who want further review of the company’s naming policies. There are good reasons: people’s names are a constantly evolving part of who they are. Would Facebook call Caitlyn Jenner into question if she were to update her personal profile to reflect her gender transition? Would it require some kind of evidence, such as a magazine subscription or a driver’s license?

Having some leeway between what is current and what is happening in the months and years to come seems not only important but necessary. Social media is not a job application. It is not a passport application. For all intents and purposes, it is a place where people talk about their grandchildren, post kitten videos and reunite with high school acquaintances.

Because it has become part of our collective everyday life, one could argue that Facebook has an obligation of sorts to work with its users. It needs to have more than a single meeting or coffee date with its community. There needs to be more than a poll posted at the top of its site – and it hasn’t even done that much regarding its naming policy. Sure, it can pin a note to the top of its site asking you to donate to international emergencies, but it seems shy about sharing what is happening behind the curtain. It does feel like Facebook prefers to operate quietly rather than publicly when it comes to these policies.

Whether Facebook redeems itself in the protestors’ minds remains to be seen. What is certain is that social media, despite its seemingly ubiquitous place in modern life, remains a relatively new form of communication that still needs to be studied and refined. We use names for a variety of reasons – and we change them regularly, through means as simple as a nickname or as dramatic as a gender transition. If Facebook seeks to be the meeting place for its users, it needs to serve their needs and allow them to share their identities in very real ways.

Karen Dybis

Karen Dybis is a Detroit-based freelance writer who has blogged for Time magazine, worked the business desk for The Detroit News and jumped on breaking stories for publications including City’s Best, Corp! magazine and Agence France-Presse newswire.

 

 

Altered Image, Vanished Trust: Photojournalism in the Age of Digital Manipulation

 

Every day, the trusting public looks at the work of photojournalists online, in magazines or in newspapers, assuming those visual representations of news events are truthful. These all-important images can inspire, spark debate or incite anger, action or even rebellion.

So what happens when an image is changed, whether by setting up a scene or through digital manipulation? There are dozens of software applications that can easily change a photograph to show whatever the creative mind of the manipulator wants it to show, for good or for ill.

The question of how – or whether – a photo should be manipulated for public consumption was brought into sharp focus by a recent photography show in New York City. The Bronx Documentary Center hosted a curated display entitled “Altered Images: 150 Years of Posed and Manipulated Documentary Photography,” which garnered national attention for its tricky yet important subject matter.

Organizer and photographer Michael Kamber said he created the exhibit to show some of the most controversial examples of manipulated photojournalism. The photos ranged from as early as the American Civil War to this year’s World Press Photo contest, in which some 20 percent of participants were disqualified for digitally altering their submitted images.

“The World Press Photo Contest must be based on trust in the photographers who enter their work and in their professional ethics,” said Lars Boering, the managing director of World Press Photo, in a statement about the contest controversy. “We now have a clear case of misleading information and this changes the way the story is perceived. A rule has now been broken, and a line has been crossed.”

A line, indeed, has been crossed. Other than cropping for size, a photo cannot be changed digitally in any way if it is to be considered true, accurate and fair to the viewer. Period. There is no way around this if you want to gain or keep the public’s trust and respect.

One notable point from the exhibit is that several of the manipulated photos were altered by someone other than the photographer. In those cases, the photographer noticed the change right away and contacted the editor or publication to report the problem. But once the image is out there for the public’s consumption, the damage is done.

As a news reporter with more than 20 years of experience, I can say that I have worked with some of the finest photojournalists in the nation. I consider my years at The Detroit News among my most enjoyable, especially because of the photographers I worked with. The Photo Desk had a standard that was never doubted or questioned: You never set up a photo. Never.

What did it mean to “set up a photo”? It meant that you didn’t send a news photographer to a ribbon cutting; that wasn’t going to end up in our newspaper. You didn’t tell a source to prepare a “fake” scene for the photographer to capture. You didn’t give the photographer a set-up moment to show off the story’s central theme. If the story didn’t happen when the photographer was there, there was no story. A photo had to happen naturally – as though the photographer were a fly on the wall, observing events as they happened and capturing the image as if you and that photojournalist were there together watching the story unfold.

I believed in that mantra then, and I still believe in it now. I trusted every image that I saw in the newspaper then, and I want to believe in every image that I see now. But when you see the problems that have come up within the photojournalism world because of digital manipulation, you see why this trust has been shaken.

If you think I’m taking too strong a stance, let me back it up with comments from three photographers I have worked with on a regular basis, all of whom say the same thing. If you see their photos, you should trust that they have been created honestly and without digital alteration. If you’re creating art, that is one thing, and some changes from the original photo are to be expected. However – and they could not have been more adamant about this – if you are purporting to be a photojournalist and presenting news, that is something entirely different.

Jessica Muzik comes to the subject from two points of view. She is the Vice President of Account Services for Bianchi Public Relations, Inc., as well as the owner of Jessica Muzik Photography LLC. Her photographs have been published online and used by news organizations.

“I don’t think one can lose more credibility as a photojournalist than to alter or set up photos,” Muzik said. “The public trusts photojournalists to capture real moments and timely events, not to compromise their ethics by altering an image to fit the needs of a particular media outlet.”

“In my line of work, I always say that what the media reports is considered 10 times more credible than any advertisement that can be placed because we trust that the media are objective in all matters and that includes photojournalists,” Muzik added. “If a photojournalist feels the need [to] alter or set up an image, that is not photojournalism, but rather photography.”

Asia Hamilton is the owner of Photo Sensei, a company that offers photography workshops to professionals and amateurs in several cities. Her goal in part is to help people in image-sensitive cities, including Detroit, show off their photo skills with respect to themselves and the community, demonstrating both their creativity and the city’s best assets.

Because Detroit often gets a bum rap when it comes to “ruin porn” – images of the city’s abandoned or burned-out buildings – Hamilton often works with people to find other ways to highlight Detroit via her Photo Sensei classes. Thus, she too has a tough stance when it comes to manipulating an image within the news realm.

“I think photo altering is ok if the photography is art or editorial related,” Hamilton said. “However, photojournalism should not be altered because it is a documentation of facts. The news can only be trusted if it is completely factual.”

My favorite comment came from John F. Martin, a news photographer who has a commercial business that does work for news agencies as well as corporations.

“Staging or otherwise manipulating an image from a news event is lying, plain and simple. It's no different than a writer making up a quote. This was instilled in us on day one of journalism school (Ohio U, '96). It turns my stomach when I read about these seemingly increasing incidences,” Martin said.

That’s the crux of the problem, isn’t it? Photo manipulation has happened too much and too often. That’s reprehensible and cannot be allowed to stand. The situation has grown so dire that for-profit businesses have been established to find and expose photo manipulation.

One such company is Fourandsix Technologies Inc., whose founder, Dr. Hany Farid, recently introduced a new service, Izitru. Its purpose is to allow anyone who puts images online to prove beyond a shadow of a doubt that those images are authentic. Users do this by submitting their photos for testing, thereby receiving an Izitru “trust rating” that viewers can check.

Yes, the world of photojournalism has come to that—a trust rating—frightening and unacceptable.

 

Karen Dybis

Karen Dybis is a Detroit-based freelance writer who has blogged for Time magazine, worked the business desk for The Detroit News and jumped on breaking stories for publications including City’s Best, Corp! magazine and Agence France-Presse newswire.

Bridging the Digital Divide

 

With computer literacy becoming an increasingly important skill in college and the workforce, middle schools and high schools across the nation must prepare students to meet modern expectations. This is a major challenge for underfunded districts where money is scarce and the costs of equipment, high-speed Internet and training seem out of reach. But the digital divide – the gap between those who have access to computer technology and those who do not – will not go away without school investment. Although funding for critical programs is often at stake, school district representatives must decide whether it is their ethical responsibility to integrate computers into their curricula. Given the growing digital access gap, the answer to that question should be a resounding ‘yes.’

The myriad costs associated with providing classroom computers prevent many budget-strapped administrators from adopting a new, technology-focused teaching approach. It is understandably difficult for school administrators to ask teachers to utilize digital tools if investing in technology means putting off faculty raises. Unfortunately, the digital gap will continue to widen if it is not addressed, and administrators need to prioritize the use of modern equipment.

Nice computer labs are no longer the benchmark in most middle schools and high schools. It is now the norm for teachers to incorporate technology into the classroom, allowing students to participate in lessons by working on computers individually or in groups. Many schools are also transitioning to one-to-one (1:1) curricula wherein every child is assigned a laptop or tablet to use in the classroom or at home. To truly prepare students for the road ahead, schools must move beyond labs and into teaching spaces, encouraging instructors to assign tasks that involve computers.

Long-term training is the most important outcome of computer-rich learning environments. But digital investments also pay off in the short term, as teachers benefit from organizational tools, immediate feedback and ‘differentiated learning’ applications. For example, teachers can customize lessons to meet each student’s level of understanding by asking the class to watch videos or complete assignments and answer questions on their own. Depending on each student’s speed and results, software and online applications will provide new content, allow more time to complete the work or offer review assistance on confusing sections. Accounting for the individual needs of students is much more difficult when a single lesson is delivered to a diverse classroom.

Low-income schools need more than equipment to close the gap, so it is crucial that administrators introduce digital learning practices quickly. In addition to computers, they must provide high-speed Internet that can handle modern digital requirements. Schools need significant bandwidth for students throughout the building to go online, participate in interactive assignments and watch videos at the same time. With fewer than 20 percent of educators believing their schools offer Internet connections that satisfy their scholastic needs, connectivity is a problem, especially in rural areas where Internet service providers offer few affordable high-speed options. According to the Federal Communications Commission, 41 percent of rural schools could not obtain high-speed connections if they tried.

The federal government has stepped up and made significant strides to help underprivileged schools obtain high-speed Internet and digital learning tools. In 2013, President Obama introduced the ConnectED initiative to provide teaching assistance and high-speed Internet to schools and libraries across the country, paying particular attention to rural regions. He lauded North Carolina’s Mooresville Graded School District and its Superintendent Mark Edwards for adopting a digital curriculum despite limited resources. Several years after Mooresville schools provided a device to each student in grades 3-12, their graduation rates increased by 11 percent. Although the district ranked No. 100 of 115 in terms of dollars spent per student, it had the third-highest test scores and second-highest graduation rates.

Investing in eye-catching technology and updating curricula is not easy for all districts. The combined costs of new equipment, high-speed Internet and teacher training are difficult to cover when schools have other issues to address. As Kevin Welner, director of the Education and the Public Interest Center at the University of Colorado at Boulder, stated: “If you’re at a more local level trying to find ways to simply keep from laying off staff, the luxury of investing in new technologies is more a want than a need.”

This is an understandable problem, but seeing technology as a want rather than a need is an outdated mindset. School districts should make digital curricula a priority rather than a novelty. When budgets cannot be rearranged, schools can join together to purchase equipment at reduced bulk pricing and seek funds from communities and businesses. Major companies, such as Best Buy, offer direct assistance to schools while various nonprofits partner with corporations to secure academic resources.

Before deciding to put off investments in digital education, district superintendents should remember the numerous obstacles their students already face on the wrong side of the digital divide. Underprivileged students who are not exposed to digital education in the classroom are likely to be hampered by limited equipment and Internet access at home. According to the Pew Research Center, approximately one-third of households with annual incomes under $50,000 and children between the ages of six and 17 did not have access to high-speed Internet. Any access to computers helps, but schools are the primary point of exposure for many students, making it particularly important to have well-equipped classrooms and trained teachers.

Occasional investments in computer labs only scratch the surface of the problem, and sporadic splurges offer limited results. To bridge the divide, district representatives must dedicate themselves to bringing computers into the classroom. Those who prioritize other issues are not being fair to today’s students. Los Angeles Unified School District Superintendent Ramon Cortines made headlines in February when he reversed his predecessor’s much-publicized promise to give each student, teacher and school administrator an iPad. “We've evolved from an idea that I initially supported strongly and now have deep regrets about,” he stated, adding that a more balanced approach to spending was necessary. Ultimately, both the iPad initiative and faculty raises were put off, and, more importantly, one more class of students failed to receive sufficient digital training.

Integrating the technology needed to bridge the digital divide should be a priority for administrators in all middle schools and high schools. Despite the numerous financial problems in low-income areas, school and district administrators must realize that digital curricula are an ethical priority, even when that means putting off other school needs and seeking outside revenue. For students on the wrong side of the gap, getting digital access is a major burden, and schools need to keep chipping away at its ongoing cost.

 

Paulina Haselhorst

Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.

Consorting with Black Hats and Negotiating with Cybercriminals: The Ethics of Information Security

 

October is National Cyber Security Awareness Month, and the fact that it’s a month instead of a day speaks volumes about the growth and prevalence of cyber crimes.

The international security company Gemalto proclaimed 2014 the year of mega breaches and identity theft. According to the company’s Breach Level Index:

- 1,023,108,267 records were breached in 2014 – a 78 percent increase in breached records from 2013

- There were 1,541 breach incidents

How frequently are data records lost or stolen?

- 2,803,036 every day

- 116,793 every hour

- 1,947 every minute

- 32 every second

 

North America accounted for 76 percent of total breaches.
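
For readers who want to see where those per-day, per-hour, per-minute and per-second figures come from, they follow from simple division of the 2014 annual total. The short Python sketch below reproduces the arithmetic; it is an illustration only, not part of Gemalto’s report, and any small differences are just rounding.

```python
# Illustrative arithmetic only (not from Gemalto's report): derive the
# per-day/hour/minute/second rates from the 2014 annual total of breached records.

RECORDS_BREACHED_2014 = 1_023_108_267  # total records breached in 2014, per Gemalto

per_day = RECORDS_BREACHED_2014 / 365
per_hour = per_day / 24
per_minute = per_hour / 60
per_second = per_minute / 60

print(f"every day:    {per_day:,.0f}")     # ~2,803,036
print(f"every hour:   {per_hour:,.0f}")    # ~116,793
print(f"every minute: {per_minute:,.0f}")  # ~1,947
print(f"every second: {per_second:,.0f}")  # ~32
```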

And 2015 is shaping up to be another stellar year – it has already produced high-profile security breaches involving Ashley Madison, CVS, Anthem, and even the IRS.

So far, the Ashley Madison hack has been the most high-profile breach of the year. Ashley Madison is a social-networking site for married men and women looking to find partners for extramarital affairs, and it claims to have 40 million users. To date, the site’s hackers have released seven years of credit card data, in addition to names, addresses and phone numbers – and the users’ desired preferences in potential partners. This breach has resulted in public embarrassment, marital strife, possible blackmail situations, and at least one suicide.

And while the breaches themselves are highly publicized, much less is known about the people behind the scenes who are charged with protecting company data, their responses to data breaches, and the ethical decisions they face.

A 2015 report by AlienVault, a threat intelligence and security management provider, shines a spotlight on the many issues facing security professionals. Below are the responses to three questions selected from the survey portion of the report, along with ethical analyses of the respondents’ answers.

Question 1: Do you ever visit hacker forums or associate with black hats to learn about the security you need?

- 51% – Yes

- 48.3% – No

 

Javvad Malik, the report’s author, notes that some companies forbid interactions with black hats. A black hat is a hacker who breaks into computers and networks for malicious reasons, as opposed to a white hat (who may be an employee or consultant) who breaks in to locate and identify vulnerabilities. However, if the type of information needed to mount an effective defense is not available through legal channels, roughly half of respondents feel they need to do whatever is necessary to obtain credible data in a timely manner.

I spoke with Abraham Snell, who has an MBA in Technology Management from Auburn University and is a Senior IT Infrastructure Analyst at the Southern Company in Birmingham, Alabama. He views visiting hacker forums or consorting with black hats as an instance in which the end justifies the means. “It is a brilliant idea,” Snell said. “It is just the reverse of criminals getting police best practices so they can be more successful criminals. In this case, the side of right is learning about the dark side before they strike. In some cases, this will be the only warning of things to come.”

Question 2: What would you do if you found a major vulnerability on a company’s system or website?       

- 61.7% – Privately disclose to them

- 12.0% – Publicly fully disclose

- 9.8% – Disclose without releasing details

- 9.5% – Do nothing

- 8.2% – Tell your friends

- 5.5% – Claim a big bounty

- 2.5% – Sell on the black market

 

While privately or publicly disclosing the vulnerability seems the most logical choice, it is not uncommon for companies to threaten legal action against the person reporting the security risk. Fortunately, only a small percentage of respondents would seek financial compensation, but it is troubling that almost 18 percent would either do nothing or just tell their friends. However, if companies provide a hostile environment in which this type of disclosure is not welcome, can security professionals be blamed for their lackadaisical attitude?

According to Snell, there are definitely ethical issues involved in the next steps taken when a vulnerability is discovered. “Even if this type of disclosure is not welcome, you have a moral obligation to reveal the vulnerability,” Snell said. “If the information is breached, people may have their financial and personal information stolen, even their identities may be stolen. If you fail to sound the alarm, you’re just as guilty as the people who actually steal the information because you knew it could happen and you did nothing.”

After viewing the other choices selected by respondents, Snell said they are negligent at best, and most likely criminal in most states. “Telling your friends, unless they are security experts or regulators, is the same as doing nothing,” Snell said.

Regarding the bounty, Snell said, “I’m unclear on how you claim a big bounty unless it becomes a major international issue because companies will not pay their own employees to do what they are already paying them to do.” And if the employee tried to claim a bounty anonymously, that could lead to various legal implications. “The vast majority of people who do what Edward Snowden did end up like he is … a man without a country,” Snell said. He also explained that selling the info on the black market is both unethical and illegal.

Question 3: If your company suffers a breach, what is the best course of action?

- 66.8% – Use the event to convince the board to give you the budget you need

- 25.7% – Tell the regulator, pay the fine, and move on

- 9.0% – If nobody knows, just keep quiet

- 6.6% – Go to the media and brag about how you ‘told them so’

 

Overwhelmingly, the survey respondents feel that the only way they can get the resources they need is in the aftermath of a major cyber attack.

In fact, former White House Cyber Security Advisor Richard Clarke once said, “If you spend more on coffee than on IT security, you will be hacked. What's more, you deserve to be hacked.”

Darryl Burroughs, deputy director of IMS Operations for the City of Birmingham, Alabama, shared an interesting perspective with me: “If a compelling case was made to increase the cyber security budget and the company blatantly refused to do so, the ethical dilemma rests with the Chief Financial Officer and others who make budget decisions that do not take into consideration IT requests,” Burroughs said.

He added, “The real question is what unethical decision did they make when they funded something less important than the company’s security?”

And that’s also a question that Sony’s Senior Vice President of Information Security, Jason Spaltro, has likely asked himself over and over again. Back in 2007, Spaltro weighed the likelihood of a breach and concluded, “I will not invest $10 million to avoid a possible $1 million loss.” At the time, that may have sounded like an acceptable business risk. However, in 2014, when the company’s data breach nightmare dominated the headlines – and late-night talk show monologues – for months at a time, that $10 million would have been a sound investment.

Snell said there are a lot of factors that determine if using a breach to increase the budget is ethical or not. “I wouldn’t say most companies wouldn’t increase the budget anyway, but I would say that many current and previous executives are not trained in technology, so the threat of security breaches is not a topic that resonates with them,” Snell said.

As a result, he thinks that in many cases it takes a major incident to get funding funneled to the right programs that will protect the company. “The problem with security is that it is mainly a cost when things are going well.  You only see the wisdom of the investment after a breach occurs or is attempted.”

On the other hand, Snell said if the budget is adequate, and fear mongering is being used as a tactic to get more money, that is definitely unethical.

Negotiating With Cybercriminals

The process of retrieving stolen data from cybercriminals is another ethically murky area for security professionals. A recent whitepaper by ThreatTrack Security reveals that 30 percent of respondents would negotiate with cybercriminals for data held hostage.

However, 22 percent said it would depend on the stolen material. Among this group:

- 37 percent would negotiate for employee data (social security numbers, salaries, addresses, etc.)

- 36 percent would negotiate for customer data (credit card numbers, passwords, email addresses, etc.)

I also spoke about negotiating with cybercriminals with Dr. Linda Ott, a professor in the department of computer science at Michigan Technological University who teaches a class in computer science ethics.

As with most ethical questions, she does not believe there is a simple answer. “One might argue that a company should be responsible for paying whatever costs are necessary to recover the data since it was presumably because of the company's negligence that the information was able to be stolen,” Ott said.

She explained, “However, unlike paying a ransom for the safe return of a person, the return of the data does not guarantee that the cybercriminals no longer have the data. And if they have a copy, paying the ransom merely amounts to enriching the criminals with no gain for the company whose data has been compromised.”

However, Ott noted that in certain situations the case for paying the ransom would be stronger. “For instance, if the company did not know what employee information was compromised, one might argue that they should pay for the return of the data,” Ott said. “In this scenario there is a benefit to the victims of the crime since they could be accurately notified that their information had been stolen.”

Big Brother: Friend or Foe?

ThreatTrack’s survey also reveals a range of opinions regarding the government’s role in cybercrime extortion investigations:

- 44 percent said the government should be notified immediately and granted complete access to corporate networks to aggressively investigate any cybercrime extortion attempts

- 38 percent said the government should establish policies and offer guidance to companies who fall victim to cybercrime extortion

- 30 percent said companies should have the option of alerting the government to cybercrime extortion attempts made against them

- 10 percent said the government should make it a crime to negotiate with cybercriminals

Ott said the fact that most companies do not want government intervention is problematic. “Without government investigations of these matters, the cybercriminals remain free to continue their illegal activities,” she said. “This can ultimately lead to the theft of information of many more people.”

However, she explained, “Companies tend to do their analysis based on consideration of the impact on their reputation and the potential impact on their stock price, etc.  They have little motivation to consider the bigger picture.”

So, how long did it take you to read this article? If it took you five minutes, 9,735 data records were lost or stolen during that time frame. That’s why Burroughs concludes, “The question is not if you will be breached - the question is when.”

Terri Williams

Terri Williams writes for a variety of clients including USA Today, Yahoo, U.S. News & World Report, The Houston Chronicle, Investopedia, and Robert Half. She has a Bachelor of Arts in English from the University of Alabama at Birmingham. Follow her on Twitter @Territoryone.

Crime and Punishment: The Criminalization of Online Protests

 

Online petitions, boycotts and speaking out on social media are common ways to raise your voice about a particular issue or individual. But a more controversial method, hacktivism (hacker + activism), has been increasingly employed to further agendas. Hacktivism is defined as hacking – breaking into a computer system – for political or social ends, and it is currently illegal. Proponents claim hacktivist actions mirror real-world protests but incur harsher penalties because they are carried out online. Are they right, and are hacktivists indeed treated in a way that violates our notions of justice and fairness?

The Computer Fraud and Abuse Act (CFAA), also known as the “anti-hacking law,” was created in 1984 to criminalize unauthorized access to computers. Since then, the law has been modified five times, with each modification resulting in a broader definition of what constitutes “unauthorized access.” Opponents of the CFAA argue that the expansion potentially regulates every computer in the U.S. and many more abroad. Intentionally vague language within the law allows the government to claim that something as minor as violating a corporate policy (as in the case of the United States v. David Nosal) is equivalent to a violation of the CFAA, putting even minor offenders at risk for serious criminal charges. But is comparing hacktivism with real-world protests an apples-to-apples equation?

Hacking has been around for decades. Steve Wozniak and Steve Jobs first hacked into the Bell Telephone System in the mid-70s with the famous “blue box” to place (i.e. steal) free long-distance calls. In the mid-1980s, a college student protesting nuclear weapons released a computer virus that took down NASA and Department of Energy computers. And in 1999, Hacktivismo, an international cadre of hackers, created software to circumvent government online censorship controls, as they believe freedom of information is a basic human right. Since then, the rapid proliferation of online groups able to shut down individual, corporate and even government computers has become a focus for the FBI and other agencies concerned about this trend.

Hacktivism made headlines in 2010 when the group Anonymous reacted to events arising from the arrest of WikiLeaks founder Julian Assange. Assange’s detainment, which coincided with the WikiLeaks release of classified information hacked from U.S. intelligence channels, had supporters outraged. Feelings escalated when MasterCard, Visa and PayPal refused to process donations earmarked for Assange’s defense fund, leaving the website recruiting those donations reeling. Anonymous fought back by hacking into and disrupting the websites of all three financial companies, causing service outages and millions of dollars in damage.

Anonymous achieved its goal by mounting a distributed-denial-of-service (DDoS) campaign. Interested parties could join the Anonymous coalition through direct participation or by downloading a tool that allowed their computer to be controlled by Anonymous operatives. Dr. Jose Nazario, a network researcher with Arbor Networks, claims that it takes as few as 120 computers linked together in this way to bring down a large corporation’s web presence. Anonymous insists this technique is not hacking; it is simply overloading a website by flooding it with traffic so that pages become impossible to load for legitimate visitors. According to Dylan K., an Anonymous representative: “Instead of a group of people standing outside a building to occupy the area, they are having their computer occupy a website to slow (or deny) service of that particular website for a short time.” But this is not equivalent to a real-world protest: hacktivists don’t need the voices of thousands for their protest to be effective. A mere 120 computers can suffice to take down an entity – something 120 people on the sidewalk could not manage.

The FBI soon unearthed the identities of some of the hacktivists involved in various Anonymous hits. One, Fidel Salinas, was charged first with simple computer fraud and abuse. By the end of seven months, there were 44 counts of felony hacking looming over him for his part in disrupting government servers. Salinas claims the escalating charges were due to the FBI increasing pressure on him to turn informant. This kind of “encouragement” is nothing new. Cybercriminal and Anonymous hacker Hector Xavier Monsegur, under the internet alias “Sabu,” initiated the high-profile attacks on MasterCard and PayPal in response to the Assange arrest. By 2012, Monsegur had been arrested and was busy working in concert with the FBI to unearth the identities of other Anonymous members, who were then prosecuted under the CFAA.

The Electronic Frontier Foundation (EFF), which, according to its website, is “the leading nonprofit organization defending civil liberties in the digital world,” is promoting reform of the CFAA through consumer education, petitions and other legal means. One of its central arguments for CFAA reform concerns the treatment of hacktivist Aaron Swartz, who downloaded millions of scholarly journal articles from the JSTOR database, a subscription-only service, through MIT’s campus network. Swartz’s actions were predicated on his belief that publicly funded scientific literature should be freely accessible to the taxpayers who paid for it. After his arrest, federal prosecutors charged him with two counts of wire fraud and 11 violations of the CFAA, amounting to up to 35 years in prison and over $1 million in fines. Swartz committed suicide a few days after declining a plea bargain that would have reduced the time served to six months in a federal prison. The EFF explains that if his act of political activism had taken place in the physical world, he would have only faced penalties “…akin to trespassing as part of a political protest. Because he used a computer, he instead faced long-term incarceration.” However, the EFF seems to gloss over the fact that, no matter how pure his reasoning, when Aaron Swartz played Robin Hood he wasn’t merely trespassing – he was stealing.

In response to Swartz’s untimely death, the EFF suggested changes in the way the CFAA calculates penalties, seeking refinement of overly broad terms and arbitrary fines. Its emphasis is on the punishment fitting the crime, and its hope is to align the CFAA’s penalty recommendations more closely with those received for the same acts when they arise during a physical political protest. The EFF is currently working on a full reform proposal that they hope will restrict the CFAA’s ability to criminalize contract violators or technology innovators while still deterring malicious criminals.

It’s true that the CFAA is too broad and may allow prosecutors to apply draconian charges for misdemeanor crimes, but the EFF is not taking into consideration the real harm done by hacktivist “protests.” A physical political protest is most often a permitted, police-monitored event. It may cause temporary (a few hours or days at most) disruption of business, garner media attention and alert the public to the seriousness of the issue. The online protests staged by “Operation Payback,” Anonymous and, most recently, The Impact Team, the Ashley Madison hacker(s), resulted in far more damage and disruption to the targeted organizations than would a “real world” protest. These acts are more akin to vigilantism or even terrorism, since the hacktivists rely on intimidation in response to self-defined injustices – and the outcomes often involve harm to innocent people. If a physical protest had resulted in the same outcome – a company looted, lives destroyed and money lost – it would be considered a criminal act.

Hacktivists seem hardened against the collateral damage they inflict in achieving their goals, arguing that the end justifies the means. The recent Ashley Madison scandal is a prime example of hacktivism without conscience. Hackers calling themselves The Impact Team threatened Avid Life Media, Inc., the parent company of infidelity website Ashley Madison, vowing to release information about its customers if the company didn’t shut down the site. They believed that Ashley Madison was faking most of the female profiles available on the site to scam more men into signing up. When the company continued operating, The Impact Team released the data, potentially ruining marriages, destroying careers, and compromising the personal data of users who now face threats of blackmail and identity theft. The company itself is facing $500 million in lawsuits, but the toll on its customers – the very people The Impact Team claimed to be helping – was heavy indeed.

Similarly, Anonymous’ hacking of the PayPal website alone cost that company $5.5 million in revenue and damaged numerous small businesses and individuals who were unable to complete financial transactions during the shut-down.

Hacktivists claim their actions are equivalent to real-world protests and, as such, should be protected from criminalization. It’s true that citizens’ right to peaceful public assembly is protected by the First Amendment to the United States Constitution and further guaranteed by the Supreme Court. However, the law is clear that the government can put restrictions on the manner, time and place of such gatherings to preserve order and safety.

The First Amendment does not guarantee the right to assemble when there is the danger of riot, interference with traffic, disorder, or any other threat to public safety or order. One group’s right to speak out should not conflict with rights of other individuals to live and work safely. This should be true online as well as in the physical world, but hacktivists often act outside of this stricture. Mikko Hypponen, chief research officer for F-Secure, sums it up well: “The generation that grew up with the Internet seems to think it’s as natural to show their opinion by launching online attacks as for us it would have been to go out on the streets and do a demonstration. The difference is, online attacks are illegal while public demonstrations are not. But these kids don’t seem to care.”

Online groups should not be allowed to achieve their desired results using extortion, intimidation, terror or vigilantism. But it is equally important that governments and corporations not have the right to sway, direct, or otherwise channel the free will of the people toward or away from any one purpose by using force or fear of penalty. And setting laws in place that make non-violent, non-damaging civil disobedience a major infraction of the law is tantamount to muzzling free speech. Gabriella Coleman, Assistant Professor of Media, Culture and Communication at New York University, writes that if DDoS attacks by hacktivists are always deemed unacceptable, this would be “damaging to the overall political culture of the internet, which must allow for a diversity of tactics, including mass action, direct action, and peaceful of (sic) protests, if it is going to be a medium for democratic action and life.”

Both sides are wrong to some extent. The problem with internet hacktivists is the veil of anonymity behind which they hide. Real-world political protests require that people stand up for what they believe – physically. They put their faces out there, sign their names on petitions and take responsibility for their views. The Supreme Court has ruled that anonymous speech deserves protection, but hacktivism is not speech – it is action. Hacktivists can intimidate and extort individuals, corporations and governments without having the courage to step forward. Sometimes people will take actions anonymously that they would not take under scrutiny, a truism that makes groups like Anonymous capable of causing chaos on a worldwide scale.

There can and should be many ways to speak your mind and promote your political agenda online, and you should be able to do so without fear of reprisal from law enforcement. However, intentional damage inflicted by anonymous disruptive mass action can also hurt unrelated innocent individuals. With our society’s level of reliance on internet services for business and daily living, hacktivist activity has potentially far-reaching consequences. Shutting down banking or payment capabilities doesn’t just hurt the targeted banks and credit card companies; it prevents many small businesses and individuals from conducting necessary business and impacts their daily lives in a negative way. Releasing the personal data of subscribers or customers to harm a government or company doesn’t just hurt the target—it sets thousands, sometimes millions, of lives on edge.

And let’s face it: Breaking into a store in a “real world” protest, stealing its customer lists or proprietary data and either disseminating it or destroying it is not trespassing. It’s not a misdemeanor. It’s not peaceful. It’s theft at best and terrorism at worst.

Online activists should mount an up-front, highly publicized, web-based boycott of their opponent – peacefully and legally – to exercise their freedom of public redress in the way the Constitution intended. The Impact Team could have constructed a viral message letting people know that Ashley Madison was scamming them and easily made its point without the collateral damage. And governments interested in keeping discourse alive need to take a step back from the edge of fascism by narrowing their definition of “unauthorized use” of computers to prevent minor instances of online civil disobedience from being classified as criminal offenses.

 

Nikki Williams

Nikki Williams is a bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

Crowdsourcing and Campaign Reform

 

With the 2016 presidential elections now on the horizon, there’s no escaping ‘the campaign’ – whoever’s that might be. The proliferation of political advertising across all media, from billboards to iPhones, subjects the voting public to a constant barrage of targeted advertising. The strongest campaigns, in an attempt to master cross-platform messaging, have been making innovative strides in marketing and media, which certainly adds a sense of novelty to an otherwise staid political contest. One example is the gradual introduction of digital crowdsourcing into the political process. On one hand, crowdsourcing is an essentially democratic notion; on the other, beneath the surface it is still exclusionary by nature. As many see it – including attorney, activist, and Harvard Law Professor Lawrence Lessig – the tricks of the contest may be new, but the game is the same…and it’s only getting more expensive to play.

As a primary indicator of this fracture, Lessig, in a recent article on Medium, points to the issue of money in politics, which features prominently in debates and campaign promises but rapidly fades to the background once candidates take office. Indeed, there is a general consensus that money – and the associated political and corporate corruption – should be at the top of the president’s priority list, second only to “creating good jobs,” as Lessig cites from a 2012 Gallup poll. But in practice, an official taking on finance reform is setting herself on a road toward destruction.

“To take on the influence of money is not to take on one party, but both parties,” Lessig writes. “The enemy of Congress is a failed president. […] So on the long list of promises that every normal president enters office with, the promise of reform always falls to the bottom.”

Broken promises of campaign finance reform may not be the explicit fault of a hypocritical chief executive, but rather the result of attrition caused by the mechanics of American politics in general. This leads us to Lessig’s central claim: “That on the issue of fundamental reform, an ordinary president may not be able to lead.” Instead, Lessig proposes the idea of a trustee president, which he defines as a prominent, “well-liked leader” who declares her presidential candidacy on a single issue. After this issue is resolved, the trustee president would step down and hand over the reins to the VP for the remainder of the term. For finance reform, this trustee president “would use every power of the executive to get Congress to enact fundamental reform” – and then move on.

The idea of a trustee president is a unique proposition—perhaps even a ‘hack’ of the political system. “Our democracy will not heal itself,” Lessig writes. “Reform will not come from the inside alone. It needs a push.” Which is why, this past August, Lessig announced that if he raised $1 million by Labor Day, he would run for president on this exact model, in order to pass what his team has dubbed the Citizen Equality Act of 2017. As of Sept. 6, with money raised from 10,000 unique donations, Lessig is now attempting to run on the Democratic ticket (though according to his recent op-ed at Politico, the Democratic party isn’t being very receptive). The drama of Lessig’s thought-experiment-turned-real-experiment will be interesting enough to follow in the coming months, but what’s unique about his proposed Citizen Equality Act is that, in addition to being modeled on existing reform proposals, his will also “crowdsource a process to complete the details of this reform and draft legislation by the start of 2016” (emphasis added).

Crowdsourcing is a digital-age concept and term, though the notion of crowdsourcing in politics is a fundamentally democratic idea. However, the fact that crowdsourcing is a digital concept keeps its referents firmly grounded in the present, as Tanja Aitamurto suggests in her 2012 book, Crowdsourcing for Democracy. She defines crowdsourcing as “an open call for anybody to participate in a task open online, where ‘the crowd’ refers to an undefined group of people who participate.” This is in contrast to outsourcing, where a task is specifically assigned to a defined agent. Popular uses for crowdsourcing range from funding product or project development, as with sites like Kickstarter and IndieGoGo, to more refined applications, including urban planning, product design, mapping, species studies, and even solving complex technical or scientific issues. It’s only a small jump for crowdsourcing to be used for policy and reform, especially in democratic contexts, as Aitamurto demonstrates through various international case studies, including constitutional reform in Iceland, budget preparation in Chicago, and the White House’s We The People petition system.

Aitamurto writes: “When policy-making processes are opened, information about policy-making flows out to citizens. […] Opening the political process holds the potential to increase legitimacy of politics, and increasing transparency can strengthen the credibility of policy-making.” In an ideal or direct democracy, especially in a modern context, crowdsourcing just makes sense as a tool for mass communication and for encouraging public participation. Our increased reliance on technology for economic participation, communication and citizenship casts crowdsourcing as a natural outgrowth of our cultural evolution, and in this way it seems only natural that crowdsourcing would be applied to politics. But ethically, crowdsourcing’s promise of cross-platform policy participation is not so equitable, especially when we begin to account for income and literacy as prerequisites for entry into the digital crowdsourcing process.

A few examples of how crowdsourcing is exclusionary, despite its ideal democratic applications: the person without a smartphone, the family without an Internet connection, or the digitally illiterate citizen (i.e., the person who has never sent an email or used a computer). You can’t help crowdsource if you’re not part of the crowd; that is, if you’re on the wrong side of the digital divide. Of course, even before the digital age, income and literacy had long been limiting factors for democratic participation; they’ve simply found new media for the modern age. On this point, Aitamurto clarifies: “Crowdsourcing […] is not representative democracy and is not equivalent to national referendum. The participants’ opinions, most likely, do not represent the majority’s opinion.”

The natural limitations of the crowdsourcing process, as Aitamurto suggests, can be read as a downside of crowdsourcing in a democratic context, but if that democracy is broken – as Lessig and many others say it is – a crowdsourced tactic, despite its ethical complications, might be exactly the sort of push a broken democracy needs. But in order for it to really work, everyone needs to agree to the plan and trust that its leaders will deliver on their promises. A task like that will take a lot of work, supercharged rhetorical finesse, and a massive amount of popular traction that Lessig currently lacks; a Suffolk University/USA Today poll from Oct. 1 lists a mere 0.47 percent of respondents stating support for Lessig. His plan makes sense, and the notion of entrusting the trustee with a crowdsourced reform is a novel reflection of idealized democratic values, but novelty in the political process is a mixed bag, especially in the midst of a pay-to-play political paradigm where not even Lessig could get a foot in the door without a cool million. In this way, crowdsourcing, despite being a novel digital-age concept, might simply be more of the same.

 

Benjamin van Loon

Benjamin van Loon is a writer, researcher, and communications professional living in Chicago, IL. He holds a master’s degree in communications and media from Northeastern Illinois University and bachelor’s degrees in English and philosophy from North Park University. Follow him on Twitter @benvanloon and view more of his work at www.benvanloon.com.

Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World

 

Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World.

by Bruce Schneier

Review by: Owen King

We constantly interact with computers.

Computers generate data.

Data is surveillance.

Surveillance curtails privacy.

Privacy is a moral right.

This combination spells trouble, and we had better think about how to deal with it. This, in essence, is the message of Bruce Schneier’s book, "Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World", published earlier this year.

Schneier is a well-known expert on computer security and cryptography. But "Data and Goliath" is not a technical book. Instead, it presents an analysis of the social and ethical implications of new information technology, and unabashedly offers prescriptions for reform. Though Schneier’s analysis is both striking and informative, his prescriptions and justifications are less compelling. Nonetheless, even these latter elements serve Schneier’s overarching purpose of advancing the conversation about the prevalence of surveillance and the role of big data in our lives.

"Data and Goliath" has three parts. Part One, “The World We’re Creating,” is about the state of information technology and how governments and other large organizations use it to monitor us. Part Two, “What’s at Stake,” articulates values, including liberty, privacy, fairness, and security, and argues that the technology described in Part One threatens each of these. In response to these concerns, in Part Three, “What to do About It,” Schneier lays out his prescriptions for governments, corporations, and individuals.

Part One is the most successful of the three. Schneier depicts current technology and its capabilities in a way that should impress any reader, even those who understand the technology only superficially. The crucial idea is that the massive data sets involved in what is now known as “big data” are the quite natural, perhaps inevitable, result of computation, and these data sets constitute a practically unlimited source of personal information about us. This data is generated as we interact with electronic devices that are now completely ordinary parts of our lives: our phones, our televisions, our cars, our homes, and, of course, our general-purpose computers. This occurs with every tap on a mobile phone app, every web page request, and so on; all these interactions create data. Hence, Schneier urges us to think of data as a by-product of computing.

Regarding the potential usefulness of big data, Schneier remarks, “That’s the basic promise of big data: save everything you can, and someday you’ll be able to figure out some use for it all.” Combining this remark with the point about data as a by-product of computing, we might formulate an especially informative definition of big data as follows: Big data encompasses the large, continually growing data sets that are a by-product of computation, along with the methods and tools for collecting, storing, and analyzing them, such that the data can be used for purposes beyond those that guided its collection. This conception, implicit in Schneier’s discussion, answers both the why? and the for what? of big data. In contrast, the standard definitions of big data put in terms of “Three V’s” or “Four V’s” or however many V’s merely list the general characteristics and uses of big data systems.

This puts us in a position to see why big data is such a big deal. The data constantly rolls in, and with some creativity, along with the possibility of joining data sets from different sources, it can shed light on the minutest details of our lives. (Schneier provides many clear examples of this.) The attractions for intelligence agencies and corporate marketing departments are obvious, and we are increasingly living with the results. Over the six chapters in Part One, Schneier describes the developments of the consequent big data revolution in gruesome and captivating detail. Most readers will come away convinced that their lives are far less private than they thought. Reading these chapters would be worthwhile for anyone with even minimal curiosity about the political, economic, and social effects of technology.

Part One is likely to induce at least a vague feeling of fright or anxiety in most readers. Part Two attempts to justify this uneasiness by explicitly appealing to ethical principles and values that big data seems to threaten. A highlight is the chapter called “Political Liberty and Justice,” which emphasizes the so-called “chilling effects” due to ubiquitous surveillance. Schneier compellingly explains how constant surveillance may dissuade us from engaging in some of the morally permissible and, in many cases, legal activities we would otherwise choose. Surveillance thus inhibits us, effectively reducing our liberty. As Schneier recognizes, this was the idea behind philosopher Jeremy Bentham’s famous panopticon—a prison designed to ensure compliance and conformity through (at least the appearance of) constant surveillance. Other useful observations come in the chapter on “Commercial Fairness and Equality,” in which Schneier points out ways in which surveillance through big data facilitates discrimination against individuals or groups.

Unfortunately, the weakest chapters of Part Two are those on which the most depends—viz., the chapters respectively entitled “Privacy” and “Security.” In order to justify his eventual prescriptions for limiting the collection and use of big data, it is crucial for Schneier to show that current big data policies are incompatible with a valuable sort of privacy, and furthermore that the losses in privacy due to big data are not outweighed by the increased security it helps provide. Schneier’s book falls short of a convincing case for either of these claims.

Schneier’s treatment of privacy is provocative, but it will likely be unconvincing to anyone not already on his side. Schneier’s view is that surveillance constituted by massive data collection—regardless of how the data is eventually used—is a serious privacy violation and, hence, constitutes harm. But Schneier does not theorize privacy in enough depth for us to see why we should agree.

Unsurprisingly, there already exists an extensive literature—spanning law, philosophy, and public policy—on privacy and information technology. Though much of that literature is not informed by the level of technical knowledge Schneier possesses, it offers some theoretical nuance that would have helped Schneier’s case against surveillance. If bulk data collection by computers is indeed a privacy violation, it is quite different from, say, an acquaintance listening in on your phone calls. Some state-of-the-art work on privacy, which dispenses with the public/private dichotomy as a tool of analysis, would put Schneier in a better position to address this. For instance, Helen Nissenbaum’s theory of privacy as contextual integrity understands privacy concerns in terms of violations of the various norms that govern the diverse social spheres of our lives. Such a theory may provide resources to better distinguish surveillance that is genuinely worrisome from more benign varieties. Schneier pays lip service to Nissenbaum’s idea that privacy concerns depend on context, but this acknowledgment is not reflected in his scantily justified, though adamant, insistence that surveillance through massive data collection is itself a violation of human rights.

Regardless, technologists, philosophers, and other thinkers should all put more thought into the open question of how our concern for privacy bears on massive data collection practices. It is no criticism of Schneier to say that he has not resolved this issue. However, without more headway here, some of Schneier’s policy prescriptions are less than convincing.

Like his treatment of privacy, Schneier’s discussion of security feels underdeveloped. Schneier’s central claims are that privacy and security are not in tension, and that mass surveillance does little to improve our security. On the former point, he makes some interesting observations about ways in which privacy and security can be mutually reinforcing. Also convincing is his explanation of why designing computer systems to allow surveillance makes them less secure. Furthermore, Schneier nicely lays out a case for the increasingly accepted claim that mass surveillance is not very effective in predicting acts of terrorism. But these points do not suffice to show that pitting privacy concerns against security concerns imposes “a false trade-off.”

Now, Schneier is indeed right to argue that predicting acts of terrorism with enough precision that we can stop them before they occur is nearly impossible. Schneier cites three factors to account for this: First, predictions culled from mining big data have a high rate of error. Second, acts of terrorism do not tend to fit neat patterns. And, third, terrorists actively attempt to avoid detection. We can accept this and still wonder: What about the would-be terrorist who wants to hurt people but also wants to avoid being caught? The more data we have collected and stored, the harder it is for anyone to do anything without leaving a digital trail. Thus, big data enhances our forensic capacities. And, because of this, mass surveillance may have the effect of deterring would-be terrorists. Furthermore, in the event of a terrorist act, a large intelligence database makes it easier to discover any infrastructure—whether technological or social—that the terrorists left behind, and finding this may help prevent future attacks. Whether these benefits are enough to justify the mind-bogglingly extensive intelligence programs revealed by the Snowden documents is a further question. The present point is simply that Schneier has not addressed all of the security-related reasons that might lead one to favor mass surveillance, at the expense of some kinds of privacy.

Arriving at Part Three, we find a laundry list of proposals for reform, all consonant with the ethical outlook espoused in Part Two. And it does read more like a list than like a unified platform. Most of the proposals receive only a page or two of discussion, which is not enough to make convincing cases for any of them. But that is not to deny the value of this part of the book. Like-minded readers (and even many dissenters) will peruse Schneier’s prescriptions with interest, finding in them possibilities worthy of more thorough scrutiny, development, and discussion. This may be just what Schneier intends; those are the discussions he hopes we will be having more often.

Schneier’s list of proposals includes reorganizing the U.S. government’s intelligence agencies and redefining their missions, increasing corporate liability for breaches of client data, creating a class of corporate information fiduciaries, and encouraging more widespread use of various privacy-enhancing technologies, especially encryption. This is only a fraction of the list, and, again, many of the ideas deserve to be taken seriously.

Schneier’s proposals prompt us to think in concrete ways about the costs and benefits of big data in the present and for the future. Big data promises huge gains in knowledge, but sometimes at the expense of a sort of privacy Schneier considers indispensable. Despite the many trade-offs, at least one of Schneier’s proposals ought to be a fairly easy sell for most people. That is the push for more widespread use of encryption.

Nothing about the use of encryption inherently precludes the existence of the continuously accreting data sets that characterize big data. If I am shopping on Amazon’s website over an encrypted connection, Amazon can still collect data about every product my pointer hovers over and every page I scroll through. It is just that third parties cannot see this (unless they are granted access). Such encryption is, of course, already standard for web commerce. But, in principle, all of our digital communication—every text, email, search, ad, picture, or video—could be encrypted. Then, at least ideally, only those whom we allow could collect our data. Thus, we gain some degree of control over how we are “surveilled” without severely quelling big data and its benefits. Of course, this would make it much harder for intelligence agencies to keep tabs on us, which is why some government leaders wish to limit encryption.
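
To make the point concrete, here is a minimal sketch of my own (not an example from the book) showing symmetric encryption with the third-party Python cryptography package. The event payload, key handling, and names are invented for illustration; the point is simply that an eavesdropper sees only opaque ciphertext, while the party we have given the key can still collect and analyze the data.

# A minimal sketch of symmetric encryption with the Python "cryptography" package.
# The payload and key handling are invented for illustration only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # a secret shared with the party we trust
cipher = Fernet(key)

# The kind of behavioral event a retailer might record as we browse.
event = b'{"user": 123, "hovered_product": "example-item", "seconds": 4}'

ciphertext = cipher.encrypt(event)      # what a third party on the wire sees
plaintext = cipher.decrypt(ciphertext)  # what the key holder recovers

print(ciphertext)   # opaque bytes, useless without the key
print(plaintext)    # the full event, still available for analysis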

Overall, the strength of just Part One of "Data and Goliath" is enough to make this book worthy of an emphatic recommendation; it offers a stunningly rich understanding of the possible applications of big data and a visceral sense of some of its dangers. The other two parts are also stimulating, and they provide a helpful starting point for responding to the issues raised in Part One. The appeal of the book is broadened further by the extensive notes, which make it a valuable resource for academic researchers.

Of course, books on technology and society rarely stay relevant for long, since the technology advances so quickly. In spite of this, due to its detailed exposition and the pointed way it frames choices about our relationship to big data, "Data and Goliath" should be quite influential for the foreseeable future.

 

Owen King

Owen King is the NEWEL Postdoctoral Researcher in Ethics, Well-Being, and Data Science in the Department of Philosophy at the University of Twente. His research is primarily focused on well-being, from both theoretical and practical perspectives.  He also investigates ethical issues raised by new computing and data technologies.

Digital Kidnapping—A New Kind of Identity Theft

 

It’s no surprise that new threats to personal security and privacy crop up as online communities change and grow. We’ve known about "sharenting" for a while—the tendency of parents to share every milestone in their child’s life online makes lots of personal information about children readily available to people looking to nab a new identity. But now there’s a new game in town: digital kidnapping. Digital kidnappers take screen shots of pictures posted on social media, blogs, and other online sites and use them for various activities, the most prevalent of which is online role-playing. Online role-playing has been around for decades, but only recently has it sparked outrage when a subgroup of this community, baby role-players, began stealing and repurposing online photos for their game.

Some members of the baby role-playing community are snapping up images of children on photo-sharing sites such as Instagram, Flickr, and Facebook, as well as various blogs, to use as avatars or virtual children in their game. Players either pretend to be the child or claim the baby as their own and assign friends and other players to act as online family members to the child. There are even virtual adoption agencies where a role-player can request a youngster with a distinct look, which the “agency” seeks to fill by finding a matching image online. Participants search the #babyrp hashtag to find new “babies” for adoption or to get chosen as a family member.

Psychologists theorize that many of these players are teens and tweens from less-than-optimal home situations who are fantasizing about having the perfect family. When interviewed by Fox News, child psychologist Dr. Jephtha Tausig-Edwards explained why these children act out such fantasies online: "They're going to do this maybe because they're bored, they're going to do this maybe because maybe they want some attention," Tausig-Edwards said. "They're going to do this because perhaps they really are a little envious and they would like that beautiful child to be their own.”

Other psychologists, like Dr. Justin D'Arienzo, admit that there are darker reasons why someone might be interested in these types of pictures. The internet has become a haven for fetishists and others who practice socially deviant behaviors, including those that require children or some element of childhood for their personal fulfillment or sexual gratification. And although the children themselves are not directly involved, the photos in many cases have been recontextualized to play out a dark or abusive fantasy. For example, a recent thread of comments on an Instagram post regarding a baby boy has one commenter asking if he or she “can have a private with a dirty baby.”

However, one of the most recent cases of digital kidnapping didn’t involve role-playing in game form. Instead, an adult male from New York, Ramon Figueroa, stole online photos of a 4-year-old girl from Dallas and posted them on his Facebook page, claiming she was his daughter. He posted numerous pictures of the little girl, with the action in each shot lovingly described by the doting “father.” Some of the captions he wrote under the pictures of the little girl were, “Girl version of me,” and “This is how she looks in the morning…she said daddy stop (taking pictures).” After being contacted by the girl’s mother about his use of the photos, he promptly blocked her from seeing his page.

Unfortunately, there is currently no law against pretending someone is related to you. This little girl’s mother had only one option: to file a complaint with Facebook, which initially met with little success. Dismayingly, Facebook merely confirmed that Mr. Figueroa’s profile met their standards and, as such, there was nothing that could be done about the pictures if he didn’t voluntarily take them down. However, after being contacted by news media, Facebook relented and agreed to remove posts of this nature as parents report them.

Shockingly, this type of photo theft doesn’t seem to violate the policies of most social sharing sites. Instagram (now owned by Facebook) explicitly states in their policy that when you post content through their system: “…This means that other Users may search for, see, use, or share any of your User Content that you make publicly available through the Service, consistent with the terms and conditions of this Privacy Policy and our Terms of Use (which can be found at http://instagram.com/about/legal/terms/).” So parents, beware: Whatever you post online through these channels can be reposted at will by anyone who can see it.

In response to the laissez-faire attitude of social media websites regarding these stolen photos, concerned parents got together and launched a petition at change.org. The hope was to force Instagram to close down all baby role-playing accounts, but either due to lack of publicity or lack of interest, it garnered only 1,047 signatures. Of course, shutting down #babyrp accounts won’t do much to curb other types of digital kidnapping that are cropping up worldwide. A recent investigation by Scotland’s Sunday Post uncovered numerous instances of online photo theft. Over 570 selfies of Scottish girls, more than 700 selfies from girls in Northern Ireland, and thousands from young girls around the U.K. had been stolen and uploaded to a porn site. The girls were often in their school uniforms, but there were some instances where skin or underwear was showing. When confronted, the representative for the website denied that the images existed, and because the site was out of the country, there wasn’t anything further that could be done.

Another young British girl had her personal images stolen from her social media account and posted on a website that offered “hot horny singles in your local area.” When her photo popped up on a sidebar advertisement for the sex site on a friend’s computer, he called to let her know that her pictures were being used. She has since updated her Facebook privacy settings in the hopes of preventing future occurrences.

Until this issue gets more attention from legislators and stricter privacy regulations are implemented, you are the first line of defense against this kind of identity theft. Fortunately, there are things you can do to protect yourself or your loved ones from digital kidnapping.

The first and most foolproof option is to stop posting pictures online. However, if you do choose to share them, you should monitor and adjust your privacy settings so that only people you know have access. Alternately, you can choose a privacy app, such as Kidslink, that allows parents to determine who sees their photos across social media programs. For those who refuse to curtail their online sharing, there are also apps that will watermark your images to deter would-be photo borrowers. Another critical protective step for people who wish to continue unrestricted photo posting is to turn off the geolocation option on images so that they do not reveal the real-world location of your child.
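
For readers comfortable with a little scripting, here is one hedged illustration of the geolocation advice: re-saving only an image’s pixels discards its embedded metadata, including any GPS coordinates. The sketch below uses the Pillow library, and the file names are placeholders.

# Strip metadata (including GPS location) by rebuilding the image from its pixels.
from PIL import Image

original = Image.open("kid_photo.jpg")            # photo straight from the phone
clean = Image.new(original.mode, original.size)   # same pixels, no metadata
clean.putdata(list(original.getdata()))
clean.save("kid_photo_no_exif.jpg")               # safer version to post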

Finally, if you’ve previously posted pictures without putting privacy protections in place and you’d like to see if any of them are being used without your permission, do a reverse image search on your photos. You can use a site like TinEye that offers this service for free, or you can drag an image from your computer into the search box on Google Chrome or Firefox to reverse search. You can also go to images.google.com, drag an image into the search bar and press enter. Any website on which the image appears will come up in the search results, as well as visually similar images.
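
Reverse image search services work by comparing compact “fingerprints” of images rather than exact pixels, which is how they catch resized or recompressed copies. As a rough, purely illustrative sketch of that idea (not what TinEye or Google actually run), the Python imagehash package can compare a photo you posted against one you found elsewhere:

# Compare perceptual fingerprints of two images. File names are placeholders.
from PIL import Image
import imagehash

posted = imagehash.phash(Image.open("my_post.jpg"))
found = imagehash.phash(Image.open("found_elsewhere.jpg"))

# A small Hamming distance between fingerprints suggests the same picture,
# even after resizing or recompression. The threshold of 8 is illustrative.
if posted - found <= 8:
    print("Probable match: this looks like a reused copy of your photo.")
else:
    print("Probably a different image.")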

Ways to steal personal information are quickly outpacing protective measures granted to internet users through general legislation or attempts at self-governance by internet entities, such as Facebook and Twitter. The deficiency of guidelines regarding the acquisition of posted photos leaves the onus of providing identity protection, particularly to minors, firmly in the hands of parents. Parents should take the time to fully understand and consider all of the ramifications of posting photos online, including reading and comprehending the privacy policies of each online forum they use. While setting up appropriate safeguards is important, it is also critical to police the distribution of the photos and information around the internet through reverse image searches so images acquired and used without permission are found quickly. The earlier a child’s photo is removed from an unknown site, the more protection that child is afforded from repercussions in the offline world.

 

Nikki Williams

Bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

The Ethics of Video Streaming Apps

 

Syncing video files with iTunes can be difficult and inconvenient for consumers. In response, several apps have come to the forefront for effective video streaming. Of course, Netflix and Hulu Plus are leaders in the market, and UStream is one of the biggest live-streaming apps. Air PlayIt, AirVideo and Crackle are a few options that don’t require syncing, and video can be played offline. Plex, Slingbox, VLC Streamer and SnagFilms are additional options previously named by Mashable as top choices for watching video, with varying levels of success.

Easy Access

According to DaCast, a web-based video streaming platform, by the end of 2015, there will be 2 billion smartphone users worldwide. In other words, a quarter of the entire human population on earth will have a way to be internet-connected at any time from anywhere. In 2014, a shocking 53 percent of all video was viewed on mobile devices, and that number is expected to grow to 69 percent by 2018.

DaCast tech writer Phillene Managuelod feels video streaming is becoming more mainstream. “Smartphones have also become mini-TVs with larger and more high-definition screens,” Managuelod says. “People will be on the train watching their TV shows or they will be following the live stream of the Giants vs. Dodgers. People are now more keen to watch what they want when they want it. It’s the same thing for broadcasters – they can now broadcast video content from wherever they are through their smartphones.”

Piracy and Privacy

Although many services, like Netflix, Hulu Plus and Amazon Instant Video, are paid for with monthly fees, the majority of streamed video is free. Often referred to as “shareware,” this free content can make it difficult for viewers to even know whether they are breaking a law when watching a movie or a sports event. With all the available accessibility for people to stream video on demand, what are the implications for digital ethics?

Piracy has become a major issue. Recently, Twitter’s streaming app Periscope enabled users to stream a boxing match between Floyd Mayweather and Manny Pacquiao without paying the $100 pay-per-view fee. HBO and Showtime had asked Periscope, Meerkat and other live-streaming apps to disable viewing during the fight, but it’s hard to prevent Periscope users who have paid for the fight from streaming it to their followers. Major sports leagues are grappling with the fact that thousands of stadium visitors can broadcast games live from their phones. Clearly, it would be nearly impossible for the FBI to pursue or prosecute the millions of illegal views of the fight or other protected live content.

Privacy issues in live streaming are a concern for many tech users. While the FBI has the ability to use technology to flush out the video streaming habits of law-breakers, it is generally known to do so only in cases involving child pornography. According to Wired, the FBI has been quietly experimenting for several years with “drive-by hacks” of the powerful Tor anonymity system (free software used to redirect Internet traffic to conceal a user’s identity) to solve one of law enforcement’s most prevalent Internet problems: identifying and prosecuting the operators of criminal websites.

The FBI’s use of anonymity software to hack websites creates controversy because of the potential for innocent parties to wind up infected with government malware due to an inadvertent website visit. American Civil Liberties Union technologist Chris Soghoian, an expert on law enforcement’s use of hacking tools, told Wired that there should have been congressional hearings on the government’s policy of monitoring streaming: “If Congress decides this is a technique that’s perfectly appropriate, maybe that’s OK. But let’s have an informed debate about it.”

Blurred Lines of the Law

What’s legal and not legal about streaming? The lines can be blurry.

Downloading even partial files — known as "pseudo-streaming" — creates copies of copyrighted material, which is illegal. Streaming content as a "public performance" (viewed by a significant number of people besides family and friends) is also a copyright violation.

Watching content like TV shows, movies and sporting events is commonly done regardless of the legality. Sites that host videos generally create layers of links to hide their identities from the FBI. These secondary online streaming sites often disguise themselves as search engines for content because, under the "inducement rule," they can be accused of copyright infringement only in limited circumstances. The inducement rule refers to a test created in a 2005 Supreme Court ruling stating that a company or website can be held accountable for distributing unlicensed content only if it clearly encourages users to infringe a copyright.

Business Insider recently explored the issue of live broadcasting video to Meerkat or Periscope, noting:

“Live-streaming video has suddenly gotten easier than ever before—and as is the case every time social media takes a leap forward, a host of practical and ethical questions about using technology during times of tragedy have presented themselves.”

Is It Wrong to Broadcast Tragedy?

So what about the ethics of broadcasting video? If everyone with a smartphone can broadcast live to millions of viewers, how does that change journalism and privacy issues? Meerkat and Periscope users recently broadcast a partial building collapse in New York that caused the deaths of three people. This instant-streaming technology essentially makes it possible for people to film the deaths of other people, broadcast them live, and potentially have family members learn of the death of their loved one on Twitter.

At Fast Company, associate news editor Rose Pastore participated in watching and streaming the recent New York building collapse in which 19 people were injured. Although she reported that “some people on Twitter took issue with the use of live-streaming apps during the event, arguing that it was distasteful or unethical to do so when people's lives were still in danger,” she also noted, “it is human nature to want to share these kinds of stories…it is not a bad thing that people are interested in what is happening to strangers across the world.”

Several news outlets, including CNN, ran live streaming coverage of the recent tragic earthquakes in Nepal. But what if live-streaming video apps had been available on 9/11? News coverage depicting people jumping from the World Trade Center buildings was controversial enough. Video replays of the tragic images haunt us year after year on the anniversary of America’s darkest day. Imagine if hundreds of live feeds of burning and collapsing buildings and of catastrophic human loss had also been running live on social media.

Who’s Tuning In?

Periscope and Meerkat are only a few months old. In the large scheme of things, they don’t have that many users yet. (Daily Dot snarks: “There are games you’ve never heard of, services you’ve never used, and apps dedicated to nothing but producing fart noises that have more downloads and users than Meerkat and Periscope do.”)

Daily Dot asks:

“These apps feel like big things, like they matter. And there’s no question that Meerkat and Periscope will serve important purposes going forward. Even though live streaming isn’t new, it’s never been this easy. But does anyone outside of the obsessive tech community truly even care?”

However, the implications of streaming live video moving forward are interesting at a minimum, particularly in regard to ethics. With recent police brutality incidents in the headlines and the fact that the average American is caught on camera 75 times a day, there’s no doubt our society is moving toward an Orwellian, “Truman Show”-esque world: a world in which people both expect to appear on camera and, in many cases, expect to be the ones holding the cameras.

 

Mary T McCarthy

Mary McCarthy is Senior Editor at SpliceToday.com and the creator of pajamasandcoffee.com. She has been a professional writer for over 20 years for newspapers, magazines, and the Internet. She teaches classes at The Writer’s Center in Bethesda, Maryland and guest-lectures at the University of Maryland’s Philip Merrill College of Journalism. Her first novel, The Scarlet Letter Society, debuted this year, and her second novel releases in 2015.

When Sharing Isn’t Caring

 

Are you one of those parents who posts an endless stream of toddler meltdown photos, blogs about your teen’s milestones or tweets endlessly about the daily minutiae of your children’s lives? If so, you’re not alone. A recent study of U.S. parents noted that about two-thirds of mothers use Facebook and, of these, 97 percent have posted pictures of their children. Around 89 percent have posted status updates regarding children, and a whopping 43 percent posted videos. Around half of these parents have even shared more personal details such as special occasions or family trip information. It’s apparent that technology is making an impact on the way parents share information about their children. Today’s generation of parents is made up of people who’ve grown up with technology and are at ease sharing information in an online community. But we have yet to see the full impact this sharing has on the children whose young lives are being played out on an international medium—the internet.

Children no longer enjoy the protections of an anonymous childhood. The “seen and not heard” child of yesteryear has morphed into a modern celebrity, with 90 percent of American children possessing an online history by the time they turn two. Besides the social ramifications of growing up immersed in the internet, there is another, darker issue lurking beneath the surface of seemingly innocuous online sharing. Every bit of data about your child contributes to a profile of him or her that can have long-term consequences for that child as an adult. Big data companies collect information regularly and glean what they can from social media postings, blogs, and other online venues to get a picture of a child’s likes, dislikes, habits and more. As these children grow older and become willing participants in their online biography, the data that companies can access becomes more refined. By age five, half of these children are regular users of computers or tablets and by the time they’re eight, most children have cell phones and have plugged into the world of video games. Because they’ve grown up seeing parents share intimate details of their personal lives on Facebook, Twitter, Instagram and blogs, they have no reason to think in terms of protecting their information. Their parents have modeled for them a cavalier attitude toward internet privacy.

And a cavalier attitude is a dangerous thing to have online. A Wall Street Journal (WSJ) study looked at fifty popular children’s websites and found they have 30 percent more tracking tools installed than comparable adult websites. In fact, the group as a whole used a total of 4,123 pieces of tracking technology. One site alone installed upwards of 240 tracking tools, mostly from advertisers on the site. This information is gathered and used to build profiles that can include location, race, age, hobbies, and more. Compiling and selling this type of data is legal, but contentious when minors are involved. Two companies were identified by the WSJ as selling data on teens, although they initially denied doing so. With the influence of children tied to billions of dollars in annual family spending, there is a strong impetus from advertisers to develop profiles of children’s online shopping habits.

There is also the danger of predators and identity thieves getting a hold of your child’s information. Tony Anscombe, of the internet security firm AVG, predicts identity theft will be on the rise as we increase the amount of information we share about our kids online. He notes that there have already been reports of teens in the U.S. going to apply for a driver’s license only to find that someone has already used their identity for that purpose.

People who share information about kids should learn to share responsibly in order to keep both predators and thieves at bay. Some examples of good ways to protect your child’s identity are:

- Don’t give identifying physical characteristics such as birthmarks, birth defects, physical or mental issues, or even height and weight.

- Don’t give out genealogical details. Often mothers’ maiden names or some combination of ancestral names are used for access to password-protected sites. Yes, your genealogy is probably a matter of public record, but don’t make it easy.

- Don’t ever publish a child’s full name, birth date or Social Security information online or in an email.

- Don’t ever tag a child in a geographic location.

Unfortunately, there are many cases where parents can’t seem to be good advocates for their children, so the government has stepped in to help—but not too much. Currently, there is a single federal law that limits the type of data that can be collected about kids, the Children’s Online Privacy Protection Act or COPPA. Since its inception in 1998, this law has required that sites directed at children under the age of 13, as well as general audience sites, obtain parental permission before collecting, selling or divulging the personal information of a minor. A child’s personal information includes names, home addresses, email addresses, phone and Social Security numbers.

Some sites try to avoid COPPA requirements by prohibiting children under 13 from viewing the site, but this is often a false failsafe, as many children simply fib about their ages to gain access. Another bill is on the horizon: the Do Not Track Kids Act, a bipartisan effort that aims to protect children from data-mining operations and was recently reintroduced in Congress. The only state legislation regarding this issue is provided in California, where Senate Bill 568 was signed into law in 2013. This law provides children under 18 with an “eraser button” that lets them delete information and posts that they regret. The problem is that it doesn’t allow them to remove posts added by a third party, such as a friend or parent. And that’s the key to the larger problem: Companies may still collect information on children that parents are releasing through social media sharing sites.

As adults, we are responsible for many of the decisions in our children’s lives until they reach 18, the age of majority. Until then, they must have an adult’s permission to do many things in their young lives. This type of supervision is essential, since minor children do not have the maturity and life experience necessary to make long-term decisions that may affect the rest of their lives. Parents who tell reams of stories online about their children’s successes and failures, share their behavioral or physical quirks, or publish thoughts about their children are making the choice to share that information on behalf of their child. While it may seem harmless to post a photo of your three-year-old in the bathtub, it may be drawing the attention of predators. Combined with the tagging and geolocation abilities of most social media software, your sharing could be making your child more accessible to identity thieves, data profilers, and criminals.

Some information is just embarrassing. For instance, little Johnny might not care if you publish that he picks his nose and eats it when he’s two. It might be a different story when his college roommate digs that information up in a Google search 17 years later, making him the focus of jokes and bullying. Parents who blog about their troubled teen’s behavior may be setting them up for difficulty when prospective employers discover information regarding their marijuana habit online years later. Adverse online information can affect entry into colleges and universities, wreak havoc with relationships and, as mentioned, skew a potential employer’s opinion. If internet posts were ephemeral and removed within a short period, it probably wouldn’t be much of a problem. However, in today’s current environment, posts, pictures, and videos are available online for an indeterminate amount of time. Kids will have to deal with the consequences of their parents’ online sharing for the rest of their lives.

Aside from privacy, there is a second matter that children of online “over-sharents” face: The psychological bruising that comes from hearing yourself maligned or made fun of by your parents.

One blogging dad, Buzz Bishop, wrote a blog post where he discussed having a favorite child. How would you feel if you discovered, years into adulthood or worse yet, as a teenager, that you were not your parent’s favorite? It could cause you to re-evaluate your entire relationship with them. It could cause you to see certain childhood events in a new light. It could break your heart. Mr. Bishop defends his behavior, saying that his article “rings true” for many parents, and, since it is real life, we should be able to converse about it. While he may be correct, there is probably a pretty good reason why most parents don’t discuss it—they don’t want to crush the spirits of their “non favorite” offspring. Who would?

Emma Beddington, the blogger behind belgianwaffling.com, says that she tries to avoid writing anything about her children that would have mortified her growing up. I am not sure that any parent can know this. I have three children who are very individual when it comes to the things that bother them. How can Ms. Beddington possibly predict what will bother her kids at a later date? And for what purpose would she share her kids’ antics online? So that she can achieve notoriety for herself through her child-centered commentary?

In the end, as adults we are called to be protectors of our children. We should strive to preserve that which they may not be capable of safeguarding on their own: their identities, their personal safety and even their self-esteem. When we expose our children to the internet, even in a well-meaning way, we must consider the effects this information may have on their lives, present and future. We are presenting them, not just to a few well-meaning friends or family members, but to an entire world that may be eager and ready to take advantage of them. We must not rely on our government to do our job for us; the online industry is too young to have robust legal protections. Instead, we should limit discussions, posts, and pictures of children and be careful not to discuss their activities, location, or schedules. We must take any and all steps possible to prevent predators, whether in the form of corporations or individuals, from doing anything that could interfere with their bright futures.

Nikki Williams

Bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

The Hacked Sony Emails: to Publish or Not to Publish?

 

Few things are more tempting for today’s journalists than covering a computer hack like the one at Sony Pictures Entertainment. Its November online-security breach generated a huge amount of scandalous, delicious details on the topics writers generally love: salary information, celebrity gossip and intrigue.

Some background: Hackers, who some say were working for the North Korean government, broke into the computer systems of Sony Pictures Entertainment. Some estimates say the hack may have taken place in 2013 and continued through 2014. However, Sony became aware of it in fall 2014, and reports of the hack became news in late November. Through the computer-security breach, hackers obtained information about a variety of Sony activities, including employee salaries, celebrity email addresses and even medical records.

As a news organization, you have to cover the facts; a cyber attack on one of the nation’s largest media companies is front-page news. News outlets should mention that material was released. But is there a need to include details of who said what about Angelina Jolie? Ethically, there are good reasons why no self-respecting reporter should consider publishing any of that salacious stuff.

Sharing any of Sony’s data or details brings up questions about what is appropriate to print in light of today’s journalistic standards. If you print it, have you failed Journalism Ethics 101? Countless news organizations – both large and small – went ahead and printed materials such as personal emails and contractual agreements they obtained via the hackers. Sony’s lawyers asked media outlets, including Bloomberg, New York Post, Huffington Post and The New York Times to stop reporting on the leaked emails and stolen documents.

Both legal and security experts agree: There is little that Sony can do about the information that has been publicly shared. Even though Sony’s lawyers claim it will hold these news outlets “responsible for any damage or loss,” it is unlikely that they will be successful in recovering damages in a court of law. Legal expert Eugene Volokh noted in his well-regarded legal blog, The Volokh Conspiracy, that Sony pretty much is out of luck when it comes to legal precedent.

“Sony is unlikely to prevail — either by eventually winning in court, or by scaring off prospective publishers — especially against the well-counseled, relatively deep-pocketed, and insured media organizations that it’s threatening,” he wrote. “It seems likely that the publication of the documents isn’t likely to be tortious. And even if it can fit within some tort (such as the improper use of trade secrets, a tort that is sometimes said to apply to disclosers of illegally released information), the First Amendment would likely preempt the tort. One can argue that, when it comes to hacking, the only effective way to deter it and to minimize the harm caused by it is to ban third-party publication of the leaks.”

But just because news media would likely prevail in court does not, of course, mean that their actions were ethical. Not only did the hackers share information about Sony’s upcoming projects, such as the newest James Bond movie, they also released Sony executives’ emails. In one series of racially charged email exchanges, for example, executives mocked President Obama’s imagined movie taste. These emails damaged the reputation of the Sony executives who typed them, and if Paula Deen taught us anything, it’s that once you’re labeled a racist you cannot go back. However, these emails may be the most “newsworthy” of all the hacked items, mainly because they reveal a kind of “need to know” level of information about the company, its employees and its corporate culture.

Another positive outcome of this hack is that it revealed the pay disparity that still exists between actors and actresses in Hollywood. Charlize Theron found out she would make $10 million less than a male actor in a movie where they were costars of equal billing. Multiple reports say she negotiated a $10 million raise after the information was released. As a reporter who also found out after the fact that a male counterpart was hired in at thousands more than I was, I can relate to the Oscar-winning actress, if only on this level. This gender gap is outrageous, and Hollywood was rightly held accountable for it.

However, less newsworthy information was released as well and reaction against the media for so-called “over covering” the event came swiftly. For example, Oscar-winning screenwriter Aaron Sorkin criticized these publications and journalists in a New York Times op-ed on Dec. 14, 2014, stating that the vast number of publications writing about information revealed in the leaked documents are “morally treasonous and spectacularly dishonorable.”

Sorkin, an eloquent and powerful writer, made an arguably excellent point for not publishing the less-than-tasteful tidbits from the Sony hack:

“I understand that news outlets routinely use stolen information. That’s how we got the Pentagon Papers, to use an oft-used argument. But there is nothing in these documents remotely rising to the level of public interest of the information found in the Pentagon Papers. Do the emails contain any information about Sony breaking the law? No. Misleading the public? No. Acting in direct harm to customers, the way the tobacco companies or Enron did? No. Is there even one sentence in one private email that was stolen that even hints at wrongdoing of any kind? Anything that can help, inform or protect anyone?”

Nick Tobin is president of C-Net Systems, an information-technology company in Metro Detroit. The firm, which was founded in 1998, provides 24/7 security monitoring and maintenance for companies of all sizes in Michigan. The company also provides, as Tobin told me, “ethical hacking,” or security audits for businesses. In these hacks, C-Net Systems looks for holes in a company’s network so it can fix them.

Tobin said he was surprised when he first heard of the Sony cyberattack; it seemed highly unusual given the nature of the company’s business. His second reaction: This was a targeted attack that probably left Sony feeling used and abused. “Typically, hackers are after credit-card data. They’re not going after a movie company,” Tobin noted.

Companies of all kinds, whether they are as large as Sony or as small as a mom-and-pop shop in his hometown of Shelby Township, cannot sit idly by and wait for a cyberattack. Rather, Tobin notes that businesses need to be proactive and protect themselves. Just as salespeople might use an auto-dialer to find customers, hackers use automated means to find online addresses with holes in them. They don’t care if it’s a Fortune 500 company or a church, Tobin said. It’s all the same if the hacker can make a profit off of what he or she finds, he added.

And, for the record, even the IT guy agrees with Sorkin. If you publish that material, you’re not only giving the hackers the satisfaction of seeing Sony red-faced in the public eye, you’re also letting them win.

“Actual information downloaded from their network is their property and it shouldn’t be used. The only purpose is to damage Sony and that was the purpose of the hackers,” Tobin said. “You’re feeding the people who want to hurt the company. And it’s giving the hackers more fuel to do it again. … You cannot stop them from hacking, but not publishing those details may cause them to hesitate next time.”

Because Sony is unlikely to be able to stop the documents from being published, our efforts both as citizens and journalists to avoid further scandal-mongering are more important than one might think. So what can a journalist who is covering the Sony hack do? Well, the answer is layered. For starters, he or she should change his or her passwords. What does this have to do with covering a major cyber attack? It is the first important step in protecting your employer from a hacker, and it shows a concern for the greater good. For the record, I hate changing my online passwords. It’s time consuming, slightly annoying and I inevitably end up forgetting what I’ve done soon afterward. But it is worth doing if it saves your company from such a hack job.

Second, I’d rethink the kind of reporter I am if I started covering the gossipy conversations between Sony executives about what stars they like and which ones they obviously do not. There is no greater good being served here, and I’d like to think that my work has some larger purpose. There is nothing to be gained by playing the “Telephone Game” in your publication, and that’s pretty much all that was done by sharing what the hackers found. (Except in the instances discussed above.) Remember, the hackers had access to tens of thousands of documents. And yet they chose to release only the titillating pieces? That’s appealing to our base nature, and that’s not what true journalism should be.

Ethically, we should question the use of stolen material in our publications, whether it’s light gossip or not. For me, I wouldn’t print something that I knew had been illegally obtained, and I feel that the Sony hack comes too close to this line. If my boss said that my job depended on me writing up this sort of schlock, I’d rethink my career and whether I wanted to be part of such a news-gathering organization. Sure, a paycheck is a nice thing. But so is going to sleep soundly at night and knowing that I have set a moral example for myself, my children and young journalists. You’re only as good as your name in this business, and I value my name more than one stop in my career.

Here’s a moral to this story. If you’re going to cover these security breaches, at least don’t take so much glee in someone else’s misery. Headlines like “Jolie a ‘Spoiled Brat’ From ‘Crazyland’” in The New York Post definitely sell papers. I get it. But you’re going to earn years of mistrust from the organization that had the hack, all of its affiliates and Ms. Jolie in particular. It damages your reputation far more than a single headline is worth.

 

Karen Dybis

Karen Dybis is a Detroit-based freelance writer who has blogged for Time magazine, worked the business desk for The Detroit News and jumped on breaking stories for publications including City’s Best, Corp! magazine and Agence France-Presse newswire.

Training advanced image recognition systems and keeping AI accountable

 

Is that running woman trying to escape, or maybe chasing someone, or perhaps just trying not to be late for an appointment? Our answer might make a difference to our moral evaluation of the person’s behavior. In general, judgments about exactly what actions a person has performed can have significant moral implications. For this reason we have an obligation to make such judgments carefully and responsibly. Caution is required when we judge whether a person is arming a bomb or defusing it. The same goes for a judgment that a person is breaking into a building, as opposed to just repairing a window.

But what if it’s a computer making the judgments? How, then, can we ensure that the appropriate caution has been exercised? The issues here are complex from both ethical and technical standpoints. I want to draw our attention to a particular sort of hazard created by new image recognition technology, and I will have some suggestions about what precautions we should be taking. But, first, a bit of stage setting is in order.

Training image recognition systems

Among the problems driving research in computer vision are various kinds of recognition tasks. Given a photograph, we may wish to have a computer classify what kind of scene it is (like a desert or a grocery store) or identify what objects are pictured (like a camel or a cantaloupe). We might also wish to have the computer draw more nuanced conclusions—specifically about the relations among various elements in a photograph. We might like the computer to tell us that a camel is drinking from a spring or that a boy is adding a cantaloupe to his shopping cart. Advances in computer vision in the last decade have begun to make automated scene classification and object identification more practical. And, very recently, new research has made headway in a third sort of task. Some new image recognition systems can tell somewhat accurately how the objects in an image are related—reporting not just the what, but also the what’s happening. This progress is the result of combining two branches of artificial intelligence (AI) research: computer vision and natural language processing. The new image recognition systems integrate visual meaning and linguistic meaning in the same models, facilitating greater precision and subtlety in associating descriptions with images.
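
As a schematic illustration of how those two branches meet (and not a description of any particular published system), an image-captioning model typically pairs a convolutional visual encoder with a language-generating decoder. The sketch below uses PyTorch; every layer size and name here is an assumption made for clarity.

# Schematic sketch: a CNN encodes the image into a feature vector, and a
# recurrent language decoder emits a description word by word.
import torch
import torch.nn as nn
from torchvision import models

class CaptionSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18()                            # visual encoder
        cnn.fc = nn.Linear(cnn.fc.in_features, embed_dim)  # project to embedding size
        self.encoder = cnn
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).unsqueeze(1)   # (batch, 1, embed)
        words = self.embed(captions)                # (batch, length, embed)
        inputs = torch.cat([feats, words], dim=1)   # the image "primes" the decoder
        hidden, _ = self.decoder(inputs)
        return self.to_vocab(hidden)                # scores over the vocabulary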

At a basic level, recent innovations notwithstanding, the new AI systems operate on the same principles as their predecessors. The first step is usually to feed the systems large sets of data. It is from this training data that a system “learns” (i.e., creates a rich model of the data). It is only once some learning has taken place that the AI system becomes useful. (Whether the learning process continues once the system is in operation depends on the specific system and its implementation.) In the case of image recognition, the training data consist of scores of images paired with textual descriptions. Different data sets include different images, and the form of the descriptions may vary as well—from single-word descriptions to multi-sentence paragraphs.
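
To make the shape of that training data concrete, here is a hedged sketch of a dataset of image/description pairs in the spirit of the examples above; the file names, captions, and image size are all invented for illustration, and a real system would also tokenize the text.

# A minimal dataset of (image, literal description) pairs.
from torch.utils.data import Dataset
from torchvision import transforms
from PIL import Image

class CaptionedImages(Dataset):
    """Pairs each image file with a plain, literal description of its contents."""
    def __init__(self, pairs):
        self.pairs = pairs                                    # list of (path, caption)
        self.to_tensor = transforms.Compose(
            [transforms.Resize((224, 224)), transforms.ToTensor()])

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        path, caption = self.pairs[idx]
        image = self.to_tensor(Image.open(path).convert("RGB"))
        return image, caption

# Invented examples in the spirit of the text.
data = CaptionedImages([
    ("camel.jpg", "a camel drinks from a spring"),
    ("market.jpg", "a boy adds a cantaloupe to his shopping cart"),
])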

A computer vision system is only as good as the data used to train it. Ultimately, my ethical concern is about the provenance of training data and how that affects the output of the system. It would be all too easy for an organization to come to rely on a computer vision system that, because of the data used to train it, is not appropriately trustworthy.

So, let’s think about where training data—sets of images paired with descriptions—might be available. It is tempting to think we have an embarrassment of riches. The Internet, from professional media outlets to social media, provides a never-ending stream of captioned images. It is Big Data par excellence. Consider how e-commerce websites like Amazon and eBay analyze their unceasing streams of consumer behavior data in order to train their systems to make more intelligent product recommendations. Similarly, to train our image recognition systems, one might think that we just need to point them at the streams of captioned photos that perpetually pour from the likes of Twitter, Flickr, Instagram, etc.

But a bit of reflection shows this approach is a non-starter. After all, why do people caption images in the first place? The goal is certainly not to give plain and literal, yet comprehensive, descriptions of the contents of the photos. Instead the goal is to tell us about the things not pictured—like important background information—that make the photo interesting. If a photo shows a chemist in her lab, the caption is likely to say who she is and what she studies. It will not say anything like this: “A woman with goggles and a white coat lifts a glass vessel containing a blue liquid.” Such a caption would be useless to us; we can notice all this (and much more) from a quick glance at the photo. But this is exactly the kind of caption we need paired with our image if it is to be part of our training data. The point, then, is that the training data we need for image recognition systems—unlike paradigmatic big data applications in which the relevant data sets continually accrete through the everyday course of events—must be artificially created and collected.

Artificial creation of training data is a daunting task, but it’s not quite as difficult as it might initially seem. Researchers can simply hire people to describe photos. And with crowdwork services—like Amazon’s Mechanical Turk (MTurk)—which crowdsource the completion of large sets of microtasks, it is fast and inexpensive to create large sets of training data. (The platforms that organize and profit from crowdwork raise other ethical concerns, since many of the actual workers make less than minimum wage. However, at least for present purposes, we can set these worries aside.) Researchers can define tasks and advertise them within MTurk, and then human workers (the “Turkers”) find them and complete them. Back in 2009, computer vision researchers at the University of Illinois used MTurk to acquire human-generated descriptions for over 8000 images in less than twelve days and at a cost of less than $1000.
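
As a rough illustration of how such a microtask might be posted programmatically, here is a hedged sketch using Amazon’s MTurk API via the boto3 Python library. The reward, time limits and question form are placeholders of my own, not the parameters used in the Illinois study.

# Sketch of posting an image-description microtask (a "HIT") to Mechanical Turk.
# All values below (reward, durations, the question form file) are illustrative
# placeholders, not the settings used by any actual research project.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

with open("describe_image_question.xml") as f:
    question_xml = f.read()  # a question form prepared separately, showing one photo

hit = mturk.create_hit(
    Title="Describe the contents of a photograph",
    Description="Write one literal sentence about what is pictured, nothing more.",
    Keywords="image, description, photograph",
    Reward="0.10",                     # payment per assignment, in US dollars
    MaxAssignments=3,                  # gather several independent descriptions
    LifetimeInSeconds=7 * 24 * 3600,   # how long the task remains available
    AssignmentDurationInSeconds=600,   # time allotted to each worker
    Question=question_xml,
)
print("Posted HIT:", hit["HIT"]["HITId"])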

Why labeling actions is perilous

Crowdwork removes one obstacle in the way of training image recognition systems. However, it leaves in place some hazards, specifically related to ways that bias or error may creep into the data models. Furthermore, the state-of-the-art image recognition systems which are designed to identify actions and relations pose a greater risk than the earlier systems which just label scenes and objects.

We can see the problem by attending to several philosophical points regarding what is distinctive about actions: First, what action a person has performed is not determined entirely by the person’s overt behavior. Correctly describing an action requires knowledge of the intention with which the action was performed, and intentions are not directly observable. Furthermore, a particular action may have several different mutually correct descriptions, corresponding to different levels of description of the agent’s intentions. For example, of a man slicing a tomato, in addition to saying “he is slicing a tomato,” it may also be correct to say “he is preparing ingredients for a meal” or “he is testing the sharpness of his knife” (or maybe both). Judging which among the many possible descriptions are correct requires knowledge of the agent’s intentions.

Some of an agent’s intentions are easier to ascertain than others. The conclusion that a man intends to be slicing a tomato may be much more readily inferred from a photo than the conclusion that he intends to be preparing a meal. Responsibly imputing intentions on the basis of a photo requires avoiding inferential overreach. Although there may be no sharp boundary between judging responsibly and going too far, it is clear that there is such a thing as going too far. Indeed we have a derogatory term for a person who, on the basis of limited observation, makes judgments about others’ intentions: we call such a person presumptuous. Respecting a person usually means assuming that she is the ultimate authority regarding what her intentions were. That is, we generally take a person’s word about what she’s up to. The presumptuous person’s offense is the failure to grant this sort of respect to those she judges.

Just as we try to avoid presumptuousness in our personal interactions, we should guard against presumptuousness perpetrated by our AI systems. To prevent presumptuousness in our image recognition systems, we must make sure that presumptuousness is not trained into the system in the first place.

Importantly, some kinds of presumptuousness are worse than others. When the presumptuousness is not a perfectly general and indiscriminate tendency to infer too much, but rather manifests a particular pattern of overreach, it counts as bias. It’s not hard to imagine Turkers imputing different intentions to a person in a photograph depending on the apparent gender or race of the person. At the point where presumptuousness becomes bias, we have not only disrespect toward individuals, but the potential to disadvantage entire categories of people.

Of course, whether an instance of disrespect ever materially affects the person disrespected, or whether a systematic bias ever genuinely disadvantages a group of people, depends on where it occurs. Some applications of computer vision technology have much higher stakes than others. The higher the stakes the less we can tolerate error. For instance, we may rightly be more tolerant of error (or even bias) in the choice of which advertisements to show someone than in the identification of suspects in a criminal investigation. This brings us to the last major point I want to emphasize.

Responsibly matching AI systems with our purposes

It is an unfortunate but inevitable fact that we cannot always avail ourselves of precisely those tools which are most appropriate to our goals. Of necessity, we rely on the computer systems (and data) we have. This may, quite naturally and unintentionally, produce a mismatch between the quality of our AI system’s judgments (which are ultimately based on the quality of its training data) and the quality of judgment appropriate to the application in question. Such a mismatch may be especially pernicious if it is embedded deeply in a larger system—for instance, where the outputs of our image recognition system are inputs for further processes, without any human supervision. In such a case, undesirable presumptuousness or bias may never be discovered, but its effects may be felt nonetheless.

To be clear, my worry here is about how new image recognition technology may be applied, not about the research responsible for the innovations. The researchers tend to be very conscientious about the quality of the data they are collecting and using. In particular, the University of Illinois team mentioned earlier has documented the quality control measures they used as they assembled their training data. One such measure was a qualification test required of prospective Turkers before they could start labeling photos. The qualification test included questions to indicate whether a worker could identify “good descriptions.” According to the test instructions, a good description “should only talk about entities that appear in the image” and “should not make unfounded assumptions about what is occurring in the image.” The team also took pains to compare the effectiveness of this means of quality control with alternative measures (such as post-hoc quality control measures intended to find and delete low quality responses). Notably, much of this research is taking place in academic settings, and so most of the work—including the algorithms, the data, and the methods of data collection—is readily available for review and scrutiny.

Now, if I were in the market for an image recognition system, and I were looking at one that had been trained on the data set just discussed, I would be in a relatively good position to assess whether its standards were a match for my specific purposes. That is because I can examine the training data. This sort of situation is desirable; it’s what we should be seeking.

But as such systems move from the research phase to practical implementations—more specifically, out of the academic world and into the corporate world—it will likely become more difficult to subject the training regimen of the systems to such scrutiny. If so, this will be unfortunate and potentially harmful. And the problem will be compounded if the avoidance of presumptuousness and bias is not among the goals of the organizations adopting these systems.

For a final illustration, suppose a private security company is hiring a human consultant to monitor and report on the actions of a group of people—perhaps to figure out whether to classify these individuals as potential threats. In such a situation we should hope that (at the very least) the consultant’s resume will be examined and her references checked, to provide some assurance that she will handle this sensitive job responsibly. When adopting a computer system for a similar task, we should demand no less. Keeping AI within the bounds of accountability requires it.

 

Owen King

Owen King is the NEWEL Postdoctoral Researcher in Ethics, Well-Being, and Data Science in the Department of Philosophy at the University of Twente. His research is primarily focused on well-being, from both theoretical and practical perspectives.  He also investigates ethical issues raised by new computing and data technologies.

Pandora’s Box: The Ethics of Self-Guided Weaponry

 

As technology evolves, so too must our ethical frameworks. It’s a point that has been made often in recent years, but with each new development it is hammered home with greater force. A recent experiment performed by the U.S. government may be the impetus for a new trend involving artificial intelligence in weaponry. An Air Force B-1 bomber launched a missile in Southern California. Human pilots initially guided the missile; however, halfway to its destination, communication with the device was severed, leaving the computer systems to decide which of three ships to attack. In fact, this weapon, the Long Range Anti-Ship Missile prototype, is designed to operate without human control.

This experiment is a notable marker in the history of artificial intelligence. AI is beginning to make serious headway in many fields: medical diagnostics, stock trading, and even gaming. In a way, though, autonomous technology, particularly in weaponry, is a direct product of the Cold War. The anti-ship variant of the Tomahawk cruise missile, now out of commission, had the ability to hunt for Soviet vessels without human guidance. In December 1988, the Navy tested a self-guiding Harpoon anti-ship missile, with disastrous results. Launched from an F/A-18 Hornet fighter jet flying from the USS Constellation, the missile mistakenly targeted an Indian freighter that had wandered onto the test range. One crewmember was killed. Nonetheless, the Harpoon remains in use.

While armed drones are currently controlled by remote human pilots, arms makers are working to create weapons guided by artificially intelligent software. Rather than humans deciding what to target, and in certain cases whom to kill, this software will have the ability to make such decisions without human involvement. Israel, Britain and Norway already employ missiles and drones that can attack enemy radar, tanks and ships without human guidance. While the details of such advancements are kept secret, it’s clear that a new kind of arms race is taking place.

Of course, rather complex ethical dilemmas arise from this development. Critics argue that it will become increasingly difficult for humans to control artificially intelligent software. And by eliminating a degree of human oversight, do we become more likely to go to war? Concerns have been raised that the ease with which we opt for war will only increase once the protocol becomes as simple as activating a set of computers.

Are these cursory anxieties caused by the stigma associated with artificial intelligence, or are they legitimate concerns? It’s difficult to say. Of course, pop culture hasn’t done any favors for the public image of artificial intelligence. But if leaders in the field are any indication, there is something real to fear in the advancement of AI. Elon Musk, technology entrepreneur and AI investor, recently voiced concern: “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” And it would appear that government bodies are attempting to do just that. A special UN meeting in Geneva was recently held to address the issue, with a general consensus of nations calling for increased regulation and oversight in order to prevent violations of humanitarian and international law. Such regulations are already in place in the US; for instance, high-level authorization is required before the development of weapons with the ability to kill without the aid of humans. However, with the rapid increase of technological advancement, can our regulations keep up?

In his 2013 TED Talk, Oxford AI researcher Daniel Dewey notes, “Weaponization of AI is ongoing, and accidental harms can arise from unanticipated systemic effects or from faulty assumptions. But on the whole, these sorts of harms should be manageable. Today’s AI is not so different from today’s other technologies.” Ethically, there isn’t really a problem here, at least not yet. But a solid case can be made for the necessity of someone, some human, behind the wheel; that is, we need human soldiers making decisions in real time, not despite our emotional nature, but because of it. Machines currently cannot comprehend the consequences of their actions. They cannot feel remorse. They cannot, currently, distinguish right from wrong. Machines can only do what we tell them to. They can only distinguish targets.

It’s difficult to say at this point, though, whether this lack of emotion is a good or bad thing. After all, soldiers come home with serious psychological problems because of the violence they face in war. Machines cannot be traumatized. And from a utilitarian perspective, weapons aided by artificial intelligence may actually serve to decrease civilian casualties. For instance, Britain’s ‘fire and forget’ Brimstone missiles have the ability to distinguish tanks from cars and buses without the aid of humans. In addition, Brimstone missiles are able to communicate with one another, sharing information about targets and surroundings. But what happens when artificially intelligent computers begin to act in unexpected ways?

AI, in its current form, operates by systems and rules dictated by humans. So it follows that if our missiles hit the wrong targets, thus performing ‘morally reprehensible’ actions, the problem lies in the rules and systems designed by humans. On one hand, machines follow rules and codes based on logic inherent within their systems, whereas humans can have their judgment clouded by emotion. Where computers follow logic, people can be reckless and unruly. Computers follow commands. On the other hand, sometimes it is necessary to disobey commands. But then again, disobeying commands is something people also have trouble with in the face of authority. After all, it wasn’t an artificially intelligent machine behind the massacre at My Lai. It wasn’t self-guiding manmade machines that were responsible for the Holocaust. It was people.

The problem is that the study of AI is such a fledgling field that we don’t yet know how advanced forms of AI will react in any given scenario. That is not to say that we ought to halt the progression of such technologies; rather, we should proceed, as Musk suggests, with immense caution. Currently, there is no morally relevant difference between a missile’s software deciding which target to destroy and an individual making the decision. If you follow the logic of the computer’s choice, it was never really up to the computer to begin with. The responsibility belongs to the men and women designing and programming these weapons, and the key to all of this is how the computers are designed to make decisions. What if we could program varying degrees of oversight into the weapons so that general codes such as, say, the Geneva Conventions are inherent within the weapons themselves? That could save lives and prevent tragedies.

It’s clear by this point that autonomous weaponry is a nuanced topic with no clear outcome. We have a responsibility as a society to figure out where the line is drawn. An ethical arms race must match, tit for tat, each advancement in the autonomous weapons arms race. And while the path ahead is certainly fraught with the pitfalls and risks associated with the advancement of AI, we as a society now have an opportunity to make something better than ourselves. Indeed, the very fact that we are more invested in the weaponization of artificial intelligence than in, say, more benevolent undertakings indicates that there are serious, deep-seated problems with our values. But on the whole, technology has made the world better. And with a guided advancement of AI, we can essentially upgrade our own moral codes.

Of course, programming ethical frameworks into artificial intelligence technology is easier said than done. The issue becomes one of the source of morality, which, unfortunately, is an inherently intractable quandary. Morality is a construct; it does not exist outside of our understanding of it. Human morality is expressed in such myriad ways across different historical contexts and cultures that it’s impossible to discuss without bottlenecking the subject with cultural and historical bias. As such, it’s difficult to talk about in a clear and concise way, and even tougher to parse when it comes to programming. But while morality as a societal construct takes many forms, what can be said, if anything, is that it serves a regulatory function for humans. We can start from there.

But there may be an even deeper moral concern at play. Without directly experiencing the consequences of war, are we desensitizing ourselves to its innate horror? In turn, without experiencing that horror, are we sheltering ourselves from the underlying problems that cause war? Perhaps the real question is this: is war essentially a human undertaking? While there are abundant examples of human savagery, atrocities such as the Holocaust are outside the norm of human behavior, even in wartime. The very fact that one can point to these incidents and evoke disgust indicates that humans, as a general rule, view such actions as repugnant. Such events are not representative of society as a whole; they are the exception rather than the rule. The fundamental trick in all of this is making what is innate to us innate to our machines. If we can program distinct ethical frameworks into our technology, we’ll have passed on a higher standard to new generations than we ever could have hoped to see in our lifetime.

At the heart of this discussion is the distinction between humans and machines. But it turns out that all of the assumptions behind this dichotomy are unsound. After all, the human brain really is just an intensely complicated computer. In turn, artificially intelligent software is really just a manifestation of humanity in another form. Ethically, there is no real distinction. In the coming decades, we will make a choice about the things that we value most, and we will instill that into our software. That choice will directly influence how humanity evolves. So what do we value?

 

David Stockdale

David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch.  Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at dstock3@gmail.com, and his URL is http://davidstockdale.tumblr.com/.

Virtual Reality Journalism

 

It’s 1955. On CBS, a deep-voiced announcer backs a jittery reel of black-and-white stills.

“October 8, 1871,” he intones with high drama. “The Chicago Fire.”

And then the hook: “You. Are. There.”

 

The CBS News production then cuts to Walter Cronkite at the anchor desk, aside a microphone, script in hand. “Walter Cronkite reporting. October 8th, 1871. In this year and month, we are suffering in Chicago…”

“You Are There” was a CBS reenactment series through which anchor Cronkite would transport viewers to a historical event and treat it as though he were reporting it live. Breathless on-scene reporters, actors serving as famous sources and fake footage were all intended to put the audience in the scene of events like the Hindenburg disaster, the Revolutionary War or the Great Chicago Fire. Though easy to criticize today as staged and hokey docudrama, “You Are There” was nonetheless a novel effort during the heady experimental days of early television news.

We are in those heady days again, still seeking to carry our audiences to scenes to help them experience – and feel – the news of the day. But today, the experimentation comes in the form of a headset, a virtual reality approach that puts its wearer “in” the environment. And with that transport, come key ethical questions about representation, privacy, intellectual property and media effects.

The Technology

Virtual reality is not new. Beginning mainly in the 1960s with flight simulation, VR – also known as “augmented reality” or “immersive multimedia” – picked up speed with an MIT project mapping Aspen, Colorado, through video. The concept is simple: use a headset device to simulate a physical environment through sensory experiences, most commonly sight and sound.

Development in virtual reality today is driven largely by the gaming sector and consumers’ apparently insatiable appetite for those experiences. Industry leader Oculus, known primarily for its emerging Oculus Rift device, envisions a world of consumer-grade headsets that put a gamer into the experience of “Call of Duty” or “Grand Theft Auto.” That vision took a major step forward in 2014 with the arrival of the Samsung Gear VR, a virtual reality headset powered by a smartphone rather than the expensive desktop computers previously needed for processor-heavy simulations.

These headsets provide two kinds of virtual experience: animation or video. Both attempt to recreate reality and allow the user to walk virtually through scenes. With an Oculus Rift strapped on, a user can go into the animated environment of a Tuscan estate. The scene tracks with body movements, as users turn their heads or use keystrokes to ascend a staircase or peer over a stone wall. Through the Gear VR, they can go into a performance of Cirque du Soleil created through 360-degree video, watching various performers as if sitting on the stage itself.

But VR potential extends far beyond gaming and entertainment. In January, the United Nations debuted an emotional immersive experience following a Syrian refugee girl, “Clouds Over Sidra.” The piece is viewable on the Gear VR via its Milk VR delivery system, a.k.a., “the YouTube for Virtual Reality.” To appreciate the differences inherent in an immersive experience, watch a reporter explore a refugee camp through the Gear VR.

With that view, the potential for journalism is almost immediately apparent. Though its commercial prospects pale in comparison with gaming, immersive journalism is already in production. And in fact, one of the leaders in this space, documentary filmmaker Nonny de la Peña, has earned acclaim with Syria as her immersive focus, as well. Recently called “The Godmother of Virtual Reality,” she is working on all aspects of VR in journalism, including hardware development, reporting and virtual rendering.

 

As de la Peña explained to the BBC, immersive approaches advance journalism by putting the audience into a place and giving them a sensory connection with it.

“It creates a duality of presence. You know you’re ‘here,’ but you feel like you’re ‘there’ too. And the experience is much more visceral. It’s really a kind of a whole-body experience and is very unique – different than radio, than television, than any other kind of format for experiencing a story,” de la Peña says.

Dan Pacheco, professor and Horvitz Chair in Journalism Innovation at the S.I. Newhouse journalism school at Syracuse University, is experimenting with these technologies with students. He says that while the world is excited about what VR will do for gamers, he’s thrilled by the potential for transforming citizens’ knowledge of the world around them.

“We no longer have to be limited to telling stories. We can take you into an experience,” Pacheco says. “No matter how many times you tell a story, people don’t feel it. Once you start to show it, it’s a little better but still far, far away. Once you can move someone into an experience, that’s really the key.”

He has first-hand experience, having served as a consultant on Gannett’s first foray into virtual reality – a tour of an Iowa farm as part of its “Harvest of Change” series. The VR approach puts viewers into the farm environment, enabling them to navigate around barns and approach giant tractors. Visual cues open doors to more information on elements of the farm. The experience is best using the Oculus Rift headset, but others can get a sense of the experience using the Unity player on a web browser.

The place-based feeling of VR is particularly important for stories depending on a sense of space and perspective. The Reynolds Journalism Institute (RJI) at the University of Missouri experimented with an animated VR approach to cover eyewitness statements in the Michael Brown shooting in Ferguson, Missouri. A walk through their output, helmed by graphic journalist and RJI fellow Dan Archer, shows some of the possibilities of VR for journalism.

 

Each witness’ perspective is virtually apparent as the user experiences his or her statements. This gives the reporting an angle that’s not quite possible with text. Video would capture the sense of space, but cannot give the user the same control over movement. The package is rudimentary, as the technology is nascent, but provides a window into what will be possible in the future.

“It comes down to a much-abused term that’s being bandied about these days: empathy,” Archer says. “I first got into graphic journalism as a way of placing the reader at the heart of a news story by using art to visualize the accounts of my interviewees from their first-person perspective. That was several years ago, and the technology … has at last almost caught up to speed.”


Explore the Ferguson virtual reality project (requires download of Unity player).

Implications for Journalism Ethics

As with all emerging media platforms, VR presents opportunities, but also demands serious ethical consideration. In some cases, traditional ethics contested over decades help inform our judgments. But in others, the very immersion itself prompts questions we have not yet tackled in journalism.

How real is the virtual?

Ethical questions begin with the basics. When constructing an animated virtual reality, what steps can be taken to make it as real as possible? What are the dimensions of surrounding buildings? What are the colors and shapes of people in the scene? What’s the relative perspective between the user and the trees around her?

In the case of the Ferguson package, the VR rendering shows a blue sky with puffy white clouds. But video from the scene shows a more gray, dreary day. Does this matter for the story? Would it change the audience’s understanding? All of these questions must factor into animated recreations. But they’re also issues in 360-degree video. One would imagine it to be less fraught with potential for distortion, yet video that’s captured in 360 degrees still has to be edited in two dimensions. This can interfere with rendering reality as it was caught on the original video.

It’s important to recognize, however, that virtual reality does not introduce these concerns in significantly new ways. De la Peña faced criticism early in her work from those who claimed VR journalism was too subjective and thus could not be ethical. Yet when operating in text, still, video, audio or interactivity, we’re continually making judgment calls about what to cover, what to render and how to do it. VR certainly poses issues of subjectivity, but they are extensions of critical questions we need to be asking ourselves in all platforms.

Archer, in fact, notes that VR as a form holds promise for helping users recognize subjectivity because the choices are so apparent in a graphically rendered environment.

“All we can do is be open and honest with readers, and highlight what we chose to include and exclude,” Archer says. “My hope is that using this process we can lean more heavily on readers to explore what the notion of ‘truth’ is using this new virtual frame of reference.”

Whose reality is it?

Just as we must with any text, video, audio or interactive story, we must wrestle with the sourcing in virtual reality packages. The Ferguson piece lays this bare. Source perspective, motivations and biases all play roles in the creation of the virtual environment. Certainly the number of feet between a window and a road can be measured, scaled and recreated in VR. But where a person says she stood and what she says she saw are less certain. Yet when they are rendered in a virtual environment, they are necessarily made more real for the audience.

Who owns a reality?

The “Harvest of Change” series, the virtual tour of the Iowa farm, quickly raised questions of intellectual property and trademark. Does recreating an exact animation of a trademarked tractor design infringe on that design? Does it do so with iconic public buildings? With emerging technologies, we often find law lagging behind what’s possible. Ethics must fill the breach, as we weigh others’ rights to their creations and the implications of our own recreations.

How much does this cost?

The ethics of economics matter in an age of news media disruption. The “Harvest of Change” package – while fascinating and exciting – came at a time when Gannett laid off dozens of employees at its flagship paper. Although Pacheco says the cost of the VR package was not exorbitant (he’s barred from disclosing exact figures), expenditures on experimentation always come at the expense of other elements of news gathering. This context, however, demonstrates that funding experimental platforms and approaches may be one of the most justifiable expenditures of strained resources – within reason, of course. Virtual reality is expected to capture an audience through gaming that has been particularly elusive for news media: teens and young adults. These virtual platforms may be an ideal way to stimulate their interaction with news, serving them as citizens. But this requires a focused, thoughtful strategy, rather than merely chasing the latest toy. It also requires that we consider for whom we are developing these technology uses and whether we are leaving important audience segments out. Early speculation was that the cost of an Oculus Rift would make it a rich kid’s toy, at best. Yet the Gear VR is surprisingly affordable, and because it runs off a mobile phone, its potential is open to a more diverse set of users who would otherwise lack consistent access to a desktop machine.

Whose expectations matter?

Privacy is clearly one of the largest ethical considerations for journalists with immersives, especially 360-degree video. As with drones capable of low-cost capture of video, still images and sound from the air above both public and private property, use of video and even animations for virtual reality poses the risk of invading privacy. Law is sorely behind technological development in this arena, so ethics are more crucial than ever.

“We’re not that far from drones flying all over the place and capturing everything,” Pacheco says. “Pandora’s Box opened back in the ‘70s and we’re not going to be able to close that. It’s going to be interesting to see how people use that for good and how they use it in morally and ethically questionable ways.”

Privacy, especially in law, is largely premised on the protection of personal space, and we punish intrusions into that space. We ask where a person has a “reasonable expectation of privacy” and generally conclude that such expectations are far stronger in personal spaces than in public ones. A woman has a reasonable expectation of privacy in her kitchen, but not in, say, a Starbucks in downtown Chicago.

But privacy is not merely a question of law. As media ethicist Cliff Christians notes, privacy is also a fundamental moral good. While privacy is essential for individual flourishing, protecting such human development is ultimately a common good.

“A private domain gives people their own identity and unique self-consciousness within the human species … Privacy as a moral good is nonnegotiable because controlling our life's core is essential to our personhood,” Christians writes.

In an age when technology enables a transformation from simple observation to sophisticated surveillance, journalism must wrestle with the implications of this possibility. Virtual reality that relies on video capture, for instance, poses the problem of incidental capture. Imagine an immersive experience designed to transport users to a Liberian hospital treating patients with Ebola. Although currently limited in scope, technology will quickly be able to transmit live 360-degree video from such a hospital. Even if the clinicians or patients in focus consent to their story being used, the camera will pick up the full scope of the scene and enable users to move themselves in for closer looks. We must consider the privacy of the people within that scene.

And while VR would commonly be assumed to be more easily justified in a public setting – say, a street – the sophistication of the capture will also include spaces normally deemed private – say, a person’s living room windows. As we have struggled to conceptualize and deal with the privacy implications of emerging technologies like Google Street View, we will have to contend with the invasiveness of virtual reality. But the stakes increase with its use in journalism, specifically because news is so often about capturing people’s most difficult moments.

When is the virtual too real?

While some evidence is emerging that virtual reality may be useful as a treatment for post-traumatic stress disorder, Pacheco and others worry about the effects of putting people in stressful situations through VR. The concern is that renderings that are sufficiently real may trigger memory as though the user actually experienced a place or event. No one could mistake Walter Cronkite’s staged Chicago Fire coverage as real. But consider a virtual reality headset with video images of fire, plus the sound of crackling and gusting, plus the thick smell of smoke, plus the sense of growing warmth. These sensations have far more potential to induce trauma.

VR coverage of war, torture, rape and other violence will prompt searing questions about lasting consequences of consuming journalism that eclipse our current research on media effects. All of these considerations must factor into uses of virtual reality for journalism, keeping subjects and audiences more firmly in mind than the mere possibilities the technology affords, Pacheco says.

“The most important thing that we need to keep in mind with immersive and experiential media is that because people feel like they’re somewhere else, you always need to keep the experience of the user as the most important ethical consideration.”

 

Kathleen Bartzen Culver

Kathleen Bartzen Culver is an assistant professor in the University of Wisconsin-Madison School of Journalism & Mass Communication and associate director of the Center for Journalism Ethics. Long interested in the implications of digital media on journalism and public interest communication, Culver focuses on the ethical dimensions of social tools, technological advances and networked information. She combines these interests with a background in law and the effects of boundary-free communication on free expression. She also serves as visiting faculty for the Poynter Institute for Media Studies and education curator for PBS MediaShift.

Google Glass: Flawed Technology or Flawed Ethics?

 

On January 15, Google announced that it was discontinuing sales of Google Glass. The wearable computer on an eyeglass frame, hailed and harangued by enthusiasts and critics alike during its short existence, is on hold indefinitely.

On a timeline, Google Glass doesn’t fill much space: In 2010, it became the first of many projects housed in Google X Lab, a secret experimental space for researchers to develop ideas. In April 2012, the company created a Google+ account for Glass and posted a conceptual video, effectively introducing the product to the world. That June, Google showed it off at a conference in San Francisco during a demonstration that included skydivers and stunt bikers wearing the device.

Afterward, Google allowed conference attendees to preorder the Glass “Explorer” edition for $1,500. About 2,000 people did, though they didn’t start to receive the toy until April 2013. In May of that year, 8,000 members of the public who won the opportunity to buy Glass through Google’s #ifihadglass contest also received it. A year later, in May 2014, Google opened the Explorer Program up to everyone who could afford the price tag.

In the end, Glass was available to anyone who wanted it for only eight months. But during its brief lifespan, the product permeated society in remarkable ways. It was simultaneously an intriguing vision of the future, a derided symbol of elitism, and a controversial topic of ethical debate.

In a blog post, Google wrote that the Glass Explorer Program closed so that the company “can focus on what’s coming next” and promised “you’ll start to see future versions of Glass when they’re ready.”

The phrase “when they’re ready” speaks the loudest when considering the downfall of the first iteration of Glass: Was it ready? Or was the product released too soon?

There are multiple angles by which to approach this question. First, consider the product’s design flaws. Glass Explorer was far from a perfect device when it was sold to those conference attendees, and it remained so a year later when the public got its hands on it. Limited app support, unreliable connectivity, and a short battery life were common complaints from users. Some thought the frame was uncomfortable; others thought it was awkward to wear the technology in public.

With Glass, early users could take photos and videos, send texts, and participate in Google Hangouts using a dashboard appearing in front of the eyes and voice commands. They could make calls, get directions, and see personal reminders, the time, weather, and news headlines.

Joanna Stern, a reviewer for ABC News, wrote in June 2013 that while Glass features “modify the smartphone app experience, they don't yet offer something unique.” She continued, “I want to do the things that Google showed in the original demo video. I want to be able to look at the subway station and know if there is a 2 or 10 minute wait for the next train so I can decide I should take a cab instead. I want to look at the pasta I am about to inhale and know more about the ingredients or caloric information.”

Yet, the same reviewer also said she would stick with Glass. She recognized that the platform was new and still being tested.

Joshua Topolsky, writing for the Verge, had a similar takeaway: “Is it ready for everyone right now? Not really. Does the Glass team still have huge distance to cover in making the experience work just the way it should every time you use it? Definitely. But I walked away convinced that this wasn’t just one of Google’s weird flights of fancy.”

Users seemed to understand that the product they bought was a prototype, an experiment being done in real time (hence the “Explorer” name for the program). But is it fair for a company to expect consumers to do the dirty work for it? To test out kinks that could have been resolved back in the laboratory? And to pay for that experience?

Say a shopper bought a pair of regular glasses that turned out to be not fully functional. Perhaps in certain light, images appeared blurry. Or the frame didn’t sit level over the brow. The shopper would return the glasses, expecting a full refund for a product that didn’t do its job. But that expectation doesn’t hold with new technology, and Glass is no exception, partly because users didn’t really know what to expect of it. And they didn’t assume it would be perfect.

Consumers expect technology – from laptops to smartphones – to improve over time. They know that early versions will be costly and inferior to later iterations. But they’re willing to pay up and withstand shortcomings so that they can be pioneers. To be fair, Google issued monthly software updates to Glass users, so they did have access to the product’s latest improvements.

That said, maybe Google’s biggest mistake wasn’t introducing an imperfect product. Maybe the company erred most in the way it marketed Glass.

On February 4, New York Times technology columnist Nick Bilton published an article that put a narrative behind the demise of Glass. The story described how most Google X engineers believed that the Glass project was not ready for the public. But one of the company’s founders disagreed, and the Explorer program was born. “The strategy backfired. The exclusivity added to the intense interest, with media outlets clamoring for their own piece of the story. As public excitement detonated, Google not only fanned the flames, but doused them with jet fuel,” Bilton wrote.

Now, the company has to backtrack.

It’s important to note that Glass is not the first product that Google has halted. RSS feed aggregator Google Reader closed down in 2013 after eight years, as its popularity with users declined. Collaborative editing platform Google Wave fizzled less than a year after it was released to consumers. These examples are not anomalies. Failure is part of the high-risk, high-reward world of business. It’s expected.

But failure seems different with Glass. The product was presented to consumers with so much fanfare and promise. Think about the unveiling process one more time: Google spent two years generating buzz for a product that wasn’t perfected, the ethical considerations not thought through. People had to wait months to find out if they had won a contest for the privilege of forking over $1,500 to purchase Glass. Is a company ethically obligated to follow all that allure up with a high quality product, one that can actually do what its marketing messages suggest it can?

It’s also worth noting that Google’s marketing campaign passed over the privacy concerns implicit in Glass capabilities. Addressing the product’s ethical concerns in advance of its launch, rather than dealing with problems as they arose, may have improved Glass’s reception. For instance, many reviewers called for a sort of rulebook, a set of etiquette guidelines, to accompany the technology.

CNET senior editor Scott Stein explained how it felt wearing Glass on a train in New Jersey. “People stared, but cautiously. I didn't want to look at them. I didn't want to make them feel uncomfortable. But there's no way for a camera conspicuously hovering on your glasses to not generate some level of social discomfort, no matter how elegantly designed.”

The device’s camera and video capabilities are so subtle that privacy concerns are inevitable. While it’s possible to take photos and record audio or video on the sly with a smartphone, it’s even easier with Glass, so much so that many businesses, from bars to hospitals, banned the product before it was even released.

Perhaps when – if – Google releases a new version of Glass, there will be evidence that the company has learned from its previous ethical mistakes. They could demonstrate this with a more refined product, a clearer marketing campaign, and a privacy plan that’s already been worked out with legal experts and lawmakers.

Where would Glass be if those strategies had been thought through the first time around? Even if Google was not able to produce a stronger device, at least it would have an audience that was ready for Glass, that understood the product’s purpose and felt comfortable using it, rather than perceiving the technology as gimmicky, unnecessary, and ethically concerning.

What’s next for Glass? Right now, all the public knows is that the product won’t be developed by Google X anymore. It will be redesigned from scratch, with different leadership and a renewed effort to keep experimentation on the inside. Google’s competitors, like Microsoft and its HoloLens, should take note.

 

Good Reasons for Hiring a Hacker

 

Recent high-profile cases of hacking include the breach of Target’s customer databases, the acquisition and dissemination of nude celebrity photos, and the publication of confidential information from Sony Pictures. If these are the paradigm cases that inform our conception of hacking and hackers, then the very idea of hackers for hire may be frightening. So, it’s not surprising that a website designed to help ordinary people hire hackers would get some attention.

Hacker’s List is a website modeled after the likes of oDesk and Elance, websites that match freelance workers with clients who need their help. The business model for all these sites is the same: A client posts an ad describing the sort of service desired. The freelancers or hackers apply for the job. Then the client chooses one among the bids. Hacker’s List differs from the other sites primarily in the type of service sought by clients. Instead of looking for writers, graphic designers, translators, data analysts, etc., people posting on Hacker’s List want people who can access, bypass, disable, or alter computer systems or the data they contain.

The buzz around Hacker’s List began with an article in the New York Times, which declared, “The business of hacking is no longer just the domain of intelligence agencies, international criminal gangs, shadowy political operatives and disgruntled ‘hacktivists’ taking aim at big targets. Rather, it is an increasingly personal enterprise.” I think the comparison here is potentially misleading. Likening small-scale hacking to familiar, high-profile cases obscures some of the distinctive and interesting reasons some people may have for hiring hackers. I think that, as digital technology becomes ubiquitous, our evolving relationship with technology gives rise to a distinctive set of reasons that favor hacking and hiring hackers.

Disruptive vs. constructive hacking

As a preliminary point, it is worth noting that hacking is not necessarily destructive or even disruptive. Preoccupation with the most high-profile cases—Target, nude photos, Sony, etc.—can cause us to forget this.

Historically, a hacker is a technologist who has a particular sort of adventuresome and enthusiastic orientation to the creation and improvement of technology, especially (nowadays) computer systems. The term ‘hack’ gained currency at MIT in the 1950s and 1960s among members of the Tech Model Railroad Club. Describing this group’s terminology in his book Hackers, Steven Levy explains that a hack was “a project undertaken or a product built not solely to fulfill some constructive goal, but with some wild pleasure taken in mere involvement.” Along the same lines, legendary hacker Richard Stallman says that a hacker is “someone who enjoys playful cleverness, especially in programming, but other media are also possible.”

Hacking in this traditional sense is constructive rather than disruptive. As things stand right now, jobs for constructive hackers do not seem to be the standard fare of Hacker’s List. However, we should not overlook the potential need for constructive hacking. Perhaps you want to build and install an unusual kind of intercom system for your home. Or maybe you need some custom software to automate a part of your eccentric record-keeping system. For these sorts of projects, it would be totally reasonable (and, of course, ethically unproblematic) to try hiring a hacker.

I expect that nothing I’ve said so far will be particularly controversial. Constructive hacking was never the worrisome sort of hacking. The worry comes with disruptive hacking. Indeed there are many instances of disruptive hacking—both at the small, personal scale and at the large, organizational scale—that are highly unethical. What I am keen to show, though, is that not all instances of disruptive hacking ought to be evaluated the same way.

Circumstantially motivated hacking

Focusing now just on disruptive hacking, we can draw a distinction between hacking that is motivated by some functionality of the to-be-hacked technology itself and hacking that is motivated by circumstances external to the technology. I think it is the latter that most people have in mind when they think of disruptive hacking. For example, the hackers responsible for the breach of Target’s customer database were motivated by monetary profit. It is doubtful that anything besides opportunism led the hackers to specifically zoom in on Target’s technological infrastructure. If there had been a copy of the data in a pile of disks sitting on a park bench, presumably the would-be hackers would have preferred to swing by and snatch the disks instead of perpetrating an elaborate hack. Thus, we can think of the hack as circumstantially motivated, not motivated by the to-be-hacked technology itself.

Assessments of circumstantially motivated hacking can typically proceed relatively straightforwardly according to usual methods of moral evaluation. This is the case whether we are talking about the large-scale, high-profile cases of hacking, or the smaller, more personally motivated cases we are likely to find advertised on Hacker’s List. It is natural to begin assessment by looking at the effect of the hacking on the organizations and individuals involved. In many cases, the disruptive effect will involve some sort of harm, and harm is usually morally undesirable. And, so, on the assumption that the hacking itself has no intrinsic value and no morally beneficial effects, the hacking will be morally impermissible. No big surprise here. Exceptions to this general pattern will be cases in which the disruptive effect of hacking is somehow desirable. For example, we might have a situation in which invading someone’s privacy would allow us to prevent some greater harm from befalling her. Or, in a very different sort of situation, it is conceivable that some harmful effects may themselves be valuable. Arguably, harms that constitute retribution for past wrongs may have positive moral value in this way. When the goals of hacking are positive in one of these ways, we can reach a reasonable ethical evaluation by asking whether those positive ends justify the means (i.e., any additional costs or drawbacks of the hacking itself).

Technologically motivated hacking

Sometimes the goal of a hacker or the hacker’s client is the disruption of the technology itself. We can say that this hacking is technologically motivated. Here are a few examples.

Suppose your native language is one that is not well supported on some device you own. And suppose your device restricts what software you can install on it. And, finally, suppose it is possible for a hacker to unencumber your device so that you can add the desired functionality. This was the situation a few years ago with iPhones and Chinese language input. Since Apple controls the distribution of software for the iOS platform, iPhone users have to hack (“jailbreak”) their phones in order to use certain software. Hacking of this nature is very common, but note that it is not hacking aimed directly at Apple or, for that matter, at any goal other than the alteration of the user’s own electronic device. So it counts as technologically motivated hacking.

We find a somewhat similar situation in the relationship between drivers and the computer systems in their automobiles. Practically everything in new cars—from the ignition to the brakes to the entertainment system—is computer controlled. One part of the computer system in modern cars is the event data recorder (EDR). An EDR collects a variety of data about the operation of a car. Much of the data collected—things like speed, acceleration, steering wheel angle, braking, whether the driver was wearing a seatbelt, etc.—is particularly useful for investigating the causes of accidents. But these data are just the tip of the iceberg. EDRs can keep track of any information from a car’s computers. For instance, the EDR in newer BMWs monitors the vehicle’s mechanical status and contacts the local dealership when the computer has determined, for instance, that you need an oil change. Now suppose you wanted to find out exactly what data your car was collecting about you. Or suppose you wanted your car not to collect so much data about you. It may make sense for you to hire a hacker for help.
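
For a sense of what the first, legitimate step of such an investigation might look like, here is a minimal sketch using the python-obd library to read a few of the standardized values a car’s computers expose over the OBD-II diagnostic port. This reaches only the standardized diagnostic parameters, not the EDR’s own records, and the particular commands shown are illustrative.

# Sketch: query a few standardized parameters over the car's OBD-II port.
# This is only the public diagnostic interface, not the EDR itself, and the
# particular commands are illustrative.
import obd

connection = obd.OBD()  # auto-detects the first available OBD-II adapter

for cmd in (obd.commands.SPEED, obd.commands.RPM, obd.commands.THROTTLE_POS):
    response = connection.query(cmd)
    if not response.is_null():
        print(cmd.name, "=", response.value)

Going beyond what this standard port reports, to the data the manufacturer keeps for itself, is precisely where a hired hacker would come in.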

It is not just for purposes of manipulating our personal electronic devices that a person might want to hire a hacker. Suppose you have an old e-mail account that contains thousands of important messages. And suppose you wish to discontinue using this account and cut ties with the organization that provides it. If the interface you’re given does not allow you to export the messages in bulk, then downloading and filing all that old e-mail is likely to be quite a burdensome task. However, you might be able to hire a hacker to find a way to bypass the usual interface and download all the e-mail messages at once. And this might be the most reasonable course of action for you.
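
Where the account does expose a standard protocol such as IMAP, the bulk export might not even require a hacker; the following is a minimal sketch using Python’s standard imaplib and email modules. The host, credentials and output folder are placeholders, and it assumes the provider allows IMAP access at all, which is exactly the sort of barrier one might hire a hacker to get around.

# Sketch: bulk-download every message in an inbox over IMAP and save each one
# as a raw .eml file. Host, credentials, and folder names are placeholders.
import email
import imaplib
import pathlib

HOST, USER, PASSWORD = "imap.example.com", "you@example.com", "app-password"
outdir = pathlib.Path("mail_archive")
outdir.mkdir(exist_ok=True)

with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX", readonly=True)
    _, data = imap.search(None, "ALL")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        raw = msg_data[0][1]
        subject = email.message_from_bytes(raw)["Subject"]
        (outdir / f"{num.decode()}.eml").write_bytes(raw)
        print("Saved:", subject)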

How do these cases of technologically motivated hacking differ from the sort of circumstantially motivated hacking discussed earlier? The clearest difference is, of course, the motive. The intended outcome of technologically motivated hacking does not essentially involve some sort of disruptive effect on persons or organizations, but rather just on the technology itself. Of course, computer systems do not pop into existence unbidden; there is always some party that designed, operates, or owns the system in question. But it’s not clear that this is always relevant from the moral perspective. That’s simply because the effect on that party, even if the hacking is successful, may be negligible.

Adding a better input method for your phone, finding out what your car knows about you, exporting e-mails from an account before closing it—the effects of these things on the technology providers have a level of significance that rounds down to zero. But, to the users, these things may matter a lot. And as digital technology becomes ubiquitous, playing a more and more pervasive role in our lives, we can expect to find ourselves in many more situations like these.

The issues here are complicated. I am not claiming that all, or even most, cases of technologically motivated hacking are morally justifiable (though I am tempted to think that many are). Instead, I simply want to guard against a certain sort of systematic mistake we could be making in our evaluations of these cases. If we fail to distinguish technologically motivated hacking from hacking that is circumstantially motivated, we run the risk of assuming that the entity that owns or operates the relevant computer system is more important than it is. In many cases of hired hacking, the decisive factor in the ethical evaluation is not the relationship between the user (who hired the hacker) and the technology provider, but rather the relationship between the user and the technology itself. In many cases, this latter relationship may constitute sufficient grounds to justify hiring a hacker.

 

Owen King

Owen King is the NEWEL Postdoctoral Researcher in Ethics, Well-Being, and Data Science in the Department of Philosophy at the University of Twente. His research is primarily focused on well-being, from both theoretical and practical perspectives.  He also investigates ethical issues raised by new computing and data technologies.

The Continuing Saga of the Self-Destructive Tweet

 

Twitter is ubiquitous among reporters, editors and public-relations managers these days, with most expected to use the social media tool to spread the word about their work and their employer’s brand.

But for a medium that’s been around since 2006, many journalists and those in related professions still act surprised when one of their tweets goes horribly, grotesquely wrong. Tweets that are borderline sexist, racist or offensive in a variety of ways are still going viral with alarming regularity. It makes one ask: Why are people who claim to be professional writers and communicators still misusing Twitter and what, if anything, can be done about it?

In fact, one might argue that even an essay on the digital ethics of tweeting is rather 2006. But if you Google “media Twitter mistakes,” you’ll find multiple news stories about reporters and PR gurus losing their jobs and clients after they abused the 140 characters Twitter allotted them. That’s a lot of poor judgment in such a small space.

A February New York Times Magazine article about the psychological effects of a poorly worded tweet should serve as a reminder to everyone who works with words for a living – words in this modern era have the power of sticks and stones. And they will hurt you.

In Jon Ronson’s piece, “How One Stupid Tweet Blew Up Justine Sacco’s Life,” he takes a hard look at how “Twitter shaming” inflicts long-term damage on those who author these tweets, whether through documentable syndromes such as post-traumatic stress disorder or just plain old shame.

Ronson, a self-professed Twitter shamer, goes out and finds people whose lives have been negatively impacted by their social media posts. One was Justine Sacco, a public-relations manager whose December 2013 tweet “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” cost her her job. It also turned her into a media hermit. There also are worries that she and those like her may never know privacy again, given the public nature of their mistakes.

Chances are if you ask reporters if they’ve posted something that could be called questionable or downright stupid on social media – Twitter, specifically – you’ll get a hair-raising tale of the moment they wrote something they regretted.

The story probably would go something like this: There was a news story they were either working on or that had caught their eye. Their wicked sense of humor kicked in, and they thought of a comment that made them snicker internally. So, 140 characters later, they decided to share their cleverness with the world.

The reaction was likely swift. Apologies followed. Job security was questioned. Who knew that such a seemingly innocuous thing as riffing off of a news story could offend, upset, irritate and downright annoy so many people in such a short amount of time?

Newsroom and social media policies reflect the sentiment that employers and organizations of all sizes are aware of the risks of irresponsible tweets. These policies tend to read the same: Think about your audience. Ask yourself whether you’d want your boss, your co-workers, even your Mom to read that tweet. And, if you’re in doubt, come ask.

Because editors rarely review what gets tweeted, there is little to no oversight of what a reporter writes there. And with most newsrooms squeezed for budget and staffing, few would have the personnel necessary to do that. Having someone monitor every tweet would simply not make sense in a lot of instances. So an essential part of the news-writing process has been lost; there is no gatekeeper between the reporter and his or her audience.

It’s only after the tweet has gone live and people start to comment that things get real. The reaction can happen immediately or it might take a day or two. But once someone has captured a screenshot of the offending tweet, your goose – for all intents and purposes – is well and truly cooked.

I know that I cringe when I see one of my fellow journalists getting piled on after tweeting something inappropriate. That happened a year ago to a Detroit Free Press reporter who tweeted about a chemical spill that left thousands of West Virginians trying to make do without tap water. She wrote: “West Virginia has its tainted water problem under ctrl. Now, it can work on incest.”

Yikes. Probably should have told that one to your significant other and no one else. Incest jokes tend to have a limited audience in general. And she was tweeting as a representative of her newspaper. To have dozens, if not hundreds, of people firing responses at you on Twitter in a flash, all of them hating you – it can’t feel good.

This reporter did not lose her job, and a quick look at her Twitter feed a year later shows that she still has a fine sense of humor about her. Viewed through the lens of “What did she learn?”, her tweets in the months that followed the January 2014 incident definitely have a milder feel to them. I’m guessing she came to understand that Twitter, although fun and frivolous in some instances, was more powerful than she ever imagined.

So what’s the takeaway from Twitter and these moments? It’s that social media platforms are powerful beyond what their creators or their users initially imagined. They can change people’s lives – and that’s no joke. Granted, if you lose your job because you’ve tweeted something so offensive that it upsets legions of readers, you know that already. But it bears repeating: Social media use isn’t just about sharing kitten pictures or criticizing Oscar fashion any more.

Edward Cardenas is a communicator who has worked on both sides of the notebook throughout Metro Detroit. Cardenas was a reporter for The Detroit News when I met him; he left for a stint as communications director for Congresswoman Candice Miller and press secretary for Detroit Mayor Dave Bing.

He returned to the writing side of journalism to help launch the hyper-local Patch network in Michigan. Cardenas then went high tech when he joined WWJ Newsradio 950 and CBS Detroit in 2014, where he covers technology, development and new-economy stories. In my opinion, he’s pretty much a Twitter expert, using his posts to boost both his stories and the people in them.

“I have found that the best way to utilize Twitter and to build my reputation is to share newsworthy stories, photos and news updates on the social networking site that would be of interest to my followers,” Cardenas said. “Because I am a reporter, I rarely interject my personal opinion of links I share and leave it up to my followers to develop their own opinions about a topic.”

I can understand the desire to keep your own opinions and personality out of your Twitter feed. I remember my own Twitter meltdown with a sense of horror. I was sending out tweets for one of my freelance clients during the “Car Prom,” or charity preview for the North American International Auto Show.

While I was retweeting pictures of lovely gowns and car models, a random stranger decided to engage me in a Twitter chat. It seemed innocent enough at first, and I enjoyed the banter as I worked. It quickly turned problematic when he started adding references to pornography and drawing adult-film stars into our conversation. I backed out quickly, realizing that my employer would be mortified to have the company’s name associated with any adult-film star, let alone Jenna Jameson or the like.

No one ever brought up those tweets, and I consider myself lucky for avoiding any spotlight either within my office or across the Internet. I know of another friend whose son accidentally tweeted his “Angry Birds” score on his employer’s Twitter feed – and the tweet ended up making Jim Romenesko’s well-read media blog. He recalled the story to me with a small smile on his lips, but we both agreed that is not the way you want to end up in any blog post.

Perhaps some reporters feel hampered by strict Twitter policies. Maybe they think that adding, “tweets are my own opinion” to their Twitter descriptions will cover their bases in case something blows up when they don’t expect it. The bottom line is if you call yourself a professional writer, you must take responsibility for every word that you write. There are no exceptions. Twitter, like everything else, will follow you to your virtual grave. And don’t write anything you wouldn’t want your mother, your editor or your potential audience to read.

 

Karen Dybis

Karen Dybis is a Detroit-based freelance writer who has blogged for Time magazine, worked the business desk for The Detroit News and jumped on breaking stories for publications including City’s Best, Corp! magazine and Agence France-Presse newswire.

It’s an Ad, It’s News… It’s Native Advertising

 

During the week of November 20th, 2014, The New York Times ran a story called “Cities Energized: The Urban Transition.” From its title, one might assume the piece concerns a notable shift in how cities consume and/or generate energy. After all, it’s a vital issue—the kind that a paper such as The New York Times might cover. President Obama himself has been calling for a decided shift to renewable resources throughout his presidency, though in his 2014 State of the Union speech he celebrated green energy initiatives while praising the oil boom of the preceding year in nearly the same breath. It’s a divisive topic. But one would hope, by virtue of the story’s placement in the paper, that the author strove for objective journalism. Furthermore, one would hope that the editorial staff aimed for relevance and import in selecting the piece. Lastly, one would expect the piece to offer a critical perspective on such an issue. As it turns out, one would be categorically wrong about all of these things.

Despite all traditional indications to the contrary, The New York Times was paid by Shell to place “Cities Energized” in its paper. The piece is, definitively, a piece of “native advertising.” Though Shell is only mentioned a handful of times throughout, the story’s placement is all part of a focused effort to depict the company as a leader in energy. Notably, the Shell ad marked the first time in the history of The New York Times that a piece of native advertising had been featured in print. While the paper has increasingly experimented with native ads in its online format, this recent move seems to indicate a strategic shift, both for the paper and perhaps for the advertising industry as a whole. Why now? Meredith Levien, EVP of advertising at the Times, explained that while advertisers had expressed interest in offering native print ads in the past, the paper had previously determined that such ads weren’t a good fit.

Defining the practice has proven tricky. Solve Media made a valiant effort: “Native advertising refers to a specific mode of monetization that aims to augment user experience by providing value through relevant content delivered in-stream”. But this is a definition based on rhetorical posturing and advertising jargon. In a Huffington Post op-ed, Fahad Khan offers a broader, yet considerably more tempered definition: “Native ads are ads in a format that is native to the platform on which they are run, bought or sold. Native advertising is the activity of producing, buying and selling native ads.”

The question is ultimately contextual. By definition, native advertising appears alongside editorial content in a clandestine fashion, mimicking the form and function of the platform on which it runs; the content reads as “native” to that platform, hence the term. But editorial content, it is not. An important ethical distinction between the two involves the exchange of money—more precisely, which way said money is directed. Editorial content is generally an expenditure of the entity that produces it. Creating content costs time, money, manpower and other resources. Conversely, advertorial content is another stream of revenue for said entity. Another party supplies the work—a party with the financial means to buy editorial space in a paper such as The New York Times.

The Shell piece received quite a bit of coverage and praise in ad industry publications, where the practice as a whole has generally been heralded by industry insiders as the latest and greatest innovation in marketing communications. But despite the hype, it’s unclear whether native advertising delivers a worthwhile return on investment, and the practice hasn’t fully taken off yet. Primarily, the analytics needed to measure its success haven’t been fully developed. Strategists also haven’t been able to hone audience targeting enough to make native placements truly worth it over other forms of digital advertising. But there are some major signs that mark the practice’s rise.

Take Forbes’s Brand Voice, for example. Brand Voice is a platform for company-sponsored long-form content. Last year, the platform accounted for 30 percent of Forbes’s total ad revenue. And as the hype of native advertising wears off, efforts to measure its effectiveness will become more reliable. Selina Petosa from Ad Age predicts “an expansive shift toward native long-form content in the years ahead”. But of course, while many questions about the effectiveness of native advertising have yet to be answered, the medium’s viability is not precisely relevant to its ethical validity.

When it comes to understanding the emergence of native advertising, it helps to have some historical context. Modern advertising grew out of innovative techniques introduced by the tobacco industry in the 1920s, particularly the campaigns of Edward Bernays, a man widely considered the founder of “Madison Avenue” style advertising. He is also known as the father of public relations. The “Madison Avenue” style is typified by creative use of language and graphics employed specifically to manipulate people’s emotions on a mass scale, usually for the purpose of promoting a product or service. Bernays drew heavily from the works of his uncle, Sigmund Freud, whose psychoanalytic theory provided Bernays with a framework for his methods. He was also influenced by crowd psychology, a fledgling field of study at the time. While Bernays held the position that “herd instinct” had created a dangerous tendency in society to be prone to manipulation, he also maintained that such manipulation was necessary. By understanding how this group behavior worked, Bernays hypothesized, one could manipulate people without their conscious knowledge of what was happening. Such is the mentality of the modern school of advertising. With this in mind, it’s really quite obvious that native advertising isn’t a trade anomaly. The practice is the logical end of such a mentality.

One essential factor in the success of a native ad piece is its ability to be indistinguishable from the editorial content with which it is featured. While The New York Times piece conceivably passed in this capacity, an earlier ad featured in the Washington Post arguably missed the mark. In fact, Kevin Getzel, the paper’s chief financial officer, made the point that the piece was “designed and labeled to be differentiated from the newsroom-generated journalism on the page, hence the coloration and slugging”. But this brings up an interesting, albeit admittedly semantic point. Is an ad truly native if it is clearly distinct from the editorial content? Does the mere virtue of its advertorial nature make the content invalid from an editorial perspective? While—yes—The Washington Post ad is clearly labeled, it occupies space formerly reserved for editorial content, thereby undermining the entire reason the newspaper exists in the first place.

While native advertising may mark an important advancement in the way we use communication media to market products, it may also be the death knell of journalism as we know it. One should note that the Times and Post pieces are just the initial ventures of major newspapers into the business of native. If they prove successful, it’s a given that more and more companies will get in the game. An arms race in the native playground may lead to an increase in the quality of advertorial content. Though this point may be lost upon old-fashioned, honest-to-goodness journalists, whose editorial space will surely dwindle if native becomes mainstream. But perhaps there is a threshold beyond which consumers will not tolerate the overstepping of advertisements into editorial space. After all, people can only take so many ads before abandoning a product wholesale. As analytics evolve, this metric may present itself in interesting ways. It very well could be that native, if proven effective, will only serve to replace traditional forms of advertising rather than continuing to encroach upon editorial territory. That is a slightly less bleak outcome, but still one that does not address the underlying problem.

Native ad champions tout one essential perk of the practice: it’s a mode of advertising that does not interrupt the medium on which it’s featured. The ad does not disrupt the user experience. Unfortunately, that very perk presents an ethical problem. While a newspaper’s primary function is to inform, native advertising is inherently deceptive: if a piece of native content is successful, the consumer is fooled about the work’s intent, which is to promote a brand. Of course, consumers aren’t so easily fooled. Generally, one knows an advertisement when one sees it. There are numerous, rather blatant indicators that give it away. However, as the practice of native advertising becomes commonplace, advertisers will become more adept at masking the true nature of the content. The price, unfortunately, is a gradual undermining of editorial works and, sadly, journalism as a whole.

It is perhaps a sad reality in today’s socioeconomic environment that a reputable paper like The New York Times requires new streams of revenue that categorically undermine the editorial content, i.e. the very reason people buy it in the first place. But this doleful exchange isn’t anything new, nor is the controversy surrounding native advertising. In fact, the Federal Trade Commission settled its first case on the matter in 1917, in which an ad for a vacuum cleaner was presented as a favorable review. Furthermore, most of the major points against native advertising, at least in relation to journalism, are extensions of the same ideological points one can make against advertising in general. When we come to think about the ethics of native advertising, one inevitably reaches a paradoxical juncture: how do we reconcile a potentially hurtful practice, in which the manipulation of others is so easily lauded? If a certain level of deception is required to promote a brand or a product, is it really worth promoting?

 

David Stockdale

David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch.  Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at dstock3@gmail.com, and his URL is http://davidstockdale.tumblr.com/.

Should Emoji Hold Up in Court?

 

A smiling pile of poop. A rainbow. A cat with hearts for eyes.

Do these belong in the courtroom?

Because they’ve been showing up there lately. Two months ago, 17-year-old Osiris Aristy was arrested for making a threat against the New York Police Department. His threat included a police officer emoji – a tiny cartoon symbol you can text someone; in this case, a man wearing a police cap – and three gun emoji. The Brooklyn resident posted them to his Facebook profile.

Of course, that wasn’t his entire message. Aristy also wrote “[Black man] run up on me, he gunna get blown down,” and, hours earlier, had posted a photo of a gun with the caption “feel like katxhin a body right now.” That was enough to get him charged with making a terroristic threat.

Aristy’s charges were dropped last month, but it’s pretty significant that he was charged at all. He’s not the first, either. Emoji have been used as evidence in a handful of recent court cases, raising the question: Is it ethical to use these tiny, seemingly harmless cartoons as evidence, especially when their meaning can be so murky?

Basically, do emoji count?

To answer that, let’s take a look at their prevalence, usage, and meaning.

First, emoji are definitely part of language. Currently, statistics about how many emoji have been texted are not available; however, on Twitter, people use them more often than hyphens, the number 5, or capital V. A dizzying real-time emoji tracker reports that the most popular emoji on Twitter, a face crying tears of joy, has been used more than 626 million times. Since there are some 720 emoji, total use on Twitter alone is probably in the hundreds of billions.

If their popularity alone isn’t proof enough that they’ve become part of the lexicon, well, ask officials at the Library of Congress and the Oxford English Dictionary. The former accepted a copy of Moby Dick made up completely of emoji in 2013. Two years earlier, the heart symbol became part of the Oxford English Dictionary, meaning “to heart” or “to love.” You can try to brush emoji off as fringe teen slang – their main home is on an iPhone screen, after all – but they’re increasingly becoming mainstream. Legal experts even say emoji are covered under our First Amendment right to freedom of speech.

So whether we like it or not, a cartoon pizza slice now counts as language.

Make no mistake, emoji are open to interpretation. For instance, a winky face can be flirtatious, or, at the end of a text that reads “I hate you,” can signal that someone is joking. Some think the emoji of praying hands is actually a high five. You have to consider the relationship between the sender and the receiver, the context of the message, and typical use of the emoji itself. (A gun is less ambiguous than a wink.) Thanks to irony, sarcasm, and plain ol’ variations in usage, language is no straightforward thing.

So yes, it would be ridiculous to base an entire court case on emoji. As Wired writer Julia Greenberg writes, “None of these cases [that mentioned emoji] relied solely on the emoji, of course. Evidence, arrests, and prosecutions are far more complicated than that.”

But sometime soon, courts will have to answer Eli Hager, who asks on the criminal justice news site The Marshall Project, “[Are] emoji significant and unambiguous enough to be presented to the jury the same way the words are? Are some emoji significant, but others, not?” The gun emoji, for instance, seems especially incriminating and straightforward.

In the case of 22-year-old Christopher Levi Jackson, however, using the gun emoji a whopping 27 times wasn’t enough to get him charged with murder. A few hours after someone shot and killed 25-year-old Travis Mitchell, Jackson texted Mitchell’s sister, “It’s a chess game. I’m up two moves a head … try again. Bang bang, bang,” followed by 27 gun emoji. Detectives on the case believed that Jackson’s text meant Mitchell wasn’t the intended victim, and that Jackson planned to kill whoever was. Police arrested Jackson for first-degree murder, but without further evidence, they had to release him.

And that’s how it should be. As D.C. attorney John Elwood told Buzzfeed, “Words that could be construed as threatening are enough to make an arrest, but they shouldn’t be enough to convict someone.” Emoji should be examined with as much context as possible, in light of the sender’s criminal record, past behavior, and other factors. Time will tell how much value juries place on them.

For now, the emoji is in its infancy; words are still our main units of language. You wouldn’t build a court case on body language, even though it’s a huge part of communication (55 percent, according to researcher Albert Mehrabian, who came up with the famous “93 percent of language is nonverbal” statistic). There isn’t a benchmark yet for emoji use in court. Lawyers can’t even agree on whether emoji should be read or shown in the courtroom.

In a recent court case, for example, the defendant’s lawyer argued that emoji should be included as evidence because they shed light on the rest of the message. The lawyer not only asked the judge to include an emoji after a particular statement, but to show the emoji to the jury. In a letter to the judge, the lawyer argued that describing emoji aloud wasn’t sufficient because they “cannot be reliably or adequately conveyed orally.”

The bottom line is, just because you can say or text something doesn’t mean it’s free of consequences. Emoji are part of how we communicate, so they have repercussions. At the risk of sounding like an after-school special, think before you text.

After all, interpretation often matters more than intention. Do you really want to wind up in jail for threatening arson because you used 12 angry faces and 16 fire emoji? Didn’t think so. Because even if you’re  , jail is .

 


Holly Richmond

Holly Richmond is a Portland writer. Learn more at hollyrichmond.com.

Mobile Payment Apps and the Price of Convenience

 

Wallets will be a relic of the past if the forces behind mobile payment apps have their way. Services like Apple Pay, Samsung Pay, Google Wallet, and seemingly countless others aim to nullify the need for physical credit and debit cards – the payment systems that made cash and checks obsolete just a couple of decades ago.

On the surface, paying for things with your phone seems convenient, but not absolutely necessary. It’s not really a hassle to pull out a credit card while waiting in line at the grocery store. But the smartphone already encompasses so many aspects of everyday life. Why not consolidate one more feature onto it?

Let’s walk through a hypothetical day: You’re out for a run in the morning, listening to music via headphones attached to the phone strapped to your bicep. You stop to purchase a cold water bottle at a convenience store. No need to store a card or cash in your waistband or tucked in your shoe. Tapping the phone against an electromagnetic reader at the register completes the purchase. Later, at work, you owe a coworker $20, but have no cash. But you don’t have to find an ATM during lunch: Instead, you send the money over Venmo, an app that’s hooked to your bank account. The colleague receives the payment instantly. That night, you buy your mom a birthday present from an online store. Instead of fishing out your credit card to type in its number, expiration date, and so on, you click one button that’s already hooked to your Google Wallet account. Done.

Acknowledging the potential convenience of mobile payments is relatively easy: They’re fast and they prevent you from having to carry around as much stuff. But there are ethical issues the industry needs to resolve before consumers will completely accept the concept, notably those related to accessibility and privacy: First, how easy is it to use these apps in the real world? Do they work anywhere, for anyone with any phone? Second, how do the companies behind these apps protect consumers’ private payment-related data? And how are they using that data for their own purposes?

Let’s focus on a major player in the mobile payments sphere, Google Wallet. The digital wallet app holds not only a person’s credit and debit card information, but also gift and loyalty cards. It can also be used to make purchases at online stores partnered with Google or to send money peer-to-peer. People make purchases in stores using near field communication, or NFC, a technology that uses electromagnetic induction to let a device communicate or exchange data with another device.

Some background: The Google Wallet FAQ page says that the app works “anywhere MasterCard PayPass is accepted,” which includes “millions of merchant locations in the United States.” Searching MasterCard’s locator for grocery stores within one Chicago Zip code reveals 42 locations that accept this payment method, including chains and independent mom-and-pops, but not every chain or mom-and-pop grocery store. A user would want to be familiar with the locations that accept the service and those that don’t, or else, carry around backup payment options – but there goes some of the convenience factor.

So Google Wallet does not work everywhere. But a bigger issue is that it doesn’t work for everyone. In fact, to use the app in stores users must have NFC-enabled Android phones – not the cheap ones or the older models. Consider CNET’s list of the best Android phones of 2015. The top-rated, NFC-capable Galaxy Note 4 retails for about $700. The “best budget” Android, the Motorola Moto G, costs about $200, but does not have NFC. To enjoy the benefits of Google Wallet, people must be both willing and able to buy an expensive phone that has the required technology.

If apps truly become the preferred mode for payment, those who won’t or can’t invest in pricier phones may be at a disadvantage – even locked out of certain stores if apps become not just the preferred option, but the only option they accept for purchases. Or, less drastic, maybe a shop offers special discounts to customers who use a payment app. Is it fair to link something as ubiquitous as shopping to a technology that is not attainable to all?

These scenarios are not likely to become realities anytime soon. In a 2014 Deloitte survey, just 7 percent of consumers in the United States said they had used their smartphone to make an in-store payment. About half of the respondents said that they didn’t even know if their phone had NFC capabilities. Another survey, conducted by MEF, a global trade association focused on mobile commerce, reported that 79 percent of U.S. respondents were not comfortable sharing their personal information over an app and 35 percent said they don’t believe mobile payment systems are secure.

Despite these concerns, the security features of payment apps may actually be their primary perk. When paying with Google Wallet, a consumer’s stored credit and debit card data is not passed on to the merchant. This is a significant plus as cyberattacks that compromise customer accounts become more common. Google Wallet also provides fraud protection, lets users lock the app with a security PIN, and allows people to remotely disable the app should a phone be lost or stolen.
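
The idea behind keeping card data away from the merchant can be sketched in a few lines. This is only a conceptual illustration, not Google’s actual implementation: the wallet hands the merchant an opaque, single-use token, and only the payment network can map that token back to the real card.

    import secrets

    vault = {}  # token -> real card number, held by the payment network, not the store

    def tokenize(card_number: str) -> str:
        """Issue a one-time token the merchant can submit instead of the card."""
        token = secrets.token_hex(8)
        vault[token] = card_number
        return token

    def settle(token: str) -> str:
        """Only the network side can resolve the token back to the card."""
        return vault.pop(token)

    tok = tokenize("4111 1111 1111 1111")   # a standard test card number
    print("merchant sees:", tok)            # no card digits in sight
    print("network charges card:", settle(tok))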

Consumers can rest assured that their personal information is probably safe from hackers and thieves. It’s not, however, safe from Google’s grasp. It’s well known that the company has the ability to monitor surfers' online behavior – site visits, locations, and so on. With Google Wallet, it has access to arguably some of the most personal of personal data: your finances.

What does the company do with the data it gathers from its payment app? Google Wallet’s privacy notice states that any information collected “is shared with our affiliates, meaning other companies owned and controlled by Google Inc.” Consumers do have the option to opt out of this automatic sharing within their privacy settings, but it’s not an obvious option. Google’s general privacy policy also applies to Wallet. Even with these documents, it’s not always clear how the company uses the data it gets from the app. For instance, where can a user find an exhaustive list of those Google affiliates?

Another mobile payment app, Apple Pay, has many of the same pros and cons as Google Wallet. It requires NFC technology, which works only on the iPhone 6 and iPhone 6 Plus ($649 and $749 without a contract). But it also provides security benefits that surpass those of traditional payment methods. Apple Pay even has a security feature that Google does not: a fingerprint identification sensor called Touch ID.

As for privacy, Apple Pay does not have a unique policy. Gathered data is treated like that from any other Apple product. The company’s overarching privacy policy, like Google’s, notes that personal information is used “to help us create, develop, operate, deliver, and improve our products, services, content and advertising, and for loss prevention and anti-fraud purposes.” It’s also shared with affiliates.

Ultimately, most people won’t make a conscious decision to choose Google Wallet over Apple Pay. Instead, it will come down to the phone and carrier they already have. Those who use neither Google nor Apple will soon have another option: In February, Samsung acquired LoopPay, which allows the smartphone maker to launch its own mobile payment system, one that also features a fingerprint reader. Samsung Pay will use a different kind of technology, known as Magnetic Secure Transmission, which lets the phone transmit a signal that the magnetic stripe readers already on credit terminals can pick up, as if a card had been swiped.

All of this competition in the mobile payment sphere may be confusing for consumers, many of whom have never used one of these apps. But the competition among Apple, Google, Samsung, and many others is actually making mobile payment apps more secure as each company tries to one-up the others in an area they believe is here to stay.

Is it inevitable that consumers will eventually embrace it, too? People already use their phones to send highly compromising selfies, to write sensitive business emails, and to bank online. Uploading all your credit card and checking account information to Google or Apple accounts doesn’t seem so dangerous in that light. Indeed, mobile payment transaction values doubled between 2012 and 2013, according to a report by eMarketer. It’s already a multibillion-dollar industry despite hesitance from some consumers and stores.

Last fall, CVS made headlines when it turned off NFC readers in its credit card terminals, disabling the ability for customers to use Apple Pay and other mobile payment apps at its stores. Instead, CVS joined a consortium of some 40 merchants led by Walmart called Merchant Customer Exchange to develop their own mobile payment app, CurrentC. While the service is accessible to anyone with a smartphone, it has its own set of issues.

For starters, CurrentC is hooked up to consumers’ bank accounts, not their credit cards, in order to allow companies to sidestep the 2 to 3 percent credit card processing fee that they have to pay, say, Visa and MasterCard when customers use those forms of payment. But will Walmart and CVS pass those savings onto their shoppers or minimum-wage employees?

Another problem for consumers is that these stores feature CurrentC exclusively, a move that seems self-serving. Walt Mossberg, of tech news site Re/code, summed it up by writing: “I simply believe that people who respect their customers and have faith in their own technology products should welcome competition, and that consumer choice should be a paramount value in retailing.” Customers seem to agree: Reviews in Apple’s App Store and in Google Play give the CurrentC app an average one star rating out of five, noting faulty security features and cumbersome usability.

It’s reasonable that other companies would want to enter the mobile payment app game – why should the big names win the whole market? Right now, they don’t. There are many other relatively well-known mobile payment apps, including PayPal, Venmo, LevelUp, Square, and store-specific ones like the Starbucks App, plus many less prevalent options. But even with all these choices, it’s not yet easy to forgo cash and plastic completely.

While mobile payment apps may be safer than using credit cards, ethical issues remain. They may perpetuate a dichotomy between those who can afford compatible phones, and those who can’t. The apps also carry the same privacy concerns that all digital products do: They enable companies to track deeply personal information. The details of how they are using this data now, and how they will use it later as the technology evolves, are unsettlingly unclear.

And even if every logistical and ethical issue is eventually reconciled, a smartphone can still break or run out of battery.

 

The Perils of Predicting Epidemics

 

Digital disease detection (DDD) has been gaining momentum over the last fifteen years. The internet has become a resource for clinicians and health officials looking for new ways to determine the strength and breadth of diseases and to communicate this information to the general public. Increasingly, disease-related data is being collected and disseminated through both formal and informal channels, from chatrooms and blogs to web-search analyses. This shift to web-based information mining will fundamentally change how public health information is reported. Recent adopters of DDD hope this information will provide early warnings so that health precautions can be taken promptly. These alerts could potentially prevent epidemics and save more lives.

Like many digital technologies, the burgeoning field of DDD, also called infodemiology, is expanding rapidly. The assumption is that this new and more personal way of collecting health information and disease dispersal rates will result in a number of public health benefits. These include enhanced detection timeliness, faster response and greater health and safety readiness, and a reduction in fatalities. However, these supposed assets of DDD bring with them a profusion of ethical challenges in regard to privacy, accuracy, verification, and compliance.

Before addressing the ethical ramifications of DDD, we need a firm understanding of the process of collecting and disseminating data. There are several major platforms currently in use, with many more in the development and testing stages. ProMED-mail, arguably the oldest DDD player, was established in 1994 as an email service reaching over 188 countries. It disseminates reports of disease outbreaks by retrieving online data from blogs, emails, and various online local news outlets. The World Health Organization’s (WHO’s) Global Public Health Intelligence Network (GPHIN) is a similar news-crawling service created soon after, in 1997. Its software allows GPHIN to collect data every 15 minutes. ProMED and GPHIN were instrumental in keeping health officials well informed during the 2002 SARS outbreak in China.

One of the most in-the-news DDD names, Google Flu Trends, uses aggregated Google search data to predict global trends in influenza cases. Famously, Google Flu overpredicted flu severity in 2009, when public confusion surrounding H1N1 caused faulty data generation. One newer system, a dengue tracker developed for Sri Lanka in 2013, relies on a computer model that combines information on current dengue cases, weather patterns, and mosquito data to predict the spread of the disease. The information, in the form of hotspot maps, is disseminated to public health workers and laypeople alike. This model encourages the public to report disease symptoms, mosquito breeding sites, and mosquito activity. Citizens can submit the information on cellphone-friendly reporting forms, which also capture their location.

Once an individual has made a report, she receives a health alert tailored to her location that includes helpful information, which can easily be shared on Twitter, Facebook, and other social networks. Challenges identified by the creators of the program include poor participation because of a reluctance to share disease information; mislabeling dengue as a disease with similar symptoms; and reports being influenced by demographics, limiting the effectiveness of the data.

Most recently, researchers at Penn State University developed a system that uses Twitter streams along with medical records to see if they could correctly predict who had the flu. They found that, of the people whose Twitter accounts they examined, about half discussed their illness on Twitter. For the rest, the team mined the data for subtle clues about the users’ health status. For example, if users posted that they were going to a party, they were less likely to be sick, but if their tweeting rate declined, there was a higher likelihood of illness. The authors of the study reported that, using tweets alone, they could predict the correct medical diagnosis 99 percent of the time. They are currently working out how to apply this method to the spread of HIV. The idea of using Twitter-based data mining to track disease spread introduces the possibility of labeling and tracking individuals based on casual public comments. This type of tracking raises critical ethical questions because of the stigma attached to HIV and other sensitive diseases.
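
To make the approach concrete, here is a toy sketch of how such weak signals might feed a classifier. It is not the Penn State pipeline; the features, labels and numbers are invented for illustration, and a real system would use far richer text features and medical-record ground truth.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-user features mined from a Twitter stream:
    # [mentions symptoms, mentions social plans, relative drop in tweet rate]
    X = np.array([
        [1, 0, 0.6],   # talks about feeling ill, much quieter than usual
        [0, 1, 0.0],   # off to a party, tweeting normally
        [0, 0, 0.5],
        [1, 0, 0.7],
        [0, 1, 0.1],
        [0, 0, 0.0],
    ])
    y = np.array([1, 0, 1, 1, 0, 0])   # 1 = flu confirmed later (made-up labels)

    model = LogisticRegression().fit(X, y)
    new_user = [[0, 0, 0.8]]           # no explicit mention of illness, but gone quiet
    print("estimated P(flu):", model.predict_proba(new_user)[0, 1])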

Issues arising from privacy standards in a globally shared model

One of the greatest concerns to present itself in DDD is the lack of a level playing field among all of the nations that use this data. Some countries have stricter privacy and transparency laws than others. How will this be reconciled among nations if the agency collecting and disseminating the data resides in a country that does not place a high value on protecting personal information? The difference between current uses for “big data” and DDD is that data for DDD is being used for a common, public good and not for the benefit of a corporation or individual—yet. As DDD technology becomes more viable, plenty of corporate players will evolve to capitalize on the trend, potentially taking it out of the nonprofit arena and placing it where there is an emphasis on capitalistic competition.

A holistic approach to these issues is needed. Initially, there should be a global governance system that ensures the privacy of all individuals whose data is used for public health reasons, as well as an emphasis on preserving the data for public health use only. Global governance would protect privacy throughout all areas of the world and preclude international corporations from using the data in ways other than initially intended. There should also be some level of participatory agreement or consent before data of a private nature is shared and participants' identity should be protected.

Issues arising from incorrect data

DDD relies on spatial analysis of cases in both data collection and outbreak reporting, but spatial event data has been found to be widely inaccurate due to the geocoding process. Spatial analysis entails monitoring cases within a given geographic space and looking for discernible patterns from which disease spread probability can be extrapolated. Geocoding supplements spatial data by matching geographic coordinates with an address, a postal code or another location identifier in order to pinpoint specific outbreak locations. The accuracy of the data varies based on population density and data quality. Even a small number of errors of this nature—about 10 percent of the records being incorrectly geocoded—can result in incorrect disease distribution maps. In addition, when DDD entities concerned with privacy try to de-identify individuals while compiling data sets, there is a noted trend toward data inaccuracy. Both the technology involved in geocoding and that used to cloak individual identities within the mass of data need to be improved and perfected before we can be entirely reliant on DDD predictions and safe from privacy invasion.
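
The fragility of this step is easy to see with a small sketch using the open-source geopy library and the free Nominatim geocoder (stand-ins here; production DDD systems rely on their own gazetteers and commercial geocoding services).

    from geopy.geocoders import Nominatim

    geolocator = Nominatim(user_agent="ddd-geocoding-demo")

    # A clean, well-formed query usually resolves to a usable point...
    hit = geolocator.geocode("General Hospital, Colombo, Sri Lanka")
    if hit is not None:
        print(hit.latitude, hit.longitude)

    # ...but a vague or misspelled report may return nothing, or the wrong
    # place entirely, and each such record distorts the outbreak map.
    miss = geolocator.geocode("nr the old junction, Colombo dist.")
    print(miss)   # often None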

Issues arising from incorrect and delayed outbreak notification

Incorrect data can lead to erroneous public notification or warning and unnecessary panic among the populace. If the error is not caught early in the process, it can result in a strain on the public health system as people use clinical personnel, medicines, and other resources unnecessarily. Disseminating incorrect information can also have a serious impact on the public’s perception of an industry and the government, which can result in disregard of subsequent notices. Failsafe and validation requirements need to be in place to prevent wasting public health resources on invalid predictions and to curtail widespread panic.

Not surprisingly, availability of data also influences the speed of information dissemination. Countries with a free press and a high proportion of internet users were able to get information on outbreaks to the public in a more timely manner, but even these countries experienced a 12-17 day lag between data collection and outbreak announcement. Partially free-press areas (17-24 days) and countries with no press freedom (24-37 days) experienced the greatest lags in outbreak reporting. The countries with the greatest lags also tended to embrace government-produced propaganda that clouds reporting of emerging outbreaks. Both lag time and ambiguous reporting can result in unsuccessful or inaccurate information spread.

DDD has a lot of structure and regulation to undergo before it can be considered a real, rather than rogue, technology. Most people are familiar with the leper colonies of the past, where infected persons were shunned, and even harmed, for the risk they represented to society. A global communal entity that is consistently using personal information to track potentially life-threatening illnesses can identify and segregate sick individuals to their detriment. There must be a watchdog organization created to oversee the correct and private handling of personal data. There must also be significant oversight and monitoring, including checks and rechecks, before warnings and predictions are released publicly. Strict empirical control is needed to prevent false alarms and widespread panic that can threaten and endanger lives. In the event of a false alarm, there should be a standard plan available to world health officials that would allow them to contain the damage and offer immediate remediation to areas negatively impacted. Finally, there should be transparency for consumers who are active online. They should be allowed to choose whether to participate in DDD programs and have options regarding consent for data usage. DDD represents a leap forward in the science of predicting and, ultimately, preventing local and global health disasters, but it should not come at the expense of privacy.

 

Nikki Williams

Bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

The Right to Be Forgotten: A Choice Between Privacy and Free Speech?

 

Last month, a video emerged online of a University of Mississippi student biting off the head of a hamster during a spring break party. Soon enough, the young man was identified and has since withdrawn from the university, possibly facing animal cruelty charges. Should his alcohol-fueled spring break misdeed cause him embarrassment and woe decades later? Should future employers, lovers, family members or potential in-laws be able to dredge up this incident? Or does the young man have a right to be forgotten and to define himself as someone other than that guy who bit off a hamster’s head?

While the European Union (EU) has, for more than a decade, embraced the view that individuals should have the ability to remove certain personal information from the internet, the United States has been slow to adopt the concept.

In May of 2014, a ruling by the European Court of Justice (ECJ) brought the topic to the forefront. The fact that the defendant was Google Spain, a subsidiary of Google Inc., a company based in the United States, brought the conversation close to home for Americans. In the ECJ case, the plaintiff, a Spanish gentleman whose home foreclosure (since reconciled) had been publicized online, asked for the removal of the content “since it was no longer relevant.” Previously, the Spanish Data Protection Agency had denied his request, determining that the content was legal and accurate. The ECJ disagreed, however, concluding that the data, while lawful, was “…inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed and in the light of the time that has elapsed.” Its decision concluded that search engines are responsible for the links they point to, which forced Google to comply with EU data privacy laws and remove links pointing to pages where the gentleman’s information exists. The ECJ further stipulated that Google was required to allow others to request information removal. For clarification, the official decision noted that future requests for removal could be denied if retaining the information were justified by a significant question of public safety or interest. That means that the data controller, in this case Google, would be required to check all requests for deletion against this test.

Wikipedia founder Jimmy Wales—who is serving on a Google advisory committee that will help the data giant determine which removal requests will be allowed—is outspoken in his opinion of the ruling. He believes that allowing an individual to dictate which links are removable is wrong. He states: “In the case of truthful, non-defamatory information obtained legally, I think there is no possibility of any defensible right to censor what other people are saying. It is important to avoid language like data because we aren’t talking about data—we are talking about the suppression of knowledge.” In most cases, this “suppression of knowledge” is requested by individuals concerned with covering up nefarious activities. After Google’s request-for-removal form became available online, the company received over 12,000 requests for data removal in one day. Since the form was published, over 250,000 links have been approved for removal. In a bold attempt to champion journalistic freedom in the face of at least some of these removed links, the British news source The Daily Telegraph is maintaining a list of Telegraph stories that have been removed from search results and making it available on its site.

Mr. Wales’ concerns about censorship are justified, as there seems to be plenty of gray area when determining which links should stay and which should go. According to the BBC, some of those first requests to be forgotten included a convicted pedophile’s appeal for links to pages regarding his conviction to be removed; a doctor who asked that links to negative patient reviews be deleted; and a politician running for re-election who wanted links to an article referencing his conduct in office deleted. Most people would agree that in the cases above, the information should not be removed from search results. But what about shocking, and far more juvenile offenses such as the hamster-biting fraternity boy mentioned earlier?

Lawyer and former Director of Global Public Policy at Google, Andrew McLaughlin, finds the ECJ ruling fear-inducing. In discussing it, he imagines a dystopian political system where the well-connected rewrite history to whitewash their misconduct while collecting armloads of personal information and surveillance on private individuals. This reliance on maintaining the status quo and discouraging individual thought was rife in the totalitarian governments of pre- and post-World War I Europe. Lack of privacy protection makes it easy for the elite or politically powerful to coerce individuals into silence. It also removes people’s ability to reference past events, a tactic used by dictators and authoritarians throughout history and part of the reason why the EU is so protective of privacy rights.

American hesitancy toward the right to be forgotten finds its basis in the tension between privacy and freedom of speech. Our nation’s political and cultural background has predisposed us to be more concerned with protecting speech than privacy. However, there are certainly strong advocates of privacy protection in the U.S. They argue that in the past, adverse events in a person’s life would be expunged after a certain time period—criminal records would be sealed; credit reports cleared of bad debt; bankruptcies removed—so that individuals seeking to “start over” had the opportunity to move beyond earlier bad decisions or regrettable events.

Unfortunately, the Internet has ensured that if such events are posted online, they can be found via a simple search far beyond the period that the records have relevancy. This ability to store and retrieve data indefinitely can significantly impact the ability of people to recover, move forward, or rehabilitate their lives. What’s more, with advances in big data collection and data mining techniques, there is a possibility that every “Like” button you click on Facebook or every tweet or retweet can be aggregated and assembled to create a road map of your actions and opinions over time. Privacy protectionists try to win points with groups that advocate for free speech by claiming that disregard for protections provided by right to be forgotten legislation would be a deterrent to free speech. For example, knowing that links to search results for politically unpopular views will be cached for an indefinite amount of time could limit participation in political activism as people may worry about posting views that clash with the mainstream.

On the other hand, opponents of the right to be forgotten claim that this type of legislation allows discretionary censorship of individual information that may be important to consumers or individuals making professional or personal choices. This “cleaning” of data could preclude people from being able to protect themselves from fraud and personal injury when researching potential employees, service providers, friends, and even family. Visionary author George Orwell famously stated: “He who controls the past controls the future.” Allowing revisionists to “sanitize” events, reviews, and personal histories to reflect the reality that they want pushed to the forefront may not be in the best interest of the public. With the adoption of right to be forgotten legislation, the state could allow individuals to remove links to factual information for their benefit or the benefit of others that they designate. Even though the editing of information is self-directed, it still amounts to censorship.

Questions that need to be answered as our consideration of the right to be forgotten law continues include: How much time must pass before relevance is determined? Who is going to make the call in weighing public interest and individual rights against one another when examining requests for deletions? Finally, there is also the issue of compliance and its cost to companies and consumers. Marc Dautlich, a lawyer at Pinsent Masons, recognizes the difficulty in asking search engines to manage the status of hundreds of thousands of requests. He asks: “If they get an appreciable volume of requests what are they going to do? Set up an entire industry sifting through the paperwork?” It’s a good question and one that the courts should consider closely when contemplating legislation.

So which is the more important right to protect—freedom of speech or privacy? I suggest we need both. We need privacy so we can voice our opinions free from political or personal reprisals. We should be free to speak our minds without fear that it may be held against us in perpetuity. People grow, change, and mature in their personal and professional lives. Something said or done as a child or a young adult probably does not reflect the skills, thoughts or opinions of the same person twenty years later. We should not allow youthful indiscretions to be a blot on someone’s character forever. But we also need freedom from censorship in order to operate in a truly enfranchised society. For example, allowing professionals to cherry-pick and remove reviews from search results or to eliminate evidence of incidents of wrongdoing is to suppress the lawful truth. Some behaviors come within the scope of public interest, and records regarding them should be retained for reasons of public safety. Another type of information, the kind that is gossipy, hateful, libelous, slanderous, defamatory or simply outdated, is a different matter, and individuals should be allowed to remove this type of data from public view.

 

Nikki Williams

Bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

High-resolution satellites: are our privacy expectations too high?

 

As you read this, a satellite is orbiting above you with a camera sufficiently powerful to identify ‘manholes and mailboxes.’ It has been hard at work for over six months, and DigitalGlobe, a commercial vendor of space imagery, can now sell that sharper data to companies like Google. With a new publishable resolution of just 31 cm (12 inches), there is the chance that, regardless of Google's blurring of faces and registration marks, you could easily be recognized as you are snapped going about your daily business. Is this loss of privacy a reasonable price to pay for the benefits that such technological advances bring?

Imagery for the greater good

Advocates of the introduction of high-resolution satellites—and crucially of the legal amendments that enable the images to be shared—cite humanitarian and global benefits. There are very few developments in technology that do not advance the greater good in some way.

Our environment is a clear winner: Observations and inspections from space could help to protect crucial and fragile ecosystems and deepen our understanding of natural disasters or local weather anomalies. Disaster relief suddenly becomes swifter and more effective when up-to-date and detailed satellite imagery reveals changes in the terrain, specific locations to target, and the needs of a population before response efforts are implemented. In a study of the Caribbean, researchers from the University of Chester evaluated the use of high-resolution imagery in the understanding and control of landslides in tropical regions. The authors concluded that there is great potential to reduce loss of life, major economic losses, social disruption and damage to public and private properties: Quality satellite images allow professionals to identify areas of instability before landslides occur, and the ability to map new landslides quickly and efficiently after a disaster also improves the emergency response. Interestingly, the images are actually much cheaper and quicker to acquire than existing sources – an unexpected benefit.

The satellite data that builds our understanding of economic activity on our planet, the movement of human capital, and the effects on our landscapes, towns, and cities will also improve. In March 2015, the University of Minnesota became one of the first institutions to gain access to the Digital Basecamp, an online map and database of this current, high-resolution satellite imagery of the globe, through the DigitalGlobe Foundation. They will be taking this resource forward in teaching and research across disciplines, building our knowledge of the globe. Add to this tool the news of Google's acquisition of Skybox satellites last year, and there is the real possibility of developing an ‘Earth Cloud’ – the equivalent of the planet's Instagram feed, holding a mirror up to its daily life; a rich seam of big data ready to be mined.

The Privacy Dilemma

Such exciting possibilities: Personally, I'm looking forward to seeing my own backyard in higher resolution; I can just make out my kids' bright yellow space hopper on the grass on Google Earth, and I'm disappointed when the screen flicks to an out-of-date (though higher resolution) Street View. This is born of the same sense of wonder that has me staring out of plane windows trying to identify ground features, getting a sense of place as I ride above the earth. But what about the minutiae of my neighbours' properties and daily activities, or the land management practices of the farmer whose fields start just across the street? Would they believe someone like me to be exhibiting natural curiosity or sinister voyeurism? Does the wider population have suspicions of the use to which this imagery could be put?

There is a growing body of research around personal feelings about satellite monitoring, but as with many subjective topics, declared opinion is not always borne out by behaviour. According to research conducted at University College London, 58 percent of Australian farmers and 75 percent of U.K. farmers agreed that satellite monitoring could be an invasion of privacy. However, these are the same farmers whose European farm subsidy inspections are very likely to have been carried out using satellite data, speeding up the payment process and tightening compliance. In 2010, satellites conducted 70 percent of the total required controls on farm payments in the EU, and higher resolution images would remove the need for additional drone surveillance, which was under test at the same time. Although farmers may express resistance, the expectation of monitoring changed their behaviour, as UCL researcher Ray Purdy told the Sydney Morning Herald's reporters at the time: “It is the expectation of being monitored that has improved compliance and reduced fraud levels in Britain.”

In common with most of us in this fast-moving digital world, British farmers are increasingly likely to store digital photos in the cloud, and share news with family and friends online through social media and other channels. Their movements are already tracked by some of the 1.85 million CCTV cameras in the U.K., with the average citizen caught on camera up to 70 times a day. This seems to indicate a conflict between the opinion that privacy is being compromised, and the willingness to embrace the benefits of the same technology in daily life. Should expectations of privacy move with the times, or will the introduction of higher resolution imagery throw the whole question into sharp focus?

In 2009, when Street View was launched in the U.K., protests of privacy breaches were made to the Information Commissioner's Office, the U.K.'s privacy regulator. Google applied a blanket pixelation of facial features and registration marks, to the extent that even the faces of celebrities on billboards were obscured, but despite this, some individuals were easily identified. One person who had been rehoused to escape domestic violence was pictured outside her new home, and there were, as one might expect, a number of people caught in compromising circumstances. The complaints were not upheld. It's true that since the initial U.K. ruling, Google's Street View has fallen afoul of other privacy actions in the U.K., across the U.S., and in Korea, Australia, Italy, France, and other states. However, these rulings involve data which are not image-based, such as Wi-Fi network information, ‘eavesdropping’ claims, and control or deletion of collected data. The principle enshrined in U.K. case law for photographic complaints is clear: The focus of an image must be on the individual for a breach of privacy to occur. Satellite sweeps are not focused on one person; therefore it's unlikely that, whatever your personal feelings may be, being recognizable on a satellite image constitutes any actionable invasion of privacy. However, with digital images available across the world, legal definitions of privacy can leave us feeling naked.

Privacy by intent: shades of grey

Our instincts are to classify that which we share willingly as public, and that which we choose not to share as private. Posting on social media is “public” (regardless of your privacy settings); saving images to the cloud is “private.” There is outrage when hackers publish things that we feel, at a visceral level, to be private – whatever social good might come from leaks and scoops. The WikiLeaks and Edward Snowden revelations, the Sony hacks, and other high-profile breaches sit firmly in this gray area: Information that was intended to be private has been shared, but sharing has changed bad practice and contributed to the common good. Which is morally right? How can we choose?

Our fundamental and declared discomfort over satellite imagery is rooted in this inherent black-and-white distinction. We have no control over what part of our lives may, as a result of this technology, become public. How, though, in an increasingly monitored world, where our lives are already played out in front of cameras with our knowledge and consent, can we justify the argument that our actions are private by intent? We are being hypocritical if we claim satellite evidence of our ‘private’ actions should not be published; we are being inconsiderate when we object to their inadvertent exposure in a satellite sweep that serves a much bigger agenda, one that could change our world for the better. Perhaps this is the time to realize that the world does not revolve around us, that bigger issues are at stake, and that the end justifies the means.

 

Kate Baucherel

Kate Baucherel BA(Hons) FCMA is a digital strategist specialising in emerging tech, particularly blockchain and distributed ledger technology. She is COO of City Web Consultants, working on the application of blockchain, AR/VR and machine learning, for blue chip clients in the UK and overseas. Kate’s first job was with an IBM business partner in Denver, back when the AS/400 was a really cool piece of hardware, and the World Wide Web didn’t exist. She has held senior technical and financial roles in businesses across multiple sectors and is a published author of non-fiction and sci-fi. Find out more at www.katebaucherel.com

Fraud in the Online Age

 

A headline in a recent installment of the New York Times Magazine's The Ethicists column asked: "Can I Hire Someone to Write My Resume and Cover Letter?" What followed was an interesting debate about the legitimacy of paying for this kind of service.

To me, the most illuminating comment came from Jack Shafer, a media writer at Politico, who weighed in: "Is it unethical for you to help your child with their homework? Yes, of course, it’s ethical to help them. It’s not ethical — it’s not actually helpful — for you to complete your children’s homework. The letter writer has enlisted a sort of surrogate parent to help them complete their homework, and I’m not comfortable with it. I don’t think that behavior is ethical or representational of the skills of the applicant."

I agree. If the first thing a potential employer sees from you was written and edited by someone else, then they have no insight into many crucial skills you should possess. Many of these services work by asking clients to fill out online surveys about their professional and academic backgrounds along with details about the job they are applying for. That is often followed by a phone interview with the writer chosen to work with them, and resumes and cover letters are created using that information. But if I am hiring someone, I want to see how well they can organize information. I want to know if they have problems with spelling and grammar. And in their cover letter I definitely want a sense of how compelling and descriptive they can be in advocating for themselves for the position. And if the applicant hasn't written and edited their own resume and cover letter, I would have no idea if they possess any of those abilities.

One could argue that these kinds of services are less ethically questionable if they are being used by an applicant who is not applying for a job that requires a significant amount of writing or editing. I would counter that by saying that almost all jobs these days require people to be able to communicate professionally and properly – whether you are applying to be a nurse or a car salesperson. And if someone else’s work is the first example a prospective employer sees of your ability to communicate, then they are being misled.

In his response, Shafer touches on an issue that is significantly more ethically troubling than resume and cover letter writing services – having a third party do your homework for you. Many of us knew people in high school or college who were happy to do your assignments – for a fee. But the digital revolution has taken that concept and expanded it exponentially.

Granted, there are many legitimate online tutoring services where students can get the help they need when they are struggling in certain areas. Tutoring someone is certainly ethical – even commendable – and a number of big-city public libraries offer free online tutoring. But do a Google search with the phrase "homework for hire" and you'll see some businesses that are very up front about their seemingly unethical services.

Some of the website names are anything but subtle and make bold claims about what they can accomplish. One such site is paymetodoyourhomework.com. On the front page of their site they boast: "Our experts test on average 92% better then nationwide averages on SAT I & II, AP, GRE & LSAT, Exams, Finals." What's hilarious about that statement is that the word "then" is used incorrectly – it should be "than."

I found almost the exact same situation on NoNeedToStudy.com. This business boldly, and again, ungrammatically, states: "Your Homework Gets Completed By Our Genius. Or Moneyback!" One assumes their resident genius didn't proofread their Web page.

Why isn't anyone doing anything to stamp out these services? Actually, at least 16 states have laws against this kind of practice. A post on the popular crowdsourcing website Fiverr.com details some of the legislation, including New York's legal stance on the issue: "Section 213-B of the New York Education Law criminalizes the unlawful sale of 'a dissertation, thesis, term paper, essay, report or other written assignment' as a Class B misdemeanor, punishable by incarceration of up to three months."

However, given the huge number of these businesses, even the threat of jail time seems to be doing very little to stop the proliferation of this kind of academic fraud.

But those digital homework services are not the worst ethical offenders. The most suspect and disturbing sites are the ones that will take your entire online classes for you. And with more and more people getting educated over the Internet, the fact that you could get an A in a course you never attended is frightening.

One such site is WeTakeYourClass.com, whose homepage states: "We take your online college classes for you and you get an 'A.'"

So when you combine all three of these services – resume/cover letter writing, homework completion and online class taking – the end result could be someone with great grades and a flawless resume whose actual knowledge and skills are anything but what they appear to be on paper. I agree that someone who uses an online resume/cover letter writing service is not as deceptive as someone who has their homework completed for them or has their online classes taken for them. But all of them are part of a pattern in which people use the Internet to buy work that is not theirs and then pass it off as their own. The individual services may not be equally suspect, but taken together they represent a very troubling trend.

I doubt these kinds of businesses are going away anytime soon. Actually, I can see them getting more and more sophisticated about avoiding detection as they become scrutinized more closely. So some of the onus for detecting this kind of fraud will fall on people who are interviewing job applicants. Personally, now that I know the extent of this industry, I will be much more vigilant about asking extremely detailed questions about people's purported education and skills.

 

John D. Thomas

John Thomas, the former editor of Playboy.com, has been a frequent contributor at the New York Times, Chicago Tribune and Playboy magazine.

The robots are coming! Ethical implications of robot journalism

 

Lamenting the incessant 24-hour media speculation over this year’s NBA MVP award, ESPN sports journalist Skip Bayless commented: “I know that’s the age we live in. It’s instantaneous news. We have to be ahead of it. We actually have to make the news before the news gets made.”

They’re no fortunetellers, but media executives are getting closer than ever to offering up-to-the-minute news. In the past few years, the Associated Press, Yahoo and other major news outlets have adopted new policies delegating a portion of their content to high-speed ‘robot journalism’ software. The revolutionary writing programs can produce articles at speeds of up to 2,000 stories per second. When it comes to pace of production and fact mining, writers and computers are worlds apart.

With new changes come new considerations. As they implement alternative technology, publishers need to re-evaluate their ethical writing standards and address the employment implications that automatic content generation will have on writers. The steps used to publish automated content as well as the roles traditional writers will play in the process will both have to be adjusted.

Writing software and traditional journalists find and apply data very differently. Robot journalism technology is undeniably more efficient at compiling and sifting through online information about current events. Quakebot, the geological writing platform used by the LA Times, automatically produces articles about tremors as soon as the U.S. Geological Survey issues notice of an earthquake that meets Quakebot’s minimum magnitude requirement.

Robot journalists can quickly generate articles by using algorithms that collect information from approved sources and plug them into templates. In the case of Quakebot, it’s safe to say that USGS data is typically accurate, but media outlets must remember that it is their ethical obligation to ensure that all sources are similarly trustworthy and their software is collecting the proper information. Ken Schwencke, the creator of Quakebot, admitted that his algorithm occasionally creates articles based on false alerts or technological problems that come up at the USGS. Making sure that editors review articles before they are published will allow the LA Times to provide consistently accurate information.
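
To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of template-filling pipeline described above. The field names, wording, and magnitude threshold are illustrative assumptions rather than the LA Times' actual Quakebot code; the point is simply that structured data from an approved source is validated and poured into prewritten sentences before an editor ever sees a draft.

```python
# A hypothetical sketch of template-based "robot journalism." The template,
# field names, and threshold are assumptions for illustration only.
from typing import Optional

TEMPLATE = (
    "A magnitude {magnitude} earthquake struck {distance} miles from "
    "{place} at {time}, according to the U.S. Geological Survey."
)

MIN_MAGNITUDE = 3.0  # assumed editorial threshold for generating a draft


def draft_article(alert: dict) -> Optional[str]:
    """Turn a structured alert from an approved source into a draft story.

    Returns None if the alert fails basic sanity checks, so a human editor
    only reviews drafts built from complete, in-scope data.
    """
    required = ("magnitude", "distance", "place", "time")
    if any(field not in alert for field in required):
        return None  # incomplete data: never publish a half-filled template
    if alert["magnitude"] < MIN_MAGNITUDE:
        return None  # below the magnitude the newsroom cares about
    return TEMPLATE.format(**alert)


if __name__ == "__main__":
    alert = {"magnitude": 4.2, "distance": 9,
             "place": "Westwood, California", "time": "6:25 a.m. Pacific"}
    print(draft_article(alert))
```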

Robot-generated content is usually statistically rich and short, and it is often used to inform medium-sized or local audiences about financial earnings updates, real estate descriptions, sports recaps and geological news. Given the focus on numbers and facts offered by outside sources, publishers who use robot software should make sure that content is not only correct but also retrieved in ways that do not infringe upon copyright laws. Media outlets, such as the AP, have built a reputation of legitimacy, and that reputation cannot be taken for granted. This is particularly important in early stages of robot journalism, as most readers are still unfamiliar with the workings and use of the writing software.

Digital journalism offers multiplatform content at incomparable speeds, but media representatives should acknowledge that it also puts the employment of some writers at risk. I cannot help but feel insecure comparing portfolios with robots that produce millions of articles each week. As much as I’d like to think the connection writers share with readers – i.e., our ‘it’ factor – is wholly irreplaceable, I don’t. Evolving technology often results in job cuts, an unfortunate reality for employees in myriad fields. Hopefully, managers will do their best to soften the blow by shifting work responsibilities to fact checking and source evaluation.

For years, the writing landscape has been changing, and automated content plays a role in addressing the assignments of modern journalists. Talented writers shine by engaging their audience in analytical pieces, but they are stretched thin by modern demands for rapid content in the form of articles and social media updates. While robot-generated articles aren’t analytically or stylistically imposing, what they lack in panache they make up for in their ability to satisfy the need for speed that underlies contemporary media. Writers who are primarily responsible for smaller-market articles and social media commentary may soon find their job opportunities shrinking.

Competing interests are bound to arise between traditional journalism and media front offices that need to produce quick content without fraying financial resources. It is not unreasonable to think that the introduction of writing software will ultimately threaten the work of writers. The media industry has been facing financial hardships for years. According to the American Society of News Editors, 16,200 full-time newspaper newsroom jobs were lost between 2012 and 2013. Since then, job losses have continued for those working at the Tribune Co. and Time Inc., as reported by the Pew Research Center.

It is possible for publishing managers to alleviate tight budgets by employing increasingly sophisticated automated software. If you think that artificial writing technology with sufficient reader appeal is the stuff of sci-fi, keep an open mind. Robot journalism is not limited to simplistic plug-and-chug work. Although much of it is still focused on writing straightforward overviews and recaps, writing software is becoming adept at implementing natural language.

Article template algorithms can be adjusted to produce tones that appeal to their target audiences. They can address readers with an academic or conversational style, and they can mimic human sentiments. When it comes to sports articles, for example, robot journalism software accumulates game data, finds statistical trends, and adjusts content to give off an empathetic or enthusiastic voice.

A sample article used by GameChanger captures the idea with a sports recap opening statement:

Chris Andritsos came up big at the dish and on the bump, leading The Woodlands Highlanders Varsity to a 6-1 win over Austin Bowie on Friday at Mumford, TX. Chris racked up four RBIs on two hits for The Woodlands Highlanders Varsity. He homered in the first inning and doubled in the sixth inning. The Woodlands Highlanders Varsity got the win thanks in large part to Chris' dominant, 12-strikeout performance.

Not exactly a feat of creative genius, but wit and creativity don’t always top the agenda if quantity and speed are a priority – especially when it comes to addressing limited markets. The program did what it needed to. It produced a relatively enthusiastic overview, and a few minutes later, it fired off several more stories at speeds human employees cannot match.

The Big Ten Network and Forbes use Quill, a natural language generation (NLG) platform, to produce similarly straightforward work. The content includes full articles, summaries, tweets and statistics. Whether you look to preferred websites, apps or social media outlets to find the latest information, there’s a good chance you’ve come across its artificially produced content.

More sophisticated use of natural language software has also paved the way for robot journalism to enter the world of book writing and poetry. That’s right, writing algorithms can produce long-form content and emotionally charged verse. This isn’t the software’s primary use, but the advanced nature of robot journalism has the potential to evolve, leaving less established writers looking for work.

Addressing concerns about job security, AP vice president and managing editor Lou Ferrara told journalists that no jobs were lost as a result of robot software integration: “Automation was never about replacing jobs,” he explained. “It has always been about how we can best use the resources we have in a rapidly changing landscape and how we harness technology to run the best journalism company in the world.’’

Some professional journalists share Lou Ferrara’s optimism about the adoption of automated writing software. I’m tempted to call their bluff, but perhaps their conviction about job opportunities is heartfelt. There are optimists in every field.

For New York magazine writer Kevin Roose, the introduction of robot journalism is a welcome one. Roose is relieved that journalism software can be used to complete assignments most humans hate. He recalls dreading being pulled from his work to complete recaps of corporate earnings: “The stories were inevitably excruciatingly dull, and frankly, a robot probably could have done better with them,” he writes. Schwencke also finds the software to be harmless: “It’s supplemental,” he told Slate magazine. "The way I see it is, it doesn't eliminate anybody's job as much as it makes everybody's job more interesting."

Essentially, established writers view robot journalists as overqualified interns who don’t complain about the workload or pout about their edits. And if you’re already a seasoned writer, it’s not unreasonable to think that way. Experienced, highly talented journalists will always be needed for their analytical skills and personable writing.

As of now, writing software can’t pull off uniquely cerebral content or conduct interviews that give writers material for poignant profiles and cultural pieces. Robot journalism is nowhere near acing interactive work, but the digital landscape is filled with little-known writers whose work depends on the more monotonous part of writing. Media outlets should truthfully address the future of employees whose work is increasingly produced by algorithms. Optimally, they can utilize current writers to ensure that automated content maintains their standards of accuracy.

What is the role of behind-the-scenes writers in an increasingly automated media landscape, and how will publishers maintain their ethical responsibilities as content producers and employers? According to Automated Insights CEO Robbie Allen, the company’s writing platform is already the largest producer of narrative content from big data in the world. If executives are looking for numbers, then writers looking to contribute to the workforce face an uphill battle. But if employers continue to set a high bar for management and content, there may be a light at the end of the tunnel.

 

Paulina Haselhorst

Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.

The Internet of Things: A New Network With the Same Old Problems

 

As inconceivable as it may seem, one of the most important technological innovations of the new century began with a simple modification to a vending machine. In 1982, students at Carnegie Mellon University installed micro switches in a Coke machine in order to detect how many bottles were left in each of the machine’s six columns. The sensors were linked to the Computer Science department’s main computer, and a program was written so that users could check the status of the machine. This allowed students to check the machine’s stock, as well as its functionality. The modification even gave students the ability to check how long each bottle had been in the machine, enabling them to gauge whether a bottle had chilled to the optimal temperature. While this was a relatively simple implementation of the technology, the students at Carnegie Mellon offered proof that offsite monitoring of machines was an achievable feat, paving the way for new innovations to come.
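
For readers curious about what such a setup involves, here is a brief, modernized sketch of the idea in Python. The data layout and the three-hour chilling rule are assumptions made for illustration; the 1982 system's actual implementation is not described in this essay and certainly differed.

```python
# An illustrative sketch of remote vending-machine monitoring in the spirit of
# the Carnegie Mellon Coke machine. The layout and timing rule are assumptions.
import time

# For each of six columns: bottle count and when the newest bottle was loaded
# (so a remote user can guess whether it has had time to chill).
machine_state = {col: {"bottles": 0, "loaded_at": 0.0} for col in range(1, 7)}

CHILL_SECONDS = 3 * 60 * 60  # assume three hours to reach serving temperature


def record_restock(column: int, bottles: int) -> None:
    """Called when a sensor detects that a column has been refilled."""
    machine_state[column] = {"bottles": bottles, "loaded_at": time.time()}


def status_report() -> str:
    """Build the text a remote user would see when querying the machine."""
    lines = []
    for col, state in machine_state.items():
        if state["bottles"] == 0:
            lines.append(f"Column {col}: EMPTY")
        else:
            cold = (time.time() - state["loaded_at"]) >= CHILL_SECONDS
            label = "COLD" if cold else "warm"
            lines.append(f"Column {col}: {state['bottles']} bottles ({label})")
    return "\n".join(lines)
```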

The above example is the first noted employment of technology related to the Internet of Things (IoT), a massive network of objects embedded with electronic sensors that relay information. The term originated in 1999, when Kevin Ashton, executive director of the tech research group Auto-ID, claimed that he used the phrase in the title of a presentation made to Procter & Gamble; Procter & Gamble later became a major funder of the organization. That same year, Auto-ID went on to assist in the development of the Electronic Product Code (EPC), a huge step towards enabling the IoT. EPC was intended to replace the Universal Product Code (UPC), an identification system using barcodes to track and distinguish trade items. The EPC system uses Radio Frequency Identification (RFID), a revolutionary technology that has become boundless in its application to infrastructure, transportation, healthcare, and countless other industries. While UPC allowed for distinctions by manufacturer, Auto-ID’s new EPC system offered the ability to assign each item a unique identifier. As such, EPC has served as an essential building block for the Internet of Things by providing a system for assigning unique addresses to objects in a given network.
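
The practical difference between the two schemes is easy to illustrate. The toy example below is not the real GS1 encoding format; it simply shows that a UPC names a product class while an EPC-style identifier names each individual item, which is what lets a network address one specific physical object.

```python
# A toy illustration of item-level identification: a UPC identifies a product
# class, while an EPC-style identifier names each unit. The formats below are
# simplified stand-ins, not the real UPC/EPC encoding specifications.
from dataclasses import dataclass


@dataclass(frozen=True)
class ProductClass:
    upc: str          # the same barcode appears on every unit of this product
    description: str


@dataclass(frozen=True)
class TaggedItem:
    product: ProductClass
    serial: int       # distinguishes this unit from every other one

    @property
    def item_id(self) -> str:
        # Unique per item, so a network can address this specific object.
        return f"{self.product.upc}.{self.serial}"


cola = ProductClass(upc="012345678905", description="12 oz cola bottle")
bottle_1 = TaggedItem(cola, serial=1)
bottle_2 = TaggedItem(cola, serial=2)

# Both bottles share a product-level UPC, but each has its own identifier.
assert bottle_1.product.upc == bottle_2.product.upc
assert bottle_1.item_id != bottle_2.item_id
```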

The IoT operates by enabling machines to communicate with users and, perhaps more critically, one another in order to improve efficiency and automation. For instance, IoT devices employed in an urban infrastructure can talk to one another, coordinating energy, transit, and other systems for optimal performance and efficiency without the direct aid of people. Because of this, IoT technology is largely expected to usher in an era of dramatically increased automation. The enhancement of data gathering and, perhaps more importantly, data visualization is a key factor in unlocking this potential. The IoT has the ability to turn the entire world into an information system—moreover, a system in which said information is accessible and increasingly tangible. A 2010 report from McKinsey Quarterly noted that IoT-connected sensors “can give decision makers a heightened awareness of real-time events, particularly when the sensors are used with advanced display or visualization technologies.” The ability to make sense of ever more complex information streams will prove to be a game changer across numerous industries.

ABI Research, a market intelligence firm, predicts that, by the year 2020, more than 30 billion devices will be wirelessly connected to the Internet of Things. The potential impact of this expansion cannot be overstated. The IoT is predicted to have wide-ranging implications across many fields and industries. Healthcare, for instance, will experience radical innovations, not the least of which will involve biophysical monitoring. For example, devices connected to the IoT can enable the remote monitoring of specialized implants, such as pacemakers. Aside from the practical applications such monitoring will have for both home healthcare and the fitness industry, the data made available by such technology is also invaluable to researchers. Healthcare is just one field that will see rapid transformation with the ascent of the IoT; others include transportation, media, and of course energy. While such predictions related to the IoT have been met with skepticism throughout the years, we may be on the brink of a substantial shift within the next decade—a shift that will find the IoT in near ubiquitous use. A recent questionnaire conducted by the Pew Research Center indicated as much; the study involved technology experts, analysts, and entrepreneurs. Participants were asked, “…will the Internet of Things have widespread and beneficial effects on the everyday lives of the public by 2025?” A majority responded in the affirmative.

Of course, the major innovations brought about by the IoT also present complex ethical problems. With the potential for data gathering presented by the IoT, privacy is of paramount concern. A recent piece by Charith Perera and colleagues concluded, “…existing technologies and regulations are not sufficient to support privacy guaranteed data management life cycle.” And, as with any major technological shift, security is a major concern, particularly with respect to potential vulnerabilities in IoT devices. A recent assessment from the National Institute of Standards and Technology stated that the manner in which the IoT operates essentially makes both public and private computers indefensible. In addition, while some IoT boosters make lofty gestures towards making the world a greener place, skeptics have argued that the increased use of interconnected devices will end up harming the environment—the reasoning being that IoT prevalence will drive the installation of sensors into everyday devices. So, because of increasing advances in technology, said devices will be discarded more frequently, leading to further waste. Furthermore, the lifetime of said devices is questionable. Standards and regulatory bodies are currently insufficient. As such, it’s increasingly difficult for consumers to gauge the true environmental impact of a product. The problem, again, is not the technology itself, but rather society’s distinct inability to keep up with the pace of technological progression.

From an ethical standpoint, concerns about the above-mentioned issues are certainly valid and alarming. But there is an underlying problem behind all of these issues. Compared to the relatively rapid progression of technology, society as a whole has been slow to respond in many aspects of our socioeconomic systems, e.g., laws, regulatory bodies, standards, business models, political frameworks, etc. Humanity’s ability to adapt to changing environments and circumstances has always set us apart from other animals. It’s the reason we’ve been successful as a species. Furthermore, our ability to develop and use tools is inextricably connected to our ability to adapt. And now, paradoxically, these two fundamental aspects of our nature are at odds with one another.

Have the tools of humanity outpaced our ability to adapt to the new environments created by said tools? As of now, it’s unclear. But it does seem as though we are at a critical juncture. We can stumble backward into the 21st century, unready and perhaps unwilling to even think about changing. Or we can work to anticipate and address the potential ethical problems that will inevitably come with new advances. Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford, comments on the matter with critical lucidity: “It’s up to us. Do not believe anybody who tells you that technology has its own forces, that you are powerless. It is not true—not true in the sense that it is our responsibility to make sure that the best that can happen with this technology is going to happen, and the worst is kept under control.” Floridi’s point is clear. New technology itself does not have an inherent moral value. As such, the development of the IoT is not by itself good or bad; it’s what we do with the technology that matters. The need for a new kind of ethical vigilance is apparent. With each technological paradigm shift, we as a species must work to discern and diminish the potential downsides, all the while working to propagate the best of what new technology can bring.

Since the IoT has become something of a sensation in business and tech circles, many an article has espoused both the great promise and great danger posed by the IoT. Real-world headlines such as “The Internet of things is great until it blows up your house” and “Why the Internet of Things Heralds the Next Great Economic Disruption” are indeed attention-grabbing, but do they reflect what is really significant about this great tech innovation? When we ask the question, as many news pieces have recently asked, “How will the Internet of Things make our lives better?” it is necessary to define the term “better” in concrete terms. Hyperbole runs rampant among these pieces, but one near-ubiquitous theme is that the IoT has the potential to make our lives a lot easier from a practical standpoint. IoT-connected refrigerators will be able to sense when we’re running short on items such as milk, and will subsequently add said items to our grocery lists. This may seem like a relatively mundane aspect of how the IoT can change lives; however, the underlying principle has wide-ranging applications.

What the IoT has the power to do is further eliminate our involvement in the everyday minutiae of common tasks and problems. Moreover, the IoT enables us to make connections that were previously unheard of. Computers will be able to regulate traffic lights based on real-time information—e.g. congestion, weather conditions and accidents—making traffic patterns more efficient. This saves us time and, potentially, lives. Certainly, making roads safer has a concrete value, but what of saving time? Isn’t it what we do with this newly gained spare time that really matters? Because if the IoT only works towards making the public more complacent, that is a sharply dismal reflection of what we value. And ultimately, if our technology proves to be more than society can handle, that reflects directly on our ethical integrity as a people.

What we hold dear is demonstrated candidly by how we use our devices. More than ever before, we have access to resources one could only dream of mere decades ago. The impetus for each stage in human history has, unequivocally, been technological innovation. Along the way, technology has also fundamentally changed the way we experience the world. The relationship between humanity and technology is one of recursive development. We make the tech; the tech changes us. As we change, we develop new tech, which in turn changes us yet again—so on, and so forth. But do not mistake this change for progression, because it can go either way. New technology can be used for both great and horrible things. It is only with constant ethical vigilance that technology, such as that of the IoT, has the means to enrich the lives of everyone on the planet. With the guidance of ethical models that value—above all else—liberty, health, wellbeing, and environmental stability, the innovation brought about by massive technological shifts such as the rise of the IoT can serve to enhance our world. If such an ethical model guides its development, the IoT will completely alter the landscape of our world for the better. Yes, the IoT may usher in a new era of automation, but it can also usher in an unprecedented era of abundance and liberty. It is our responsibility to ensure that the innovations spurred by the IoT have positive consequences for the entirety of the human race, across class and creed. Where lives can be saved by this technology, where suffering can be minimized, we have a moral obligation to ensure its safe implementation.

 

David Stockdale

David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch.  Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at dstock3@gmail.com, and his URL is http://davidstockdale.tumblr.com/.

 

Getting Personal: Editorial Judgment in the Digital Public Square

 

Donald Trump, while trashing Senator Lindsey Graham on live TV, gives out Graham’s private cell phone number. He repeats it for emphasis.

Gawker Media publishes graphic details of a proposed hookup between a relatively unknown male media CFO and a gay porn actor. Gawker identifies the married exec.

If you’re eager to learn the Senator’s number or the CFO’s name, you can surf around and find them online. But the links I’m sharing in this essay aren’t your fast path. I’ve chosen stories that cover the issues but downplay those details, like this CNN piece that bleeps Trump’s voice as he broadcasts Graham’s digits.

What is this, some kind of censorship?

Nope. That’s too easy an answer, one that doesn’t require much critical thinking. Calling this censorship presumes the following: When it comes to information, anything goes. If it’s true, it’s always newsworthy. If it’s out there anywhere, it’s fair game. If publication causes pain, it’s not a problem. If you choose not to share it, you’re a censor.

Or maybe you’re an editor. Using editorial judgment.

Editorial judgment is hard work. It starts with a desire to be anything but a censor: to lean toward revealing rather than concealing. To fight for access to information while bringing rigor to decisions about its use.

Editorial judgment may be the hallmark of thoughtful media organizations, but they have no lock on it. It can easily flourish in the ranks of freelancers, bloggers and social media practitioners. Why not?

Editorial judgment is a process that starts with the question: What do we stand for? The answers may be:

- Accuracy
- Context
- Fairness
- Independence
- Transparency
- Holding powerful people and organizations accountable
- Minimizing harm while telling truths

They exist alongside less lofty-sounding but very real values like:

- Beating the competition
- Building the brand
- Attracting target audience
- Maximizing profits

Editorial judgment is an inevitable juggling act among competing values and the outcomes they may drive:

If you take time to double-check information in breaking news, you risk being scooped. If you report on a home team’s recruiting scandal, you may alienate the local fans. If you share real-time info on police converging on a hostage situation, you may tip off the hostage taker. If you blow the whistle on a bad business, it might cancel its advertising. If you show the body of a child killed by a drunk driver, you may add to the anguish of a heartbroken family.

Editorial judgment demands a prioritizing of your values, along with a willingness to “show your math” by revealing your decision-making process.

Editorial judgment also takes into account the rule of law, but what’s perfectly legal may still be unethical.

It’s not illegal to reveal or broadcast Lindsey Graham’s personal cell phone info.

Still, CNN chose to bleep. Why? And why did other publications leave the number out of their coverage of the incident?

Graham’s a public figure. He chose the limelight, right?

The answer lies in the values that guide a media outlet: Readily publish the office numbers of elected officials, but set a higher bar for releasing their home information. The value equation looks like this: Privacy and safety outweigh everyday curiosity. There’s got to be more heft on the side of the public interest before the balance shifts.

And what about the Gawker case? The media exec at the center of the story wasn’t a well-known person. He wasn’t a politician who’d campaigned on morality while practicing hypocrisy. He wasn’t breaking a law. But to the editors behind it, the gay escort story fit Gawker’s longstanding values of extra-strength snark and scandal.

Until it didn’t.

A diverse universe of readers and advertisers rebelled this time. The tabloid tale may have been as true as it was lurid, but they saw it as a vicious and pointless outing of a private man.

CEO Nick Denton removed the story from the web, a rare and controversial action in digital publishing that some view as business-side censorship-after-the-fact. Two editors quit in protest.

In defending his decision, Denton condemned the story and declared that Gawker’s values have now changed – along with those of the audience and advertisers. And to be sure, he placed high emphasis on those advertisers.

In this case, values seemed to pop up and down, only to get batted around like a crazed game of ethical Whac-a-Mole. Fairness: Wham! Privacy: Bam! Independence: Slam!

It was a game in which everyone lost. The subject of the story. Its highly criticized writer. The now-unemployed editors. The publication itself.

Gawker’s wounds were self-inflicted. They didn’t have to happen.

With editorial judgment built on a strong, shared foundation of values, editors could have searched beyond the prurience for a public policy issue, a system failure, or an abuse of power. Finding none, they would have bleeped the piece before it was born.

There will always be Trump-like antics and stories that reek of sleaze. The answer isn’t to recoil; it’s to reflect.

What’s our process for verifying, putting facts into context, checking for (or revealing) our biases, prioritizing our values, and sharing our reasoning?

In today’s digital reality, we’re all publishers. So let me get personal:

What do you stand for?

 

Jill Geisler

Jill Geisler is the inaugural Bill Plante Chair in Leadership and Media Integrity, a position designed to connect Loyola’s School of Communication with the needs and interests of contemporary and aspiring media professionals worldwide.  Geisler is known internationally as a teacher and coach of leaders in media and beyond.  @JillGeisler 

Navigating the Ethics of Newsgathering Drones

 

Unmanned Aircraft Systems (UAS), more commonly known as drones, have been in the news for years now because of their military applications. They also became something of a punch line when Amazon announced that it wanted to use them to deliver packages (and it still does). Other businesses, including real estate firms, want to employ the aerial technology to capture more compelling video and images of their properties. News organizations are also clamoring to use drones in their reporting, and many people believe that the unmanned flying vehicles could have a major impact on the future of journalism, along with significant ethical implications.

The Federal Aviation Administration regulates the commercial use of drones in the United States, and its rules and requirements are quite restrictive. To operate one commercially, you have to have a pilot's license and receive a permit from the FAA, which CNN Money recently reported is "difficult to obtain." A drone pilot can only operate the vehicle if it remains in his or her line of sight; the UAS can only be operated in daylight; it has to weigh 55 pounds or less; and it can only fly in unpopulated areas.

But FAA officials understand that the devices will sooner or later be a regular part of covering the news, and they are committed to working out issues associated with their use. In January, the FAA began collaborating with CNN to establish best practices for newsgathering using drones. According to a CNN article, the network is partnering with researchers from the Georgia Institute of Technology to perform tests and gather data that will then be used by the FAA to help the agency refine its regulations regarding drones. In the article, CNN Senior Vice President of Legal David Vigilante explained, "Our aim is to get beyond hobby-grade equipment and to establish what options are available and workable to produce high quality video journalism.”

What's ironic is that internationally, the use of drones to videotape news events is fairly common, even by American news organizations. For example, an article published earlier this year on the International Journalists' Network reported that producers for CBS’ “60 Minutes” employed drones to document the areas around Chernobyl three decades after the nuclear disaster there.

Given the inevitability of using drones to gather news, many people have begun to debate the practice's ethical implications. One of the major players on that front is Matthew Schroyer, an Urbana, Illinois-based journalist, drone designer, teacher and the founder of the Professional Society of Drone Journalists (PSDJ). He also heads a program called "Drones for Schools," funded by a National Science Foundation grant, that teaches high school students about drone technology.

One of the most interesting things to come out of the PSDJ is its Code of Ethics for Drone Journalists. The beginning of the document reads: "A code of ethics for drone journalists should be viewed as a layer of additional ethical considerations atop the traditional professional and ethical expectations of a journalist in the 21st century." Why is that extra layer needed? Primarily because of safety concerns related to a possible drone crash that could injure innocent bystanders. But that's not the only ethical issue related to using drones in journalism, and the PSDJ's Code of Ethics lists five major areas of concern: newsworthiness; safety; sanctity of law and public spaces; privacy; and traditional journalism ethics.

Most of what is contained in the organization's ethical code is sound, and the group admits that it is a work in progress and is soliciting input from others. That's a good thing because it doesn't address some of the ramifications that drones could have on the overall future of the profession of journalism. Making sure drone pilots are well trained and respecting the public's right to privacy are the PSDJ code's core tenets, but some of the more interesting ethical issues are much more nuanced.

A major concern is with news organizations using footage posted on social media that was taken by drones piloted by untrained non-journalists. Today, major news organizations consistently use images and video they get from social media in their reporting, and because more and more regular citizens are buying and using drones there will naturally be more of this kind of footage available online. If news organizations use this material, then it will just encourage more people to produce it, and we could end up with a sky full of drones piloted by amateurs at every major news event. This fire season in Southern California, helicopters and planes have at times been unable to help battle blazes because there were amateur drones in the sky above the fire. If news organizations use footage produced by drones in these circumstances, it will only exacerbate the problem, and lives could be at risk. One way to avoid this issue would be for major news organizations to agree to limit the use of footage produced by drones not operated by journalists.

And while the PSDJ's code makes some excellent points about privacy issues, its biggest concern is with the privacy of non-public figures. The code states: "The drone must be operated in a fashion that does not needlessly compromise the privacy of non-public figures. If at all possible, record only images of activities in public spaces, and censor or redact images of private individuals in private spaces that occur beyond the scope of the investigation."

While that makes sense, the larger issue is the fact that drones are going to revolutionize the coverage of public figures. If you think paparazzi are a menace now, imagine a world in which a drone hovers above a celebrity’s backyard fence taking footage of them sunbathing or playing with their children. You actually don’t have to imagine it because it’s already happening. People are calling these intrusive UAS “drone-arazzis,” and a recent article in the Daily Mail reported: “A growing fleet of these machines is forcing celebrities to run for cover even inside their own homes, as cameras swoop in above swimming pools, tennis courts and balconies. Their lenses can even peer through open windows into bedrooms.”

Again, the only way to combat this would be for news organizations to completely avoid using drones in this fashion, and for them not to use any footage offered to them containing compromising images of public figures who are unaware that they are being filmed.

Another tricky ethical issue associated with drone use for news purposes is that drones could be used to reduce in-person coverage of events. What's to stop a TV network from just sending a drone to a breaking event instead of a reporter? The drone would get better footage, much more quickly, and that dynamic could lead to the elimination of jobs for journalists who report from the field. The ethical issue here is that while a drone might get a better visual perspective of an event, a reporter on the ground can gather information and make critical judgments about what he or she is seeing, hearing and feeling in a way that no drone ever could. Increased use of drones in that capacity could seriously threaten the incredibly important value that having a trained journalist on the scene provides.

In the near future, drones will become a major asset when it comes to gathering important footage that journalists can use to inform the public concerning breaking news, especially in situations like disasters where sending in reporters could put people in harm's way. That said, unless these machines are used ethically by news organizations, we risk creating a world in which the benefits they offer could be far outweighed by the compromises they create.

 

John Thomas

John Thomas, the former editor of Playboy.com, has been a frequent contributor at the New York Times, Chicago Tribune and Playboy magazine.

Machines, autonomy, and virtue: Highlights from the 2015 CEPE/IACAP conference

 

In June, the International Association for Computing and Philosophy (IACAP) and the International Society for Ethics and Information Technology (INSEIT) held a joint conference. Bringing together members of both organizations, this conference served as IACAP’s annual meeting, as well as INSEIT’s annual conference, referred to as Computing Ethics: Philosophical Enquiry (CEPE). The conference was hosted by Tom Powers of the University of Delaware’s Center for Science, Ethics, and Public Policy and Department of Philosophy. The American Philosophical Association’s Committee on Philosophy and Computers also helped sponsor the conference.

Philosophers and technologists submitted papers and proposals in February, and a committee put together a program of 31 presentations and six symposia. Topics included the nature of computation, the role of computation in science, big data, privacy and surveillance, the dangers and benefits of AI, autonomous machines, persuasive technologies, research ethics, and the role of ethicists in computing and engineering.

Many of the conference participants displayed an underlying preoccupation with the ways our relationship with machines changes as machines acquire characteristics that we have always considered to be distinctively human. Two specific concerns were the danger posed by machines as they become more autonomous, and the potential threat to human virtue as intelligent machines become capable of playing more human-like roles in sexual activities.

Machine ethics and autonomy: Bringsjord and Verdicchio

Selmer Bringsjord and Mario Verdicchio gave presentations on the dangers of machine autonomy. The basic worry motivating these discussions is this: If machines are under the control of a person, then even if the machines are powerful, their danger is limited by the intentions of the controllers. But if machines are autonomous, they are ipso facto not under control—at least not direct control—and, hence, the powerful ones may be quite dangerous. For example, an industrial trash compactor is a powerful piece of equipment that requires careful operation. But a trash compactor that autonomously chooses what to crush would be a much more formidable hazard.

Bringsjord considered a more nuanced proposal about the relationship between power, autonomy and danger, specifically that the degree of danger could be understood as a function of the degree of power and degree of autonomy. This would be useful since most things are at least a little dangerous. From practical and ethical perspectives, we would like to know how dangerous something is. But understanding degrees of danger in this way requires making sense of the idea of degrees of autonomy. Bringsjord aimed to accomplish this while operationalizing the concept of autonomy enough to implement it in a robot. In earlier work, Bringsjord developed a computational logic to implement what philosophers call akrasia, or weakness of will, in an actual robot. His aim in his current work is to do something similar for autonomy. In his presentation, Bringsjord outlined the features of autonomy that a logic of autonomy would have to reflect. Roughly, a robot performing some action autonomously requires the following: The robot actually performed the action, it entertained doing it, it entertained not doing it, it wanted to do it, it decided to do it, and it could have either done it or not done it. Bringsjord concluded that a powerful machine with a high degree of autonomy, thus understood, would indeed be quite dangerous.
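
One way to make the reported proposal concrete is to treat autonomy as a conjunction of conditions and danger as a function of power and degree of autonomy. The sketch below is an illustrative rendering of that idea, not Bringsjord's actual computational logic; the boolean fields and the simple multiplicative danger score are assumptions introduced here for clarity.

```python
# A rough, illustrative sketch of two ideas reported above: (1) an action
# counts as autonomous only if several conditions hold jointly, and (2) danger
# might be modeled as a function of power and degree of autonomy. The fields
# and the scoring rule are assumptions, not Bringsjord's formal system.
from dataclasses import dataclass


@dataclass
class ActionRecord:
    performed: bool                # the robot actually did it
    entertained_doing: bool        # it considered doing it
    entertained_refraining: bool   # it considered not doing it
    wanted: bool                   # it wanted to do it
    decided: bool                  # it decided to do it
    could_have_refrained: bool     # it was able to do otherwise


def is_autonomous(a: ActionRecord) -> bool:
    """Autonomy as the conjunction of all the listed conditions."""
    return all([a.performed, a.entertained_doing, a.entertained_refraining,
                a.wanted, a.decided, a.could_have_refrained])


def danger_score(power: float, autonomy_degree: float) -> float:
    """One simple way to express 'danger as a function of power and autonomy':
    the product of the two, each normalized to [0, 1]. Purely illustrative."""
    return power * autonomy_degree
```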

Again, the special danger in autonomous machines is that, to the extent they are autonomous, they are outside of human control. When present, human control is a safeguard against danger, since humans are typically bound and guided by moral judgment. For a machine beyond human control, then, it is natural to want some substitute for morality. Machine ethics is the study of the implementation of moral capacities in artificial agents. The prospect of autonomous machines makes machine ethics particularly pressing.

However, Verdicchio argued that only some forms of what we might think of as autonomy require machine ethics. Like Bringsjord, Verdicchio understands autonomy as the conjunction of a variety of factors. They agree that autonomous action requires several alternative courses of action, consideration (put in more naturally computational terms, simulation) of those possibilities, and something like desire or goal-directedness toward the action actually performed. But it is regarding this last element that we also find some disagreement between Verdicchio and Bringsjord. According to Verdicchio, even with the other elements that plausibly constitute autonomy, goal-directedness is not enough to make machines distinctively dangerous. He argued the kind of machine autonomy that should worry us—the kind that calls for machine ethics—would be realized only when the machine sets its own goals, only when it is the source of its own desires. Without this capacity, it is simply a complex machine, perhaps one that has become more complex on its own, but still one that is directed toward the ends of its creators or operators.

Is Verdicchio right that we can dispense with machine ethics unless machines can set their own ends? It is an interesting question. Likely, the answer depends on how we understand the scope of machine ethics. A distinctive feature of AI systems is that they are capable of solving problems or achieving goals in novel ways, including ways their programmers did not anticipate. This is the point of machine learning: Not all of the relevant information and instructions need to be given to the machine in advance; it can figure out some things on its own. So, even if the machine’s goal is set by programmers or operators, the means to this end may not be.

To frame the point in familiar philosophical terms, a machine’s instrumental desires may vary in unpredictable ways, even if its intrinsic desires are fixed. If constraining these unpredictable instrumental desires within acceptable limits is part of machine ethics, then it seems clear that machine ethics is required for some machines that lack the capacity to set their own ultimate goals. But, on the other hand, putting constraints on the means to one’s given ends is a rather thin and limited part of what we usually consider ethics. And perhaps we can think of such constraints simply as additional goals given to the machine in advance. Ultimately, whether or not we consider the construction of such constraints part of machine ethics probably depends on the generality, structure and content of these constraints. If they involve codifications of recognizably ethical concepts, then the label ‘machine ethics’ will seem appropriate. If not, then we will be more likely to withhold the label.
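A toy sketch may help fix the idea of constraints supplied in advance: the machine still chooses its own means to a fixed end, but only from options that pass a pre-specified filter, which functions like one more goal handed to it up front. Every name and number below is illustrative, not drawn from any actual system.

    # Toy illustration: a fixed end, machine-chosen means, and an operator-supplied
    # constraint that screens out unacceptable means before optimization.
    goal = "deliver the package"

    candidate_means = [
        {"plan": "drive the speed limit", "harm": 0.0, "cost": 5},
        {"plan": "speed through a school zone", "harm": 0.9, "cost": 2},
        {"plan": "cut across a pedestrian plaza", "harm": 0.6, "cost": 3},
    ]

    def acceptable(means, harm_limit=0.1):
        # A constraint given in advance, functioning like an extra goal.
        return means["harm"] <= harm_limit

    def choose_means(options):
        permitted = [m for m in options if acceptable(m)]
        # Among permitted means, the machine optimizes however it likes.
        return min(permitted, key=lambda m: m["cost"]) if permitted else None

    print(goal, "->", choose_means(candidate_means)["plan"])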

But this semantic issue should not detract from the important point raised by Verdicchio’s presentation. The autonomy of a machine that could set and adjust its own ultimate goals would raise much deeper concerns than one that could not, since such a machine might eventually abandon any constraints specified in advance.

Symposium on sex, virtue, and robots: Ess, Taddeo, and Vallor

Sophisticated, AI-powered robots with sexual abilities, or sexbots, do not yet exist, but we have every reason to believe they soon will. It is hard to imagine the ever-booming and lucrative digital sex industry not taking advantage of new advances in personally interactive devices. Sexbots were the focus of a symposium titled “Sex, Virtue, and Robots.” John Sullins moderated a discussion between the audience and a panel composed of Charles Ess, Mariarosaria Taddeo, and Shannon Vallor. The panelists applied the framework of virtue ethics to the question of whether having sex with intelligent robots is morally problematic.

More than competing theories of normative ethics do, virtue ethics puts special emphasis on human character traits. Specifically, virtue ethicists hold that actions are to be evaluated in terms of the character traits—the virtues and vices—that the actions exemplify and produce. Given that people have been using sex dolls—human-size dolls with carefully crafted sexual anatomy—and a variety of artificial masturbation devices for years, one might have thought that robots designed to provide sexual services would not raise new ethical issues. Regarding virtue ethics in particular, one might think that sex with robots is no different, in relation to a person’s character, from the use of masturbation aids. But that is not so obvious as it might at first seem. What distinguishes sexbots is AI. Not only are sexbots supposed to be realistic in their look and feel, but the interactive experience they promise is intended to be realistic as well.

So, sexbots promise a more interactive, personalized—perhaps intimate—experience than the minimally animated sex dolls and toys of today. Why would that matter? The panelists were largely in agreement that there was nothing intrinsically wrong with people having occasional sexual experiences with robots. But they all shared some version of the worry that sex with AI-powered robots might displace other intrinsically valuable sorts of activity, as well as the character development those activities might enable and promote. Ess invoked philosopher Sara Ruddick’s distinction between complete sex and merely good sex, the former being distinguished not just by the participants’ enjoyment but also by equal, mutual desire and respect between the individuals involved. Ess’s worry is that, without a capacity for deep practical wisdom and the genuine sort of autonomy we believe ourselves to have, sexbots couldn’t possibly be participants in complete sex. If robots became a replacement for human sexual partners, complete sex would be an important good on which we would miss out.

One part of Taddeo’s worry is quite similar. Her focus was eros—an ancient Greek conception of love discussed by Plato. As Taddeo characterized it, eros is a maddening kind of love, the experience of which shapes a person’s character. Taddeo’s concern with eros is similar to Ess’s concern with complete sex. In both cases, the worry is that, to the extent that sexbots replace human partners, a distinctively valuable sort of experience would be impossible. Taddeo adduced several other pressing worries as well. One was that female sexbots would exacerbate a problem that we already find caused by pornography—specifically, the promotion of unrealistic stereotypes about women and the misogyny that this might produce. She also noted that reliance on robots for sex might complicate our romantic relationships in unfortunate ways.

Vallor’s primary worry about sexbots is similar to Ess’s concern about complete sex and Taddeo’s concern about eros. Like Ess and Taddeo, Vallor suggested that sexbots might displace some important human good. However, instead of focusing on the intrinsically desirable forms of sex and love on which we might miss out, Vallor focused on the processes of maturation and growth that come from having sex (whether good or bad) with real humans. Our human bodies are fleshy, hairy, moist, and imperfect in a variety of ways. When people are sexually immature, they react to these features of bodies with fear and disgust. Sex with humans, Vallor suggested, is part of the process of leaving behind these immature reactions. She noted that failure to outgrow these sorts of fear and disgust is associated with vices like racism, misogyny, and self-loathing. Furthermore, the persistence of such fear and disgust can inhibit the development of those virtues—like empathy, care and courage—that have essential practical ties to our bodies. Sexbots offer the possibility of sexual gratification without engaging these biological realities. Hence, the use of sexbots, to the extent it replaced sex with human persons, might result in a stunted maturation process, producing persons who are more vicious and less virtuous.

The notes of caution sounded by the panelists were generally compelling. Not only does new technology absorb our time and attention, it necessarily alters and displaces activities we otherwise would have continued. This is unfortunate when the displaced activities were valuable—or, more precisely, when the old activities were more valuable than the activities that displaced them. But it was not altogether clear to all of the audience members that sex with robots was less valuable overall than the traditional option. A question along these lines was posed to the panel by Deborah Johnson, one of the conference’s keynote speakers. Her question was directed primarily at Vallor’s point about how sex facilitates the development of certain virtues. Johnson suggested that, perhaps, the elimination of traditional forms of sex would eliminate any important role for these particular virtues in sexual relations, and, perhaps, we could still develop these virtues as they applied to other contexts. And, if so, a world in which we lacked both traditional sex and the virtuous traits acquired through it might be just as good as our present situation. In response, Vallor held that the practical scope of the virtuous traits sex helps us learn is broader than just sexual activity, and so their loss would be felt in other areas, too.

Vallor’s response seems correct, though the issue ultimately depends on psychological facts about exactly what experiences the acquisition of particular character traits requires. Regardless, Johnson’s objection is exactly the sort of challenge we should take seriously. As technological change creates new sources of value at the expense of earlier sources, too often the focus is exclusively on what has been lost or exclusively on what has been gained. In contrast, a better approach looks at both, comparing the old and the new. This, of course, is not easy, and sometimes the old and the new will be incommensurable. Even so, it is vital that we bring the comparisons, the trade-offs, into clear view.

Concluding thoughts

Issues about autonomous machines and sexbots bring out two aspects of the uneasiness we experience as artificial entities become more like humans. For one thing, we care how machines behave. As they become more autonomous and less subject to our direct control, we want their behavior to serve our needs without endangering us. For another, we care about how the behavior of machines changes us—whether by enhancing or supplanting our cherished human capacities and traits.

Reflection shows that the two sets of issues are bound together in complicated ways. When we wonder what sorts of changes are good for us, this calls the very notions of harm and danger into question. The risk of an industrial robot ripping a person in half is just one sort of danger. But we might well consider the potential of sexbots to arrest our development of virtue a different, but also quite fearsome, sort of danger. Furthermore, although machine ethics must attend to how machines’ choices affect persons’ health and physical well-being, a richer machine ethics would also consider how the actions of robots affect persons’ character, psychological well-being, and overall quality of life.

As more intelligent machines are developed, no doubt, we will encounter many new situations that raise difficult questions about the relationship between machine ethics and human ethics. The philosophers of IACAP and INSEIT will have plenty of important work to do for years to come.

 

Owen King

Owen King is the NEWEL Postdoctoral Researcher in Ethics, Well-Being, and Data Science in the Department of Philosophy at the University of Twente. His research is primarily focused on well-being, from both theoretical and practical perspectives.  He also investigates ethical issues raised by new computing and data technologies.

Digital Kidnapping—A New Kind of Identity Theft

 

It’s no surprise that new threats to personal security and privacy crop up as online communities change and grow. We’ve known about “sharenting” for a while—parents’ tendency to share every milestone of their child’s life online, which makes lots of personal information about children readily available to people looking to nab a new identity. But now there’s a new game in town: digital kidnapping. Digital kidnappers take screen shots of pictures posted on social media, blogs, and other online sites and use them for various activities, the most prevalent of which is online role-playing. Online role-playing has been around for decades, but only recently has it sparked outrage when a subgroup of this community, baby role-players, began stealing and repurposing online photos for their game.

Some members of the baby role-playing community are snapping up images of children on photo-sharing sites such as Instagram, Flickr, and Facebook, as well as various blogs, to use as avatars or virtual children in their game. Players either pretend to be the child or claim the baby as their own and assign friends and other players to act as online family members to the child. There are even virtual adoption agencies where a role-player can request a youngster with a distinct look, which the “agency” seeks to fill by finding a matching image online. Participants search the hashtag #babyrp to find new “babies” for adoption or to get chosen as a family member.

Psychologists theorize that many of these players are teens and tweens from less-than-optimal home situations who are fantasizing about having the perfect family. When interviewed by Fox News, child psychologist Dr. Jephtha Tausig-Edwards explained why these children act out these fantasies online: “They're going to do this maybe because they're bored, they're going to do this maybe because maybe they want some attention,” Tausig-Edwards said. “They're going to do this because perhaps they really are a little envious and they would like that beautiful child to be their own.”

Other psychologists, like Dr. Justin D'Arienzo, admit that there are darker reasons why someone might be interested in these types of pictures. The internet has become a haven for fetishists and others who practice socially deviant behaviors, including those that require children or some element of childhood for their personal fulfillment or sexual gratification. And although the children themselves are not being physically exploited, the photos in many cases have been recontextualized to play out a dark or abusive fantasy. For example, a recent thread of comments on an Instagram post regarding a baby boy has one commenter asking if he or she “can have a private with a dirty baby.”

However, one of the most recent cases of digital kidnapping didn’t involve role-playing in game form. Instead, an adult male from New York, Ramon Figueroa, stole online photos of a 4-year-old girl from Dallas and posted them on his Facebook page, claiming she was his daughter. He posted numerous pictures of the little girl, with the action in each shot lovingly described by the doting “father.” Some of the captions he wrote under the pictures of the little girl were, “Girl version of me,” and “This is how she looks in the morning…she said daddy stop (taking pictures).” After being contacted by the girl’s mother about his use of the photos, he promptly blocked her from seeing his page.

Unfortunately, there is currently no law against pretending someone is related to you. This little girl’s mother had only one option: to file a complaint with Facebook, which met with little success initially. Dismayingly, Facebook merely confirmed that Mr. Figueroa’s profile met their standards and, as such, there was nothing that could be done about the pictures if he didn’t voluntarily take them down. However, after being contacted by news media, Facebook relented and agreed to remove posts of this nature as parents report them.

Shockingly, this type of photo theft doesn’t seem to violate the policies of most social sharing sites. Instagram (now owned by Facebook) explicitly states in their policy that when you post content through their system: “…This means that other Users may search for, see, use, or share any of your User Content that you make publicly available through the Service, consistent with the terms and conditions of this Privacy Policy and our Terms of Use (which can be found at http://instagram.com/about/legal/terms/).” So parents, beware: Whatever you post online through these channels can be reposted at will by anyone who can see it.

In response to the laissez-faire attitude of social media websites regarding these stolen photos, concerned parents got together and launched a petition at change.org. The hope was to force Instagram to close down all baby role-playing accounts, but either due to lack of publicity or lack of interest, it garnered only 1,047 signatures. Of course, taking down #babyrp accounts won’t do much to curb other types of digital kidnapping that are cropping up worldwide. A recent investigation by Scotland’s Sunday Post uncovered numerous instances of online photo theft. Over 570 selfies of Scottish girls, more than 700 selfies from girls in Northern Ireland, and thousands from young girls around the U.K. had been stolen and uploaded to a porn site. The girls were often in their school uniforms, but there were some instances where skin or underwear was showing. When confronted, the representative for the website denied that the images existed, and because the site was hosted out of the country, there wasn’t anything further that could be done.

Another young British girl had her personal images stolen from her social media account and posted on a website that offered “hot horny singles in your local area.” When her photo popped up on a sidebar advertisement for the sex site on a friend’s computer, he called to let her know that her pictures were being used. She has since updated her Facebook privacy settings in the hopes of preventing future occurrences.

Until this issue gets more attention from legislators and stricter privacy regulations are implemented, you are the first line of defense against this kind of identity theft. Fortunately, there are things you can do to protect yourself or your loved ones from digital kidnapping.

The first and most foolproof option is to stop posting pictures online altogether. However, if you do choose to share them, you should monitor and adjust your privacy settings so that only people you know have access. Alternately, you can choose a privacy app, such as Kidslink, that allows parents to determine who sees their photos across social media programs. For those who refuse to curtail their online sharing, there are also apps that will watermark your images to deter would-be photo borrowers. Another critical step for people who wish to keep posting photos without restriction is to turn off the geolocation option on images so that they do not reveal your child’s real-world location.
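For the technically inclined, embedded metadata can also be stripped before a photo leaves your computer. The following is a minimal Python sketch using the Pillow imaging library; the file names are placeholders, and this is one illustrative approach rather than a complete privacy tool.

    # Copy an image's pixels into a fresh file so that EXIF metadata
    # (including GPS coordinates) is not carried over.
    # Requires Pillow: pip install Pillow
    from PIL import Image

    def strip_metadata(src_path: str, dst_path: str) -> None:
        with Image.open(src_path) as img:
            clean = Image.new(img.mode, img.size)
            clean.putdata(list(img.getdata()))
            clean.save(dst_path)

    strip_metadata("kids_birthday.jpg", "kids_birthday_clean.jpg")  # placeholder names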

Finally, if you’ve previously posted pictures without putting privacy protections in place and you’d like to see whether any of them are being used without your permission, do a reverse image search on your photos. You can use a site like TinEye, which offers this service for free, or you can go to images.google.com, drag an image from your computer into the search bar and press enter. Any website on which the image appears will come up in the search results, along with visually similar images.
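If you would rather script the comparison than check copies one at a time, a perceptual hash is one common local technique: it produces a compact fingerprint that survives resizing and mild recompression. The sketch below assumes the Pillow and ImageHash Python libraries; the file names and the distance threshold are illustrative, not established standards.

    # Compare your original photo to a suspect copy saved from the web.
    # Requires: pip install Pillow ImageHash
    from PIL import Image
    import imagehash

    original = imagehash.phash(Image.open("my_photo.jpg"))        # placeholder
    suspect = imagehash.phash(Image.open("downloaded_copy.jpg"))  # placeholder

    # A small Hamming distance suggests the same underlying image.
    distance = original - suspect
    print(distance, "-> likely a copy" if distance <= 5 else "-> probably different")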

Ways to steal personal information are quickly outpacing protective measures granted to internet users through general legislation or attempts at self-governance by internet entities, such as Facebook and Twitter. The deficiency of guidelines regarding the acquisition of posted photos leaves the onus of providing identity protection, particularly to minors, firmly in the hands of parents. Parents should take the time to fully understand and consider all of the ramifications of posting photos online, including reading and comprehending the privacy policies of each online forum they use. While setting up appropriate safeguards is important, it is also critical to police the distribution of the photos and information around the internet through reverse image searches so images acquired and used without permission are found quickly. The earlier a child’s photo is removed from an unknown site, the more protection that child is afforded from repercussions in the offline world.

 

Nikki Williams

Bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

Crime and Punishment: The Criminalization of Online Protests

 

Online petitions, boycotts and speaking out on social media are common ways to raise your voice about a particular issue or individual. But a more controversial method, hacktivism (hacker + activism), has been increasingly employed to further agendas. Hacktivism is defined as hacking, or breaking into a computer system, for political or social ends, and it is currently illegal. Proponents claim hacktivist actions mirror real-world protests but incur harsher penalties because they are carried out in the online environment. Are they right? Are hacktivists indeed treated in a way that violates our notions of justice and fairness?

The Computer Fraud and Abuse Act (CFAA), also known as the “anti-hacking law,” was created in 1984 to criminalize unauthorized access to computers. Since then, the law has been modified five times, with each modification resulting in a broader definition of what constitutes “unauthorized access.” Opponents of the CFAA argue that the expansion potentially regulates every computer in the U.S. and many more abroad. Intentionally vague language within the law allows the government to claim that something as minor as violating a corporate policy (as in the case of the United States v. David Nosal) is equivalent to a violation of the CFAA, putting even minor offenders at risk for serious criminal charges. But is comparing hacktivism with real-world protests an apples-to-apples equation?

Hacking has been around for decades. Steve Wozniak and Steve Jobs first hacked into the Bell Telephone System in the mid-70s with the famous “blue box” to place (i.e. steal) free long-distance calls. In the mid-1980s, a college student protesting nuclear weapons released a computer virus that took down NASA and Department of Energy computers. And in 1999, Hacktivismo, an international cadre of hackers, created software to circumvent government online censorship controls, as they believe freedom of information is a basic human right. Since then, the rapid proliferation of online groups able to shut down individual, corporate and even government computers has become a focus for the FBI and other agencies concerned about this trend.

Hacktivism made headlines in 2010 when the group Anonymous reacted to events arising from the arrest of WikiLeaks founder Julian Assange. Assange’s detainment, which coincided with the WikiLeaks release of classified information hacked from U.S. intelligence channels, had supporters outraged. Feelings escalated when MasterCard, Visa and PayPal refused to process donations earmarked for Assange’s defense fund. Anonymous fought back by hacking into and disrupting the websites of all three financial companies, causing service outages and millions of dollars in damage.

Anonymous achieved its goal by mounting a distributed-denial-of-service (DDoS) campaign. Interested parties could join the Anonymous coalition by direct participation or by downloading a tool that allowed their computer to be controlled by Anonymous operatives. Dr. Jose Nazario, a network researcher with Arbor Networks, claims that it takes as few as 120 computers linked together in this way to bring down a large corporation’s web presence. Anonymous insists this technique is not hacking; it is simply overloading a website by flooding it with traffic that makes it impossible to load pages for legitimate visitors. According to Dylan K., an Anonymous representative: “Instead of a group of people standing outside a building to occupy the area, they are having their computer occupy a website to slow (or deny) service of that particular website for a short time.” But this is not equivalent to a real-world protest: hacktivists don’t need the voices of thousands for their protest to be effective. Less than 120 computers would suffice to take down an entity—something 120 people on the sidewalk could not manage.

The FBI soon unearthed the identities of some of the hacktivists involved in various Anonymous hits. One, Fidel Salinas, was charged first with simple computer fraud and abuse. By the end of seven months, there were 44 counts of felony hacking looming over him for his part in disrupting government servers. Salinas claims the escalating charges were due to the FBI increasing pressure on him to turn informant. This kind of “encouragement” is nothing new. Cybercriminal and Anonymous hacker Hector Xavier Monsegur, under the internet alias “Sabu,” initiated the high-profile attacks on MasterCard and PayPal in response to the Assange arrest. By 2012, Monsegur had been arrested and was busy working in concert with the FBI to unearth the identities of other Anonymous members, who were then prosecuted under the CFAA.

The Electronic Frontier Foundation (EFF), which, according to its website, is “the leading nonprofit organization defending civil liberties in the digital world,” is promoting reform of the CFAA through consumer education, petitions and other legal means. One of its central arguments for CFAA reform concerns the treatment of hacktivist Aaron Swartz, who downloaded millions of scholarly journal articles from the JSTOR database, a subscription-only service, through MIT’s campus network. Swartz’s actions were predicated on his belief that publicly funded scientific literature should be freely accessible to the taxpayers who paid for it. After his arrest, federal prosecutors charged him with two counts of wire fraud and 11 violations of the CFAA, charges carrying up to 35 years in prison and over $1 million in fines. Swartz committed suicide a few days after declining a plea bargain that would have reduced the time served to six months in a federal prison. The EFF explains that if his act of political activism had taken place in the physical world, he would have only faced penalties “…akin to trespassing as part of a political protest. Because he used a computer, he instead faced long-term incarceration.” However, the EFF seems to gloss over the fact that, no matter how pure his reasoning, when Aaron Swartz played Robin Hood he wasn’t merely trespassing — he was stealing.

In response to Swartz’s untimely death, the EFF suggested changes in the way the CFAA calculates penalties, seeking refinement of overly broad terms and arbitrary fines. Its emphasis is on the punishment fitting the crime, and its hope is to align the CFAA’s penalty recommendations more closely with those received for the same acts when they arise during a physical political protest. The EFF is currently working on a full reform proposal that they hope will restrict the CFAA’s ability to criminalize contract violators or technology innovators while still deterring malicious criminals.

It’s true that the CFAA is too broad and may allow prosecutors to apply draconian charges for misdemeanor crimes, but the EFF is not taking into consideration the real harm done by hacktivist “protests.” A physical political protest is most often a permitted, police-monitored event. It may cause temporary (a few hours or days at most) disruption of business; garner media attention; and alert the public to the seriousness of the issue. The online protests staged by “Operation Payback,” Anonymous, and most recently The Impact Team, the Ashley Madison hacker(s), resulted in far more damage and disruption to the targeted organizations than would a “real world” protest. These acts are more akin to vigilantism or even terrorism, since the hacktivists rely on intimidation in pursuit of self-defined injustice—and the outcomes often involve harm to innocent people. If a physical protest had resulted in the same outcome—a company looted, lives destroyed and money lost—it would be considered a criminal act.

Hacktivists seem hardened against the collateral damage they inflict in achieving their goals, arguing that the end justifies the means. The recent Ashley Madison scandal is a prime example of hacktivism without conscience. Hackers calling themselves The Impact Team threatened to release customer data held by Avid Life Media, Inc., the parent company of infidelity website Ashley Madison, unless the company shut down the site. They believed that Ashley Madison was faking most of the female profiles available on the site to scam more men into signing up. When the company continued operating, The Impact Team released the data, potentially ruining marriages, destroying careers, and compromising the personal data of users who now face threats of blackmail and identity theft. The company itself is facing $500 million in lawsuits, but the toll on its customers—the very people The Impact Team was claiming to help—was heavy indeed.

Similarly, Anonymous’ hacking of the PayPal website alone cost that company $5.5 million in revenue and damaged numerous small businesses and individuals who were unable to complete financial transactions during the shut-down.

Hacktivists claim their actions are equivalent to real-world protests and, as such, should be protected from criminalization. It’s true that citizens’ right to peaceful public assembly is protected by the First Amendment to the United States Constitution and has been further guaranteed by the Supreme Court. However, the law is clear that the government can put restrictions on the manner, time and place of such a gathering to preserve order and safety.

The First Amendment does not guarantee the right to assemble when there is the danger of riot, interference with traffic, disorder, or any other threat to public safety or order. One group’s right to speak out should not conflict with rights of other individuals to live and work safely. This should be true online as well as in the physical world, but hacktivists often act outside of this stricture. Mikko Hypponen, chief research officer for F-Secure, sums it up well: “The generation that grew up with the Internet seems to think it’s as natural to show their opinion by launching online attacks as for us it would have been to go out on the streets and do a demonstration. The difference is, online attacks are illegal while public demonstrations are not. But these kids don’t seem to care.”

Online groups should not be allowed to achieve their desired results using extortion, intimidation, terror or vigilantism. But it is equally important that governments and corporations not have the right to sway, direct, or otherwise channel the free will of the people toward or away from any one purpose by using force or fear of penalty. And setting laws in place that make non-violent, non-damaging civil disobedience a major infraction of the law is tantamount to muzzling free speech. Gabriella Coleman, Assistant Professor of Media, Culture and Communication at New York University, writes that if DDoS attacks by hacktivists are always deemed unacceptable, this would be “damaging to the overall political culture of the internet, which must allow for a diversity of tactics, including mass action, direct action, and peaceful of (sic) protests, if it is going to be a medium for democratic action and life.”

Both sides are wrong to some extent. The problem with internet hacktivists is the veil of anonymity behind which they hide. Real-world political protests require that people stand up for what they believe—physically. They put their faces out there, sign their names on petitions and take responsibility for their views. The Supreme Court has ruled that anonymous speech deserves protection, but hacktivism is not speech—it is action. Hacktivists can intimidate and extort individuals, corporations, and governments without having the courage to step forward. Sometimes people will take actions anonymously that they would not take under scrutiny, a truism that makes groups like Anonymous capable of causing chaos on a worldwide scale.

There can and should be many ways to speak your mind and promote your political agenda online, and you should be able to do so without fear of reprisal from law enforcement. However, intentional damage inflicted by anonymous disruptive mass action can also hurt unrelated innocent individuals. With our society’s level of reliance on internet services for business and daily living, hacktivist activity has potentially far-reaching consequences. Shutting down banking or payment capabilities doesn’t just hurt the targeted banks and credit card companies; it prevents many small businesses and individuals from conducting necessary business and impacts their daily lives in a negative way. Releasing the personal data of subscribers or customers to harm a government or company doesn’t just hurt the target—it sets thousands, sometimes millions, of lives on edge.

And let’s face it: Breaking into a store in a “real world” protest, stealing its customer lists or proprietary data and either disseminating it or destroying it is not trespassing. It’s not a misdemeanor. It’s not peaceful. It’s theft at best and terrorism at worst.

Online activists should mount an up-front, highly publicized, web-based boycott of their opponent—peacefully and legally—to exercise their freedom of public redress in the way the Constitution intended. The Impact Team could have constructed a viral message letting people know that Ashley Madison was scamming them and easily made its point without the collateral damage. And governments interested in keeping discourse alive need to take a step back from the edge of fascism by narrowing their definition of “unauthorized use” of computers to prevent minor instances of online civil disobedience from being classified as criminal offenses.

 

Nikki Williams

Bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

Guilty in the Court of Social Media

 

In the wake of the Ashley Madison hack that exposed 32 million cheaters and the public ruin of dentist Walter Palmer, there is no better time to discuss charges of guilt by social media. Communication platforms have given users of the world tremendous collective power to prosecute and punish. We—the Facebookers, bloggers, tweeters (and re-tweeters)—are an unstable, finicky force that sends lives into a tailspin for both alleged and verified offenses. Even as they level the playing field by keeping an eye on powerful figures, online ‘courtrooms’ lack what official jurors can provide—a finite and predictable sentence.

The inherent ethical problem with social media condemnation is its permanence. Unlike weighed, professional consequences or water cooler gossip, online condemnation lingers for an unknown period of time, wreaking unpredictable havoc. The correlation between a crime’s severity and its punishment ceases to exist on public platforms.

Take a look at the fall of Jonah Lehrer, a bestselling author caught embellishing quotes in several works, including his bestselling Imagine, a book about the neurology of creativity. After an ambitious reporter dug through Lehrer’s work to find false statements, bloggers and social media users jumped in with their take on the matter. “Smugly self-satisfied and pseudo-intellect are not a pretty combination,” wrote one commenter. “I’ve gone from sad to angry,” tweeted a professor who examined the problem.

In his book, So You’ve Been Publicly Shamed, British journalist Jon Ronson describes Lehrer’s seemingly endless personal and professional descent. Within days of the revelation, Lehrer’s publishers recalled his book and offered refunds to buyers. Lehrer had to resign from The New Yorker, and Wired severed its ties with him. Unsurprisingly, the ethics lecture he was due to deliver at Earlham College was swiftly canceled.

Losing a job and being dropped by a publisher are expected, justifiable consequences. Having a career ruined at the hands of buzzing bloggers and feisty commenters is not. When it comes to doling out punishments, all social media users get to chime in, and their comments about an offense remain searchable indefinitely. Even when they are not interested in causing lifelong difficulties, internet commenters cannot predict how long their words will linger.

Unsurprisingly, fears of lifelong smears lead many digitally shamed individuals to respond with pleas for forgiveness. Public apologies written by relatively unknown transgressors developed hand-in-hand with social media, and they are oddly discomforting phenomena. Like Josh Duggar, the reality TV personality caught cheating on his wife, and Justine Sacco, the PR specialist who published an inappropriate tweet, Lehrer asked us for a second chance. He offered his apology in front of an audience of 300 at the Knight Foundation’s Media Learning Seminar. Livestream broadcast the speech, and he spoke next to a large Twitter feed that displayed comments from those who wished to offer their two cents. He addressed the audience, saying:

My mistakes have caused deep pain to those I care about. I’m constantly remembering all the people I’ve hurt and let down. Friends, family, colleagues, my wife, my parents, my editors. I think about all the readers I’ve disappointed . . . I have broken their trust. For that I am profoundly sorry. It is my hope that some day, my transgressions might be forgiven.

This speech belongs behind closed doors, directed at focused listeners. Lehrer’s offense was not severe enough to involve a formal courtroom or a public plea for compassion; it certainly did not belong on a public stage where uninvolved tweeters could weigh in.

There is something awe-inducing yet unsettling about the collective power social media possesses. Online tirades stack up, living on blogs and in tweets long after their authors have forgotten about them. Individually, we can’t take back the impact of a throng, even when someone asks us to try.

The permanence of online judgment makes it all but impossible for offenders to wipe their slate clean and restart their careers. Moderators can delete particularly vile or threatening comments from old threads, but erasing the most cringe-worthy thoughts cannot undo established perceptions. It is unethical for us to punish wrongdoers indefinitely, but permanence is a core aspect of social media.

But as brutal as our criticisms can be, it would be shortsighted to completely ignore the empowerment associated with social media judgment. An online jury keeps powerful figures and oppressive unknowns in check by democratizing the justice system. You may have the money to hire great lawyers or garner the support of locals to discriminate, but if the greater public gets wind of your offense, your options for redemption shrink drastically. Social media is just too large and too widespread to quiet.

Online rants can galvanize individuals to come together and promote a cause, albeit through aggressive means. The ongoing saga of a Kentucky county clerk who refused to issue marriage licenses to gay couples despite the U.S. Supreme Court’s ruling legalizing gay marriage exemplifies activism prompted by social media outrage. As news of Kim Davis’ refusal to sign paperwork spread, people passionately expressed their pain and anger over continued legal obstacles.

The lawsuits filed against Davis cast her into the limelight, leading readers to share reactions and respond collectively. A USA Today update on the story garnered over 96,000 Facebook connects, 3,000 tweets, 1,000 LinkedIn shares and 4,400 comments. “She needs to take her bible and go stand in the unemployment line!” wrote one person. “I’ve seen better looking heads on lettuce!” wrote another. The headlines incited mass protests and even motivated a non-profit organization to erect a billboard calling Davis out. “Dear Kim Davis,” it began. “The fact that you can’t sell your daughter for three goats and a cow means we’ve already redefined marriage.”

Davis offended a plethora of people and ignored court rulings, so neither the protests nor her jailing were unwarranted. The billboard, though, was a bit much. Once we step back from the situation and cool off a bit, it is important to remember that the social media reaction will likely haunt Davis and her family for years to come.

It is impossible to tell whether the Lehrer and Davis shamings will prove to be temporary setbacks or lifelong scarlet letters. Punishments that correspond with the degree of an offense, and concessions for good behavior, are not privileges enjoyed by those judged online. Readers are eager to see justice restored, and that appetite for balance is often satisfied through punishment.

Upon reflection, I believe that most of us don’t want to be responsible for lifelong joblessness, disgrace and the unavoidable ripple effects that public prosecutions have on families, coworkers and friends. Unfortunately, the effect of rage expressed online is ultimately out of our hands. Long after the storm has passed, guilty parties and their families are left to pick up the pieces.

The court of social media is a place to be feared. It is not structured or vetted, yet its users deliver long-term punishments that outweigh crimes. If we suspect that we have been too harsh in our words, we can always step back, log off and separate ourselves from an outrage. And we will probably never know the extent of our impact. As Ronson wrote, “The snowflake never needs to feel responsible for the avalanche.”

 

Paulina Haselhorst

Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.

Altered Image, Vanished Trust: Photojournalism in the Age of Digital Manipulation

 

Every day, the trusting public looks at the work of photojournalists online, in magazines or in newspapers, assuming those visual representations of news events are truthful. These all-important images can inspire, spark debate or incite anger, action or even rebellion.

So what happens when an image is changed, whether through setting up a scene or through (digital) manipulation? There are dozens of software applications that can easily change a photograph to show whatever the creative mind of the manipulator wants it to show, for good or for ill.

The question of how – if at all – a photo should be manipulated for public consumption was aptly examined in a recent photography show in New York City. The Bronx Documentary Center hosted a curated display entitled “Altered Images: 150 Years of Posed and Manipulated Documentary Photography,” which garnered national attention for its tricky yet important subject matter.

Organizer and photographer Michael Kamber said he created the exhibit to show some of the most controversial examples of manipulated photojournalism. The photos ranged in time from as early as the American Civil War to this year’s World Press Photo contest. In that prestigious contest, some 20 percent of participants were disqualified for digitally altering their submitted images.

“The World Press Photo Contest must be based on trust in the photographers who enter their work and in their professional ethics,” said Lars Boering, the managing director of World Press Photo, in a statement about the contest controversy. “We now have a clear case of misleading information and this changes the way the story is perceived. A rule has now been broken, and a line has been crossed.”

A line, indeed, has been crossed. For a photo to be considered true, accurate and fair to the viewer, it cannot be changed digitally in any way other than cropping for size. Period. There is no way around this if you want to gain or keep the public’s trust and respect.

One notable point from the exhibit is that several of these manipulated photos were caught because someone other than the photographer altered them; in those cases, the photographer noticed the change right away and contacted the editor or publication to report the problem. But once the image is out there for the public’s consumption, the damage is done.

As a news reporter with more than 20 years of experience, I can say that I have worked with some of the finest photojournalists in the nation. I consider my years at The Detroit News among my most enjoyable, especially because of the photographers I worked with. The Photo Desk had a standard that was never doubted or questioned: You never set up a photo. Never.

What does “setting up a photo” mean? It meant you didn’t send a news photographer to a ribbon cutting; that wasn’t going to end up in our newspaper. You didn’t tell a source to prepare a “fake” scene for the photographer to capture. You didn’t give the photographer a staged moment to show off the story’s central theme. If the story didn’t happen when the photographer was there, there was no story. A photo had to happen naturally – as if the photographer were a fly on the wall, observing things as they happened and capturing the image as though you and that photojournalist were there together watching the story unfold.

I believed in that mantra then, and I still believe in it now. I trusted every image that I saw in the newspaper then, and I want to believe in every image that I see now. But when you see the problems that have come up within the photojournalism world because of digital manipulation, you see why this trust has been shaken.

If you think that I’m taking too strong a stance, let me back it up with comments from three photographers I have worked with on a regular basis, all of whom say the same thing: If you see their photos, you should be able to trust that they were created honestly and without digital alteration. If you’re creating art, that is one thing, and some changes from the original photo are to be expected. However – and they could not have been more adamant about this – if you are purporting to be a photojournalist presenting news, that is something entirely different.

Jessica Muzik comes to the subject from two points of view. She is the Vice President of Account Services for Bianchi Public Relations, Inc., as well as the owner of Jessica Muzik Photography LLC. Her photographs have been published online and used by news organizations.

“I don’t think one can lose more credibility as a photojournalist than to alter or set up photos,” Muzik said. “The public trusts photojournalists to capture real moments and timely events, not to compromise their ethics by altering an image to fit the needs of a particular media outlet.”

“In my line of work, I always say that what the media reports is considered 10 times more credible than any advertisement that can be placed because we trust that the media are objective in all matters and that includes photojournalists,” Muzik added. “If a photojournalist feels the need [to] alter or set up an image, that is not photojournalism, but rather photography.”

Asia Hamilton is the owner of Photo Sensei, a company that offers photography workshops to professionals and amateurs in several cities. Her goal in part is to help people in image-sensitive cities, including Detroit, show off their photo skills with respect to themselves and the community, demonstrating both their creativity and the city’s best assets.

Because Detroit often gets a bum rap when it comes to its “ruin porn,” or images of the city’s abandoned or burned out buildings, Hamilton often works with people to find other ways to highlight Detroit via her Photo Sensei classes. Thus, she too has a tough stance when it comes to manipulating an image within the news realm.

“I think photo altering is ok if the photography is art or editorial related,” Hamilton said. “However, photojournalism should not be altered because it is a documentation of facts. The news can only be trusted if it is completely factual.”

My favorite comment came from John F. Martin, a news photographer who has a commercial business that does work for news agencies as well as corporations.

“Staging or otherwise manipulating an image from a news event is lying, plain and simple. It's no different than a writer making up a quote. This was instilled in us on day one of journalism school (Ohio U, '96). It turns my stomach when I read about these seemingly increasing incidences,” Martin said.

That’s the crux of the problem, isn’t it? Photo manipulation has happened too much and too often. That’s reprehensible and cannot be allowed to stand. The situation has grown so dire that for-profit businesses have been established to find and expose photo manipulation.

One such company is Fourandsix Technologies Inc., whose founder, Dr. Hany Farid, recently introduced a new service called Izitru. Its purpose is to allow anyone who puts images online to prove beyond doubt that those images are authentic. They can do this by submitting the photos for testing, thereby receiving an Izitru “trust rating” to show viewers.
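Izitru’s internals are proprietary, but one widely known forensic heuristic, error level analysis, gives a feel for how automated manipulation checks can work: re-save a JPEG at a fixed quality and look at how strongly each region differs from the re-saved copy, since a pasted-in region often stands out from the rest. The Python sketch below uses the Pillow library; the file name is a placeholder, and this is a generic illustration, not Izitru’s method.

    # Rough error-level-analysis sketch. Requires Pillow: pip install Pillow
    import io
    from PIL import Image, ImageChops

    def error_levels(path: str, quality: int = 90) -> Image.Image:
        original = Image.open(path).convert("RGB")
        buffer = io.BytesIO()
        original.save(buffer, format="JPEG", quality=quality)
        buffer.seek(0)
        resaved = Image.open(buffer)
        return ImageChops.difference(original, resaved)

    diff = error_levels("news_photo.jpg")  # placeholder file name
    print("Peak difference per channel:", [hi for (_, hi) in diff.getextrema()])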

Yes, the world of photojournalism has come to that—a trust rating—frightening and unacceptable.

 

Karen Dybis

Karen Dybis is a Detroit-based freelance writer who has blogged for Time magazine, worked the business desk for The Detroit News and jumped on breaking stories for publications including City’s Best, Corp! magazine and Agence France-Presse newswire.

Is Ad-blocking Theft?

 

No one likes online ads. Nobody enjoys a life-size shampoo bottle dancing across the morning news or a pop-up window obscuring cute cat pictures. “Creepy” best describes remarketing ads—products you looked at once that then follow you around the rest of the internet.

So out of the internet’s collective cry of “DIE, ADS!” ad-blocking software was born. According to The Economist, an estimated 200 million people worldwide use ad-blocking software regularly, up dramatically from about 25 million in 2009. The makers of Adblock Plus, the most popular ad-blocking service, say their software has over 400 million downloads to its credit.

Ad-blocking initially worked on desktop and laptop computers. But as mobile adoption increases, the ad dollars—and ad-blocking software—follow. In mid-September 2015, Apple released iOS 9, its first operating system to have built-in ad-blocking support for Safari (its default web browser). A few days later, Tumblr co-founder Marco Arment released his $3 ad-blocking app called Peace, which quickly shot to #1 among paid apps in the iTunes Store (at least before he pulled it due to a crisis of conscience; more on that below).

And why shouldn’t you block ads? They’re ugly, distracting, and often include a tracking pixel that mines your personal data and online activity.

Bye-Bye, Privacy

It’s eerily obvious to anyone who has ever been targeted by Facebook’s ads: The finely tuned targeting options go way beyond what people post to the site. Most people, for instance, never explicitly reveal their salary, favorite products, or intention to buy a car—but Facebook has ways of finding out. As online ad blogger Larry Kim writes, “Getting married soon? Taking medication for hypertension? Love reading murder mysteries? Facebook probably knows.” But how?

It’s not just Facebook, though. Websites increasingly harvest your personal info, evidenced only by a tiny pop-up bar that says, “Using our site means you agree to our terms and conditions.” Those terms and conditions can include allowing the site’s publisher to track your behavior internet-wide. Data brokers then compile this information to build a profile on you and sell it to advertisers. Big Data company Acxiom, for example, holds an average of 1,500 “data points” per person—details you never explicitly divulged, like being an empty nester with a net worth below $100,000 or having a new teen driver in your household.

“All of that tracking and data collection is done without your knowledge, and—critically—without your consent,” wrote Arment in a blog post on ad blocking. “By following any link, you unwittingly opt into whatever the target site, and any number of embedded scripts from other sites and tracking networks, wants to collect, track, analyze, and sell about you.”

It’s chilling, which is why people love Ghostery, a sort of ad-blocker on steroids. Ghostery goes beyond simply blocking ads. It identifies trackers on the Web pages you’re on and allows you to block them collectively or individually. For example, while browsing your favorite news site, you can choose to block BlueKai, which Ghostery calls “the largest auction marketplace for all audience data,” and the ominously named “Scorecard Research Beacon” while enabling commenting widgets. It’s brilliant!
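At its core, this kind of tracker identification amounts to spotting third-party resources on a page and checking them against a blocklist. The Python sketch below is a deliberately simplified illustration using only the standard library; the two blocklist entries are just examples, and real blockers such as Ghostery intercept requests inside the browser against far larger, curated lists.

    # Simplified illustration: find script tags whose source matches a tiny blocklist.
    from html.parser import HTMLParser
    from urllib.parse import urlparse

    BLOCKLIST = {"bluekai.com", "scorecardresearch.com"}  # illustrative entries

    class ScriptFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.script_hosts = []

        def handle_starttag(self, tag, attrs):
            if tag == "script":
                src = dict(attrs).get("src")
                if src:
                    self.script_hosts.append(urlparse(src).netloc)

    def flag_trackers(html: str):
        finder = ScriptFinder()
        finder.feed(html)
        return [host for host in finder.script_hosts
                if any(host.endswith(domain) for domain in BLOCKLIST)]

    page = '<script src="https://tags.bluekai.com/site/123"></script>'
    print(flag_trackers(page))  # ['tags.bluekai.com']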

Fighting Back—or Punching Down?

But websites that need ad revenue might disagree. After all, you’re supporting them by passively allowing advertisers to market to you. Reading a listicle, essay or slideshow is free; running a media platform is not. Conventional wisdom says ads keep the lights on at little nonprofit news sites and indie style blogs.

For some, it’s true. In a profile, Columbia Journalism Review twice notes how central ad dollars are for Rookie Magazine, Tavi Gevinson’s online mecca for teen girls. Ken Fisher, founder and editor-in-chief of the technology news website Ars Technica, pleaded in 2010: “Imagine running a restaurant where 40 percent of the people who came and ate didn't pay. In a way, that's what ad blocking is doing to us.”

Similarly, guilt over harming the little guy was Arment’s motive for killing his ad-blocking app. “Ad blockers come with an important asterisk: While they do benefit a ton of people in major ways, they also hurt some, including many who don’t deserve the hit,” he wrote on his blog. That’s the thinking behind an option in Adblock Plus called Acceptable Ads. Users can now opt to block all ads or allow the less-ugly, less-intrusive ones through.

However, ads aren’t the be-all and end-all. Some nonprofit news sites rely heavily on grants and donations; ad revenue is a smaller piece of the pie. Fashion bloggers need readers to buy their Etsy wares and click on their affiliate links. Ads are important, yes, but they aren’t everything.

And because ads and online trackers are increasingly creepy, some are hoping online users will be willing to pay a subscription fee instead. As TechCrunch contributor Danny Crichton writes, “Rather than scream ‘theft!’ at [ad-block users], we need to find a new approach that takes into account their fears while ensuring that online media still has a place in business.”

Another Way

A few sites are wising up. Dating site OkCupid shows a message to ad-blocking users complimenting their tech savvy (“You’re using ad-blocking software like a boss”) and asking for a reasonable one-time donation of $5. And The Guardian prompts donations with the message: “We notice that you’ve got an ad-blocker switched on. Perhaps you’d like to support the Guardian another way?”

At first, paying for something you’re used to getting for free seems ridiculous, especially to Millennials, less than half of whom pay for music, cable TV, or video games. But some are willing to shell out money to avoid ads.

TV service Hulu just started offering an ad-free package for $12 a month, only $4 per month more than its usual ad-studded package. “There's a small segment of the world that are just avoiding ads at all costs,” said Peter Naylor, the Hulu SVP of Ad Sales. “The good news with this new plan is that we have an opportunity to do business with them.” (Fine print: Seven shows still play ads before letting you watch them, so not even “ad-free” is completely ad-free.)

Hopefully a sustainable alternative will arise: a way to access the content you want, avoid ugly ads and invasive trackers, and support businesses at the same time. Crichton suggests Americans pay a flat monthly fee of about $15 tacked on to our internet bills or cell service payments in exchange for an ad-free internet. Last year’s online ad spending ($51 billion) divided by America’s internet users comes out to $14.17 per month, he reasons. Of course, this sounds great for us consumers, less so for the data brokers that want to build detailed profiles about our spending habits and then sell highly targeted ads.
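Crichton’s arithmetic is easy to check. The back-of-the-envelope calculation below reproduces his per-user figure; the roughly 300 million U.S. internet users is the count his numbers imply, not a figure stated in this piece.

    # Back-of-the-envelope check of the $14.17-per-month figure.
    annual_ad_spend = 51_000_000_000  # $51 billion in online ad spending
    us_internet_users = 300_000_000   # implied by Crichton's math (assumption)

    monthly_fee = annual_ad_spend / us_internet_users / 12
    print(f"${monthly_fee:.2f} per user per month")  # about $14.17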

Ultimately, the online advertising industry needs an overhaul. What Matt Asay wrote for CNET five years ago rings just as true today: “Online media publishers should change, as asking consumers to change is a recipe for failure—and for stagnation rather than innovation in business models. It's not the consumer's job to figure out a successful business model for the vendor.” Your move, publishers.

 

Holly Richmond

Holly Richmond is a Portland writer. Learn more at hollyrichmond.com.

 

Crowdsourcing and Campaign Reform

 

With the 2016 presidential elections now on the horizon, there’s no escaping ‘the campaign’—whoever’s campaign it might be. The proliferation of political advertising across all media, from billboards to iPhones, subjects the voting public to a constant barrage of targeted advertising. The strongest campaigns, in an attempt to master cross-platform messaging, have been making innovative strides in marketing and media, which certainly adds a sense of novelty to an otherwise staid political contest. One example is the gradual introduction of digital crowdsourcing into the political process. On one hand, crowdsourcing is an essentially democratic notion; beneath the surface, though, it is still exclusionary by nature. As many see it—including attorney, activist, and Harvard Law Professor Lawrence Lessig—the tricks of the contest may be new, but the game is the same…and it’s only getting more expensive to play.

As a primary indicator of this fracture, Lessig, in a recent article on Medium, points to the issue of money in politics, which features prominently in debates and campaign promises but rapidly fades into the background once candidates take office. Indeed, there is a general consensus that money—and the associated political and corporate corruption—should be at the top of the president’s priority list, second only to “creating good jobs,” as Lessig cites from a 2012 Gallup poll. But in practice, an official taking on finance reform is setting herself on a road towards destruction.

“To take on the influence of money is not to take on one party, but both parties,” Lessig writes. “The enemy of Congress is a failed president. […] So on the long list of promises that every normal president enters office with, the promise of reform always falls to the bottom.”

Broken promises of campaign finance reform may not be the explicit fault of a hypocritical chief executive, but rather the attrition caused by the mechanics of American politics in general. This leads us to Lessig’s central claim: “That on the issue of fundamental reform, an ordinary president may not be able to lead.” Instead, Lessig proposes the idea of a trustee president, which he defines as a prominent, “well-liked leader” who declares her presidential candidacy on a single issue. After this issue is resolved, the trustee president would step down and hand over the reins to the VP for the remainder of the term. For finance reform, this trustee president “would use every power of the executive to get Congress to enact fundamental reform”—and then move on.

The idea of a trustee president is a unique proposition—perhaps even a ‘hack’ of the political system. “Our democracy will not heal itself,” Lessig writes. “Reform will not come from the inside alone. It needs a push.” Which is why, this past August, Lessig announced that if he raised $1 million by Labor Day, he would run for president on this exact model, in order to pass what his team has dubbed the Citizen Equality Act of 2017. As of Sept. 6, with money raised from 10,000 unique donations, Lessig is now attempting to run on the Democratic ticket (though according to his recent op-ed at Politico, the Democratic party isn’t being very receptive). The drama of Lessig’s thought-experiment-turned-real-experiment will be interesting enough to follow in the coming months, but what’s unique about his proposed Citizen Equality Act is that, in addition to being modeled on existing reform proposals, his will also “crowdsource a process to complete the details of this reform and draft legislation by the start of 2016” (emphasis added).

Crowdsourcing is a digital-age term, though the notion of crowdsourcing in politics is a fundamentally democratic idea. Still, its digital nature keeps its referents firmly grounded in the present, as Tanja Aitamurto suggests in her 2012 book, Crowdsourcing for Democracy. She defines crowdsourcing as “an open call for anybody to participate in a task open online, where ‘the crowd’ refers to an undefined group of people who participate.” This is in contrast to outsourcing, where a task is specifically assigned to a defined agent. Popular uses for crowdsourcing range from funding product or project development, as with sites like Kickstarter and IndieGoGo, to more refined applications, including urban planning, product design, mapping, species studies, and even solving complex technical or scientific issues. It’s only a small jump for crowdsourcing to be used for policy and reform, especially in democratic contexts, as Aitamurto demonstrates through various international case studies, including constitution reform in Iceland, budget preparation in Chicago, and the White House’s We The People petition system.

Aitamurto writes: “When policy-making processes are opened, information about policy-making flows out to citizens. […] Opening the political process holds the potential to increase legitimacy of politics, and increasing transparency can strengthen the credibility of policy-making.” In an ideal or direct democracy, especially in a modern context, crowdsourcing makes sense as a tool for mass communication and for encouraging public participation. Our increased reliance on technology for economic participation, communication and citizenship casts crowdsourcing as a natural outgrowth of our cultural evolution, so it also makes sense that crowdsourcing would be applied to politics. But ethically, crowdsourcing’s promise of cross-platform policy participation is not so equitable, especially when we begin to account for income and literacy as prerequisites for entry into the digital crowdsourcing process.

Consider a few examples of how crowdsourcing is exclusionary, despite its ideal democratic applications: the person without a smartphone, the family without an Internet connection, or the digitally illiterate citizen (i.e., the person who has never sent an email or used a computer). You can’t help crowdsource if you’re not part of the crowd; that is, if you’re on the wrong side of the digital divide. Of course, even before the digital age, income and literacy were limiting factors for democratic participation; they’ve simply found new media for the modern age. On this point, Aitamurto clarifies: “Crowdsourcing […] is not representative democracy and is not equivalent to national referendum. The participants’ opinions, most likely, do not represent the majority’s opinion.”

The natural limitations of the crowdsourcing process, as Aitamurto suggests, can be read as a downside of crowdsourcing in a democratic context, but if that democracy is broken—as Lessig and many others say it is—a crowdsourced tactic, despite its ethical complications, might be exactly the sort of push a broken democracy needs. But in order for it to really work, everyone needs to agree to the plan, and trust that its leaders will deliver on their promises. A task like that will take a lot of work, supercharged rhetorical finesse, and a massive amount of popular traction that Lessig currently lacks; a Suffolk University/USA Today poll from Oct. 1 lists a mere 0.47 percent of respondents stating support for Lessig. His plan makes sense, and the notion of entrusting the trustee with a crowdsourced reform is a novel reflection of idealized democratic values, but novelty in the political process is a mixed bag, especially in the midst of a pay-to-play political paradigm where not even Lessig could get a foot in the door without a cool million. In this way, crowdsourcing, despite being a novel digital-age concept, might simply be more of the same.

 

Benjamin van Loon

Benjamin van Loon is a writer, researcher, and communications professional living in Chicago, IL. He holds a master’s degree in communications and media from Northeastern Illinois University and bachelor’s degrees in English and philosophy from North Park University. Follow him on Twitter @benvanloon and view more of his work at www.benvanloon.com.

Consorting with Black Hats and Negotiating with Cybercriminals: The Ethics of Information Security

 

October is National Cyber Security Awareness Month, and the fact that it’s a month instead of a day speaks volumes about the growth and prevalence of cyber crimes.

The international security company Gemalto proclaimed 2014 the year of mega breaches and identity theft. According to the company’s breach level index:

- 1,023,108,267 records were breached in 2014

- There were 1,541 breach incidents, and the number of breached records rose 78 percent from 2013

How frequently are data records lost or stolen?

- 2,803,036 every day

- 116,793 every hour

- 1,947 every minute

- 32 every second

 

North America accounted for 76 percent of total breaches.
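The per-day, per-hour, per-minute and per-second figures all follow from the annual total. As a minimal sketch of that arithmetic (simple division, nothing more):

    # Derive the breach rates from Gemalto's 2014 annual total
    records_2014 = 1_023_108_267

    per_day = records_2014 / 365      # ~2,803,036
    per_hour = per_day / 24           # ~116,793
    per_minute = per_hour / 60        # ~1,947
    per_second = per_minute / 60      # ~32

    print(f"{per_day:,.0f} per day, {per_hour:,.0f} per hour, "
          f"{per_minute:,.0f} per minute, {per_second:,.0f} per second")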

And 2015 is shaping up to be another stellar year – it has already produced high-profile security breaches involving Ashley Madison, CVS, Anthem, and even the IRS.

So far, the Ashley Madison hack has been the most high-profile breach of the year. Ashley Madison is a social-networking site for married men and women looking to find partners for extramarital affairs, and it claims to have 40 million users. To date, the site’s hackers have released seven years of credit card data, in addition to names, addresses and phone numbers - and the users’ desired preferences in potential partners. This breach has resulted in public embarrassment, marital strife, possible blackmail situations, and at least one suicide.

And while the breaches themselves are highly publicized, much less is known about the people behind the scenes who are charged with protecting company data, their responses to data breaches, and the ethical decisions they face.

A 2015 report by AlienVault, a threat intelligence and security management provider, shines a spotlight on the many issues facing security professionals. Below are the responses to three questions selected from the survey portion of the report, along with ethical analyses of the respondents’ answers.

Question 1: Do you ever visit hacker forums or associate with black hats to learn about the security you need?

- 51%: Yes

- 48.3%: No

 

Javvad Malik, the report’s author, notes that some companies forbid interactions with black hats. A black hat is a hacker who breaks into computers and networks for malicious reasons, as opposed to white hats (who may be employees or consultants) who break in to locate and identify vulnerabilities before they can be exploited. However, if the type of information needed to mount an effective defense is not available through legal channels, roughly half of respondents feel they need to do whatever is necessary to obtain credible data in a timely manner.

I spoke with Abraham Snell, who has an MBA in Technology Management from Auburn University and is a Senior IT Infrastructure Analyst at the Southern Company in Birmingham, Alabama. He views visiting hacker forums or consorting with black hats as a case in which the end justifies the means. “It is a brilliant idea,” Snell said. “It is just the reverse of criminals getting police best practices so they can be more successful criminals. In this case, the side of right is learning about the dark side before they strike. In some cases, this will be the only warning of things to come.”

Question 2: What would you do if you found a major vulnerability on a company’s system or website?       

- 61.7%: Privately disclose to them

- 12.0%: Publicly fully disclose

- 9.8%: Disclose without releasing details

- 9.5%: Do nothing

- 8.2%: Tell your friends

- 5.5%: Claim a big bounty

- 2.5%: Sell on the black market

 

While privately or publicly disclosing the vulnerability seems the most logical choice, it is not uncommon for companies to threaten legal action against the person reporting the security risk. Fortunately, only a small percentage of respondents would seek financial compensation, but it is troubling that almost 18 percent would either do nothing or just tell their friends. However, if companies provide a hostile environment in which this type of disclosure is not welcome, can security professionals be blamed for their lackadaisical attitude?

According to Snell, there are definitely ethical issues involved in the next steps taken when a vulnerability is discovered. “Even if this type of disclosure is not welcome, you have a moral obligation to reveal the vulnerability,” Snell said. “If the information is breached, people may have their financial and personal information stolen, even their identities may be stolen. If you fail to sound the alarm, you’re just as guilty as the people who actually steal the information because you knew it could happen and you did nothing.”

After viewing the other choices selected by respondents, Snell said they are negligent at best, and most likely criminal in most states. “Telling your friends, unless they are security experts or regulators, is the same as doing nothing,” Snell said.

Regarding the bounty, Snell said, “I’m unclear on how you claim a big bounty unless it becomes a major international issue because companies will not pay their own employees to do what they are already paying them to do.” And if the employee tried to claim a bounty anonymously, that could lead to various legal implications. “The vast majority of people who do what Edward Snowden did end up like he is … a man without a country,” Snell said. He also explained that selling the info on the black market is both unethical and illegal.

Question 3: If your company suffers a breach, what is the best course of action?

- 66.8%: Use the event to convince the board to give you the budget you need

- 25.7%: Tell the regulator, pay the fine, and move on

- 9.0%: If nobody knows, just keep quiet

- 6.6%: Go to the media and brag about how you ‘told them so’

 

Overwhelmingly, the survey respondents feel that the only way they can get the resources they need is in the aftermath of a major cyber attack.

In fact, former White House Cyber Security Advisor Richard Clarke once said, “If you spend more on coffee than on IT security, you will be hacked. What's more, you deserve to be hacked.”

Darryl Burroughs, deputy director of IMS Operations for the City of Birmingham, Alabama, shared an interesting perspective with me: “If a compelling case was made to increase the cyber security budget and the company blatantly refused to do so, the ethical dilemma rests with the Chief Financial Officer and others who make budget decisions that do not take into consideration IT requests,” Burroughs said.

He added, “The real question is what unethical decision did they make when they funded something less important than the company’s security?”

And that’s also a question that Sony’s Senior Vice President of Information Security, Jason Spaltro, has likely asked himself over and over again. Back in 2007, Spaltro weighed the likelihood of a breach and concluded, “I will not invest $10 million to avoid a possible $1 million loss.” At the time, that may have sounded like an acceptable business risk. However, in 2014, when the company’s data breach nightmare dominated the headlines - and late night talk show monologues - for months at a time, that $10 million would have been a sound investment.

Snell said there are a lot of factors that determine if using a breach to increase the budget is ethical or not. “I wouldn’t say most companies wouldn’t increase the budget anyway, but I would say that many current and previous executives are not trained in technology, so the threat of security breaches is not a topic that resonates with them,” Snell said.

As a result, he thinks that in many cases it takes a major incident to get funding funneled to the right programs that will protect the company. “The problem with security is that it is mainly a cost when things are going well.  You only see the wisdom of the investment after a breach occurs or is attempted.”

On the other hand, Snell said if the budget is adequate, and fear mongering is being used as a tactic to get more money, that is definitely unethical.

Negotiating With Cybercriminals

The process of retrieving stolen data from cybercriminals is another ethically murky area for security professionals. A recent whitepaper by ThreatTrack Security reveals that 30 percent of respondents would negotiate with cybercriminals for data held hostage.

However, 22 percent said it would depend on the stolen material. Among this group:

- 37 percent would negotiate for employee data (social security numbers, salaries, addresses, etc.)

- 36 percent would negotiate for customer data (credit card numbers, passwords, email addresses, etc.)

I also spoke with Dr. Linda Ott, a professor in the department of computer science at Michigan Technological University who teaches a class in computer science ethics, about negotiating with cybercriminals.

As with most ethical questions, she does not believe there is a simple answer. “One might argue that a company should be responsible for paying whatever costs are necessary to recover the data since it was presumably because of the company's negligence that the information was able to be stolen,” Ott said.

She explained, “However, unlike paying a ransom for the safe return of a person, the return of the data does not guarantee that the cybercriminals no longer have the data. And if they have a copy, paying the ransom merely amounts to enriching the criminals with no gain for the company whose data has been compromised.”

However, Ott noted that in certain situations the case for paying the ransom would be stronger. “For instance, if the company did not know what employee information was compromised, one might argue that they should pay for the return of the data,” Ott said. “In this scenario there is a benefit to the victims of the crime since they could be accurately notified that their information had been stolen.”

Big Brother: Friend or Foe?

ThreatTrack’s survey also reveals a range of opinions regarding the government’s role in cybercrime extortion investigations:

- 44 percent said the government should be notified immediately and granted complete access to corporate networks to aggressively investigate any cybercrime extortion attempts

- 38 percent said the government should establish policies and offer guidance to companies who fall victim to cybercrime extortion

- 30 percent said companies should have the option of alerting the government to cybercrime extortion attempts made against them

- 10 percent said the government should make it a crime to negotiate with cybercriminals

Ott said the fact that most companies do not want government intervention is problematic. “Without government investigations of these matters, the cybercriminals remain free to continue their illegal activities,” she said. “This can ultimately lead to the theft of information of many more people.”

However, she explained, “Companies tend to do their analysis based on consideration of the impact on their reputation and the potential impact on their stock price, etc.  They have little motivation to consider the bigger picture.”

So, how long did it take you to read this article? If it took you five minutes, 9,735 data records were lost or stolen during that time frame. That’s why Burroughs concludes, “The question is not if you will be breached - the question is when.”

 

Terri Williams

Terri Williams writes for a variety of clients including USA Today, Yahoo, U.S. News & World Report, The Houston Chronicle, Investopedia, and Robert Half. She has a Bachelor of Arts in English from the University of Alabama at Birmingham. Follow her on Twitter @Territoryone.

Google Dominance or Information Age Misunderstanding?

 

Google. A brand so enormous, it’s a verb. We endlessly Google, but we never Yahoo or Bing. And many of us have never even heard of competing search engines like Gigablast, Yandex, or Qwant. This sheer scale is the problem—the massive Google has seemingly shouldered out most of the competition, leaving the remaining few to languish in the shadow of its greatness. Google’s rapid surge from startup to internet superpower has drawn the eye, and ire, of competitors and legislators alike.

Since its humble beginning in 1998, Google has grown exponentially through the application of superior and innovative technologies to become more than just a search engine. With the financial heft to divert substantial funding toward innovation, Google is always at the leading edge of new, exciting and helpful technological advances. In a show of savvy business sense, the company has eagerly bought up competitors and other successful companies to help spread its influence across the Web and the world. Some acquisitions are familiar, such as image sharing service Picasa, video sharing service YouTube, Web feed service FeedBurner, and mobile device manufacturer Motorola. Other additions to the Google family are not as mainstream, but they help provide Google with a full complement of technology from digital coupons and facial recognition to e-commerce, cloud computing, and more, although its core product remains search engine technology.

It is this search technology with its complicated algorithms that has caused national and international antitrust coalitions to take a dim view of some of the tech giant’s business practices. Over the past several years, Google has been accused numerous times of using its vast influence and market share to hinder competition.

So what exactly is Google’s share of the Web search market? comScore, an American internet analytics company, recently released its search engine rankings for September 2015, giving Google a 63.9 percent share of the market. Peter Thiel, PayPal co-founder and author of Zero to One, a book that calls out Google as a monopoly, cites a similar percentage. But how accurate is this number? According to Priceonomics, Google’s share is much closer to 94 percent and is perhaps even higher if worldwide numbers are included. This discrepancy might be a reflection of the partnership between comScore and Google. The two have been coordinating on the creation of an audience metrics program, so it is possible that comScore has a vested interest in protecting its business ally from suggestions of dominance. It could also be a simple error of calculation, although Google would probably draw even more unwanted interest by correcting it upward. Regardless, even 63.9 percent is an incredible share of a burgeoning market—one that amounted to more than $66 billion in revenue for the company in 2014.

Despite its steadily increasing progress, Google tries to downplay its success whenever it can, and with good reason. In 2011, after several years of scrutiny, the Federal Trade Commission (FTC) launched an investigation into business practices centered on Google’s alleged “search bias.” Search bias is the term for what complainants say occurs when Google exploits its search algorithms to promote its products over those of competitors.

The company disagreed vehemently with the allegations and sought to defend itself in the public eye as well as in the courtroom. To help sway public and regulatory opinion in their favor, Google helped put on several events at George Mason University’s Law and Economics Center in Washington, D.C. that purported to increase discussion about search competition on the internet. Attendees included FTC regulators, congressional staffers and federal and state prosecutors. Emails obtained by The Washington Post revealed Google’s behind-the-scenes involvement with organizing the conference and inviting attendees. The conference proved fruitful for Google: The technology and legal experts present supported Google’s position, arguing their points in front of regulators who would later determine that there was no hard evidence of wrongdoing in Google’s changes to their search algorithms.

Google also commissioned a paper by noted conservative judge and antitrust scholar, the late Robert Bork, with antitrust professor Gregory Sidak, to help bolster their position. Bork and Sidak wrote, “That consumers can switch to substitute search engines instantaneously and at zero cost constrains Google’s ability and incentive to act anti-competitively.”

In addition to their attempt at shaping public and media discourse on the subject of their search practices, Google continues to try to influence decision-makers through monetary donations to companies and individuals that will support them. In the first quarter of 2015, Google spent approximately $5.47 million lobbying a select group of legislators on subjects such as privacy and competition issues in online advertising, openness and innovation in online services and devices, and international internet governance.

This last point, international internet governance, is a hot topic for Google as Margrethe Vestager, the European Union’s (EU’s) competition commissioner, this year made a formal complaint against the company for using its dominance to bias Web searches. This complaint marks the first time Google has faced formal charges for anti-trust violations. As an initial response, Google defended its business practices in a blog post, stating: “While Google may be the most used search engine, people can now find and access information in numerous different ways—and allegations of harm, for consumers and competitors, have proved to be wide of the mark.” Vestager’s recent charge isn't Google’s first encounter with the EU—they’ve been under investigation since 2010 for anti-trust violations in the European market for promoting their products at the expense of their competition. The EU’s first three-year investigation of them ended in February 2015 with Google agreeing to “make concessions on how they display competitor’s links.”

Currently, the beleaguered giant also faces charges by Indian investigators who sent concerns regarding anticompetitive practices and search dominance to Google’s headquarters last week after a lengthy three-year investigation. They intend to pursue the matter formally, pending further fact-finding.

So, is Google dominance a reality or just a smoke-and-mirrors attempt by competitors and governmental agencies to slow down the company’s explosive growth? According to Investopedia, a monopoly is a single company or group that owns all or nearly all of the market for a given product or service. While Google isn’t the only company that provides search services, it does have the lion’s share of the market, giving it the ability to manipulate search results that can indeed hinder competition.

That, coupled with Google’s sometimes strong-armed tactics with competitors, makes it suspect. In the case of Yelp, a review service that has a strong following in the restaurant category, Google used its fiscal power to attempt to purchase the successful service outright. When Yelp turned down Google’s offer, the company responded first by buying a competitor, Zagat, and then by “borrowing” Yelp results to support Google’s local search results content. It also created programs such as City Experts (recently replaced by Local Guides) to build a network of reviewers and experts that can serve local areas in much the same way as Yelp. In another move to swipe market share from local review competitors, its Local Carousel, a series of images and ratings that pops up during local information searches, is positioned to grab consumer attention and shift it away from organic search results in favor of the Google product.

Detractors have long pointed to Google’s ever-changing algorithms as evidence that Google is manipulating search results on purpose and for its financial gain. Google may be doing this, but even if it is, these actions are protected as free speech. Google has long argued that, just as an editor chooses which stories to print or not, and which to put on the front page, Google’s algorithms edit what the consumer sees. This type of editorial control is protected under the First Amendment, regardless of how the results are shown.

Even though Google grew its enormous market share fairly through exceptional service and cutting-edge technologies, it is still possible, and even easy, to access the internet without using its services. The open architecture of the internet gives consumers direct access to websites without using a search engine. Web browsers provide customizations that allow content to be accessed sans search engine, and mobile apps have proliferated as a new way of searching for needed content. If Google manipulates its algorithms to support its products, to the consumer it is not that different from watching a news channel that gives fuller coverage to stories that support its political viewpoint, or a magazine that favors advertisers with whom it has partnerships.

Regardless of how you view Google’s alleged dominance of the market and whether or not you agree with their supposed manipulation of search results, it is clear that there is a long way to go in defining the boundaries of online super companies. Their structure and essence are clearly a challenge to conventional thought about monopolies since current antitrust legislation is focused on businesses that developed during an industrial age rather than an informational one. Internet superpowers like Google, Amazon, PayPal and Facebook have an opportunity to rewrite the definition of fair competition for a global online community.

What is evident from the controversy surrounding Google’s practices is that there is significant confusion over what constitutes anticompetitive actions in the online world. Our current scramble to adjust to a post-industrial economy puts the cart before the horse by waiting for concerns to arise before addressing them. To rectify this, scholars, business people, legislators and consumers should unite to come to a mutually agreeable understanding of issues faced exclusively by online businesses. This understanding should encompass not only best business practices but also exhibit a sensitivity to the difference between competitiveness online versus in a brick-and-mortar world. A litmus test for anti-competitiveness constructed without a specific business in mind (e.g. Google) would be an excellent first step toward a protective antitrust policy geared toward the information age.

 

Nikki Williams

Bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at gottabeewriting.com

Bridging the Digital Divide

 

With computer literacy becoming an increasingly important skill in college and the workforce, middle schools and high schools across the nation must prepare students to meet modern expectations. This is a major challenge for underfunded districts where money is sparse and the costs of equipment, high-speed Internet and training seem out of reach. But the digital divide, the gap between those who have access to computer technology and those who do not, will not go away without school investment. Although funding for critical programs is often at stake, school district representatives must decide whether it is their ethical responsibility to integrate computers into their curricula. Due to the growing digital access gap, the answer to that question should be a resounding ‘yes.’

The myriad costs associated with providing classroom computers prevent many budget-strapped administrators from adopting a new, technology-focused teaching approach. It is understandably difficult for school administrators to ask teachers to utilize digital tools if investing in technology means putting off faculty raises. Unfortunately, the digital gap will continue to widen if it is not addressed, and administrators need to prioritize the use of modern equipment.

Nice computer labs are no longer the bar in most middle schools and high schools. It is the norm for teachers to incorporate technology into the classrooms, allowing students to participate in lessons by working on computers individually or in groups. Many schools are also transitioning to one-to-one (1:1) curricula wherein all children are assigned a laptop or tablet they can use in the classroom or at home. To truly prepare students for the road ahead, schools must move beyond labs and into teaching spaces, encouraging instructors to assign tasks that involve computers.

Long-term training is the most important outcome of computer-rich learning environments. But digital investments also pay off in the short term, as teachers benefit from organizational tools, immediate feedback and ‘differentiated learning’ applications. For example, teachers can customize lessons to meet each student’s level of understanding by asking the class to watch videos or complete assignments and answer questions on their own. Depending on their speed and results, software and online applications will provide students with new content, give them time to complete work or offer review assistance on confusing sections. Accounting for the individual needs of students is much more difficult when one message applies to a diverse classroom.

Low-income schools need more than equipment to close the gap, so it is crucial that administrators introduce digital learning practices quickly. In addition to computers, they must provide high-speed Internet that can handle modern digital requirements. Schools need significant bandwidth for students throughout the school to go online, participate in interactive assignments and watch videos at the same time. With fewer than 20 percent of educators believing their schools offer Internet connections that satisfy their scholastic needs, connection is a problem, especially in rural areas where Internet Service Providers offer few affordable high-speed options. According to the Federal Communications Commission, 41 percent of rural schools could not obtain high-speed connections if they tried.

The federal government has stepped up and made significant strides to help underprivileged schools obtain high-speed Internet and digital learning tools. In 2013, President Obama introduced the ConnectED initiative to provide teaching assistance and high-speed Internet to schools and libraries across the country, paying particular attention to rural regions. He lauded North Carolina’s Mooresville Graded School District and its Superintendent Mark Edwards for adopting a digital curriculum despite limited resources. Several years after Mooresville schools provided a device to each student in grades 3-12, their graduation rates increased by 11 percent. Although the district ranked No. 100 of 115 in terms of dollars spent per student, it had the third highest test scores and second highest graduation rates.

Investing in eye-catching technology and updating curriculums is not easy for all districts. The combined costs of new equipment, high-speed Internet and teacher training are difficult to cover when schools have other issues to address. As Kevin Welner, director of the Education and the Public Interest Center at the University of Colorado at Boulder, stated: “If you’re at a more local level trying to find ways to simply keep from laying off staff, the luxury of investing in new technologies is more a want than a need.”

This is an understandable problem, but seeing technology as a want rather than a need is an outdated mindset. School districts should make digital curricula a priority rather than a novelty. When budgets cannot be rearranged, schools can join together to purchase equipment at reduced bulk pricing and seek funds from communities and businesses. Major companies, such as Best Buy, offer direct assistance to schools while various nonprofits partner with corporations to secure academic resources.

Before they decide to put off investments in digital education, district superintendents should remember the obstacles their students face outside the classroom. Underprivileged students who are not exposed to digital education at school are likely to be hampered by limited equipment and Internet access at home. According to the Pew Research Center, approximately one-third of households with annual incomes under $50,000 and children between the ages of six and 17 did not have access to high-speed Internet. Any access to computers helps, but schools are the primary point of exposure for many students, making it particularly important to have well-equipped classrooms and trained teachers.

Occasional investments in computer labs only scratch the surface of the problem, and sporadic splurges offer limited results. To bridge the divide, district representatives must dedicate themselves to bringing computers into the classroom. Those who prioritize other issues are not being fair to today’s students. Los Angeles Unified schools Superintendent Ramon Cortines made headlines in February when he reversed his predecessor’s popular promise to give each student, teacher and school administrator an iPad. “We've evolved from an idea that I initially supported strongly and now have deep regrets about,” he stated, adding that a more balanced approach to spending was necessary. Ultimately, both the iPad initiative and faculty raises were put off, and, more importantly, one more class of students failed to receive sufficient digital training.

Integrating the technology needed to bridge the digital divide should be a priority for administrators in all middle schools and high schools. Despite the numerous financial problems in low-income areas, school and district administrators must realize that digital curriculums are an ethical priority, even when it means putting off other school needs and seeking outside revenue. For students on the wrong side of the gap, limited digital access is a major burden, and schools need to chip away at it despite the ongoing cost.

 

Paulina Haselhorst

Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.

Trusting free speech online

 

There is an instinctive, basic trust within all of us, something that is essential to the ease with which we move through our daily lives. Think for a moment about what you trust. I trust that the alarm will wake me on time each day. Making my morning coffee, I trust that the utility companies continue to provide electricity, that what is labeled as coffee is genuinely coffee, and that the cup will not break. Think of the sense of utter betrayal when one of those conditions is not fulfilled! I still remember rushing to get up when a power failure reset the alarm, breaking my favorite mug on my 18th birthday, and the time the good coffee ran out on the morning of a big meeting.

As a U.K. citizen, I also have an instinctive trust in free speech. Freedom of speech is enshrined in law worldwide under the United Nations Declaration of Human Rights, and upheld in varying degrees nation by nation, from the First Amendment to individual enactments of human rights legislation. Because free speech is such a basic right in the U.K. and United States in particular, we assume that people exercise this right in order to express genuine fact and opinion. It’s rare to question free speech. Perhaps it’s time we started.

Leaving the house each morning, I place my trust in the manufacturer of my car to fulfill their promise of comfortable, safe, and legally compliant vehicles. Until very recently, there was a general consensus among car buyers that Volkswagen and their partners fulfilled and even exceeded their duty of care towards our environment – certainly the messages put out there by the manufacturers, and by apparently independent free speech sources across all channels, reinforced this impression. However, as reported across the globe on Sept. 18, 2015, including in the New York Times article, “VW Is Said to Cheat on Diesel Emissions,” we were misled. This came as a huge shock to consumers: trust had been misplaced. The trust in the brands involved was based on a perception fed by clever marketing, promotion, and perpetuation of an image across social channels. Perhaps discovering that all was not what it seemed should have raised questions within all of us about the veracity and independence of that font of all knowledge, the internet. However, our need to function day to day within a trust framework means that questioning free speech is not a reflexive response.

A balanced view of the world?

How do we come to trust? From babyhood, we look at the evidence around us, and learn from our own experiences. We observe the reaction of our peers to situations, and follow their lead. We have an instinct for self-preservation, which helps us to place more emphasis on evidence that seems to be balanced and fact-based, naturally fearing overt coercion. Ultimately, we trust free speech, and the birth of the World Wide Web gave us access to reams of freely given information for decision making in our daily lives. However, when you step back, can you really say that all the memes, clickbait, selective reporting and freely-given opinion is truly balanced, factual, and evidence-based? We can’t always trust free speech on the World Wide Web.

As its creator Tim Berners-Lee said when writing about the Web at 25: “When we link information in the Web, we enable ourselves to discover facts, create ideas, buy and sell things, and forge new relationships at a speed and scale that was unimaginable in the analogue era. These connections transform presidential elections, overturn authoritarian regimes, power huge businesses and enrich our social networks.” This explosion of information, coupled with the ability of every internet user to become an armchair philosopher, scientist, politician, or sports coach, starts to ring alarm bells if you step back and succeed in suspending your instinctive trust in free speech. Our natural leanings towards trusting our peers can backfire in all kinds of ways, in all areas of life.

The Escher Group in North East England conducted a detailed study last year into the habits of small businesses when they seek advice and support. As these micro enterprises are the backbone of the U.K.’s economic revival, and there are hundreds of public sector-led initiatives in place to help them, it’s important that they access those resources to thrive and survive. However, Escher’s results showed that 98 percent of respondents do not trust the public sector to help them with their business; the first point of reference is usually their peers. Think about your own first port of call: it’s a natural human reaction to ask the people you think will empathize with your problems. Unfortunately, this means that solid, verifiable business advice isn’t always filtering down to the people who need it. The noise added by personal opinion, anecdotal evidence, and online publication of unverified documents full of inaccuracies, is a problem that needs to be addressed. Ultimately, free speech is trusted over ‘official’ information because of a perceived lack of empathy, to the detriment of all.

Sharer beware!

Although Tim Berners-Lee goes on to say that “social networks are interesting …they give us a custom view, a manageable and trusted slice,” this is the trust at the root of the perpetuation of internet hoaxes. I regularly find myself pointing friends and family to references on Snopes, ThatsNonsense and other sites when they unthinkingly share a dramatic, but uncorroborated meme, which seems to align with their own views. (For other useful ways to clean up your friends’ social feeds, check out Pete Brown’s comprehensive guide published in Australia’s The Conversation – Six easy ways to tell if a viral story is a hoax.)

They are exercising their freedom of speech rights by sharing hoax memes: that is to say, they are expressing what they believe, and they have the right to do so whether the reader finds it distasteful or not. However, it’s the detail behind what they share that is of concern. Our freedom of expression may be compromised by the dubious veracity of the content we share.

The regulation dilemma: who polices free speech?

The Arab Spring demonstrated the powerful, positive use of the Web to spread messages and “overturn authoritarian regimes,” as Berners-Lee describes. However, we are now seeing the powerful, negative use of the Web as the Daesh movement (ISIS) overturns democracy in favor of its own brutal, authoritarian regime. The Brookings Institution published a study earlier this year of Daesh social media activity, identifying at least 46,000 Twitter accounts firing out around 100,000 tweets a day. There is fighting both on the ground and in the digital space, as the propaganda war is waged alongside real bloodshed. Even the argument over the movement’s name is telling – there is a push to remove the pseudo-authoritative title of ‘Islamic State’ used in the West, in favor of Daesh, as it is referred to in the rest of the world.

The internet is the home of free speech, but there are conflicting views and reports: who do you trust? Sharing propaganda is a valid expression of free speech (subject to laws against inciting hatred, of course) and trust in it rests with our individual judgment of the source’s alignment with our values and beliefs. In the U.K. we have seen the decision making that comes from misplaced trust, with families crossing the Turkish border to Syria, while refugees pour out across the same border towards Europe.

Much of the Daesh publicity is sent out as heavy bursts of tweets to build trends, with interaction between the supporting accounts, but very little outside. However, evidence is growing of a far more complex manipulation of free speech online, from an experienced propaganda machine. The recent infiltration of a Russian ‘Troll Factory’ by investigative journalist Lyudmila Savchuk has exposed a more intricate and far-reaching Web of subtle coercion. This activity is in addition to the now-familiar ‘Twitter-bot’ strategies: internet researcher Lawrence Alexander’s study identified 17,650 Russian accounts operating in a similar way to the Daesh machine.

Savchuk’s article in the Telegraph talks of not only phony social media accounts but also blogs, forum participation, and responses to online journalism. She was part of a special unit of “people pretending to be individual bloggers – a fortune teller, a soldier, a Ukrainian man – [and] had to, between posts about daily life or interesting facts, insert political reflections.” Developing a fake source to this level of detail and constructing believable back-stories reinforces the impression that propaganda is in fact the free expression of independent peer opinion, and strengthens misplaced trust.

The troll factory activity was not restricted to Russia. The Guardian had long held suspicions that its online comments section was being trolled. Their moderators, who deal with 40,000 comments a day, believed there was an orchestrated pro-Kremlin campaign. Once again, this campaign played on our trust by apparently expressing independent reaction to media reports.

Restricting freedom of expression is in the realm of dictatorships and censorship – but does the corruption of freedom of expression merit a system of regulation? The techniques and the intensity of online propaganda are such a concern that in 2013 the European Union set aside $3 million to tackle Eurosceptic trolling in the run-up to the European elections. It’s a never-ending battle to present a balanced view; free speech is compromised at every turn. So who decides what is ‘positive’ free speech, and what is ‘negative’? The Brookings study neatly summarizes the problem: “Regulating ISIS per se presents very few ethical dilemmas, given its extreme violence and deliberate manipulation of social media techniques,” the study reads. “However, the decision to limit the reach of one organization in this manner creates a precedent, and in future cases, the lines will almost certainly be less clear and bright.”

The selection dilemma: where do we place our trust?

In the face of such manipulation, where do we place our trust? Research scientist Peter Gloor’s Collaborative Innovation Networks (COINs) theory talks about the ‘Crowd,’ the ‘Experts’ and the ‘Swarm.’ Following the experts or the crowd may not give you the right result, but narrowing your sample to reflect your situation, identifying the right ‘Swarm,’ can do so. Similarly, Tim Berners-Lee’s comment that we are comfortable with “a manageable and trusted slice” of information underlines the necessity of finding the right ‘swarm’ to reach an appropriate consensus.

Do we therefore retreat into our chosen communities to reduce the noise? This presents its own dilemma. It’s possible for a population of non-mathematicians to achieve the consensus that 2+2=5. The consensus diverges from mathematical principles; it has been corrupted by false assumptions. The consensus reached by families travelling towards Syria diverges from the reality that is causing millions to flee; it has been corrupted by online propaganda. Philosopher Jürgen Habermas, a proponent of consensus reality as a model of truth, refers to the ‘ideal speech’ situation where there are no external and coercive influences. Can you restrict your community to eliminate those influences, reaching a reliable consensus among knowledgeable peers – or does this selection corrupt, implicitly? Free speech may be unreliable in a self-selecting community.

Regulation, selection – or skepticism?

So where does the answer lie? Regulation comes with extraordinary volumes of ethical baggage: who watches the watchmen? Selection has its place for expert consensus, but who decides the makeup of the community? Ultimately, I believe we all have a responsibility to champion our rights to free speech, while suspending our instinctive trust and exercising a healthy level of skepticism.

 

Kate Baucherel

Kate Baucherel BA(Hons) FCMA is a digital strategist specialising in emerging tech, particularly blockchain and distributed ledger technology. She is COO of City Web Consultants, working on the application of blockchain, AR/VR and machine learning, for blue chip clients in the UK and overseas. Kate’s first job was with an IBM business partner in Denver, back when the AS/400 was a really cool piece of hardware, and the World Wide Web didn’t exist. She has held senior technical and financial roles in businesses across multiple sectors and is a published author of non-fiction and sci-fi. Find out more at www.katebaucherel.com

The Ethics of Social News Gathering

 

Though user-generated content (UGC) is now a standard part of the journalist’s newsgathering toolbox, the ready availability and proliferation of such content—and our bottomless appetite for breaking news—introduce a familiar ethical dynamic in the modern newsroom. However, rather than inhibiting the journalistic project, these evolving ethics of social newsgathering may also empower the voice of the raucous court of public opinion, which has historically kept the press in check.

The increased proliferation of UGC in news media is a byproduct of our culture’s increasingly symbiotic relationship with social media. The best social media platforms allow for a customized newsfeed that effectively blends current events with status updates, food pictures, and smarm. In other words, our ubiquitous newsfeed is a digital age, individualized evolution of the broadsheet. Although within these feeds there is enough context to distinguish a CNN headline on Syrian refugees from pictures of your aunt’s vacation to Denver, the fact that these two messages—one ‘newsworthy’ and one social—share the same visual real estate points to an emerging precedent and expectation for UGC and news media: The news must be relatable, relevant, and—most importantly—it needs to come from someone like us, not them.

This much is confirmed by the Online News Association (ONA), which last year was the first group to formally engage with the ethics of this new paradigm in the two-part article, “Social newsgathering: Charting an ethical course.” Regarding newsgathering, the authors write, “You’d better leave lots of room for social tools, given the powerful role social newsgathering now plays in discovering important information and content, especially when news breaks where there isn’t a professional journalist in sight.” On one hand—depending on who you talk to—the increasingly powerful role of social newsgathering has put professional journalists in a precarious position (just ask anyone who used to work for the Chicago Sun-Times). But on the other hand, UGC allows us to see, hear and learn things we might have otherwise only gathered through hearsay back in the days of print. There are certainly rightful grounds for the news utility of UGC (and the corresponding reduction of overhead a la the Sun-Times).

For its first look at the ethics of social newsgathering, Eric Carvin and Fergus Bell of the ONA—both social media editors at The Associated Press—formed a social newsgathering working group that cooperatively identified the five key ethical challenges of social and digital newsgathering. These challenges include verification and accuracy, contributor safety, rights and legal issues, social journalist wellbeing, and the less obvious issues of workflow and resources (i.e. “How does a newsroom with tight resources develop the expertise to make strong ethical decisions about social newsgathering?”). While these are worthwhile exercises, in many ways they simply restate the ethical challenges faced by journalism since the dawn of the free press: Can these facts be verified? Will someone be put in harm’s way if this information is shared? Are we breaking any laws? How do we know if what we’re doing is the right thing?

These and other ethical questions are, of course, essential for good journalism. But what these conversations leave out is that, despite the nobility of such ethical pontification, they take place under the umbrella of commerce that is at once amoral and discriminating. You can’t run a profitable paper without selling a few ads. And isn’t that the whole reason ‘news’ exists, anyway—to make money? Idealists say no, but without the news media industry, there would be no news—and without financing, there would be no news media. But the nature of UGC and the tendency of the digital age to subvert once-privileged news access undermine the business precedents—and associated ethical contexts—of traditional news media. It’s a double-edged sword.

One recent corrective on UGC and the ‘business’ of news comes again from Fergus Bell who, in a keynote address at the news:rewired 'in focus' conference this past October, said: “There are ways to be competitive and ethical at the same time. I think that it requires the industry to work together. There are certain standards that we can come to – just because this is new, it doesn’t mean that we can’t get together and talk about it.” As Alli Shultes reports at Journalism.co.uk, the focus of the conversation on UGC and journalism should be sustainability. For Shultes, this means “building confidence so that newsrooms and journalists continue to be trusted to handle UGC in an ethical and professional manner.” But it also means sustaining business, audiences and reliable reportage, which is the essential product of the news media industry. On a basic level, Bell has a method for fostering this sustainability via UGC:

- Find the earliest example

- Check the source’s history

- Ask the source about the information/image

- Verify the source

- Secure permission for the AP to use it

- Compare the content (with other images that might date it)

- Verify the content

Because UGC is now a journalistic standard, and because the presentation of this standard is both received and combined with our personalized, microcosmic social news, Bell’s ethical methodology has value for journalists and newsrooms as well as for the users themselves. The ubiquity of UGC means that users are now more explicitly contributing to the news generation and collection process, whether we know it or not. A haphazard tweet or Instagram can be front-page news, despite the generally self-serving intentions dominating our online lives. In this way, there is at least a baseline moral incentive for users to be aware of the ethical dynamics of newsgathering, especially as the lines between laity and industry continue to blur.

At one time, the hard line between the public and the press—while perhaps more beneficial to business—nonetheless gave the public a certain power over the press. If we don’t like what you’re printing, we’re not going to buy it. And if we really hate it, we will boycott it or do everything we can to put you out of business. A democratic system that values free speech grants sway to the majority; this is the Court of Public Opinion. And as the digital conversation of social media has empowered the voice of the laity, it has likewise changed the dynamics of this Court. The UGC that is changing the dynamics and ethics of newsgathering and journalism is (or can be) the same content that serves to hold the press accountable. And in situations where the press is state controlled, social media as UGC even has the capability of inspiring revolution (as in Egypt in 2011).

With UGC now a newsroom standard, it’s important to update the ethics of newsgathering and journalism. By that same token, it’s perhaps even more important to understand the commercial context from which these ethics emerge. For UGC has transformed not only the newsgathering process but also the industry, which has historically commanded the mechanisms of this process. In this new milieu there emerges a responsibility that both journalists and ‘users’ would do well to apply. It makes for better news, reliable content, and—even though we hate to say it—a better bottom line.

 

Benjamin van Loon

Benjamin van Loon is a writer, researcher, and communications professional living in Chicago, IL. He holds a master’s degree in communications and media from Northeastern Illinois University and bachelor’s degrees in English and philosophy from North Park University. Follow him on Twitter @benvanloon and view more of his work at www.benvanloon.com.

European Union Authority Releases Opinion on Privacy and Ethics

 

On September 11, 2015 the European Data Protection Supervisor (EDPS) Giovanni Buttarelli released an Opinion about digital privacy and dignity entitled Towards a new digital ethics. The document was published in an effort to encourage open discussion about privacy concerns facing the European Union and to emphasize that regulations should focus on preserving human dignity. The Opinion outlines technologies the EDPS believes to pose the greatest threat to privacy and discusses the entities responsible for preventing infringement. It also announces his plans to create an Ethics Board responsible for analyzing the ethical effects of defining and using private data.

The EDPS is an independent supervisory authority, appointed by the European Parliament and the Council in 2014, charged with advising the European institutions and bodies on privacy legislation and with cooperating with other authorities to ensure that personal data is protected. Increasing concerns about the proliferation of privacy-threatening technology drove the EDPS to release a statement on the relationship between ethics and digital security. This Opinion is a follow-up to the EU Data Protection Reform, and further opinions are expected as the EDPS works on his five-year strategy to constructively improve and monitor data security.

Much of the Opinion outlines the modern digital technologies the EDPS believes pose the biggest threat to handling private information ethically. In addition to examining current innovations, Buttarelli extends his analysis to the potential for security breaches in up-and-coming developments. Big data, the Internet of Things (IoT), cloud computing, data business models and autonomous devices were of particular interest. The EDPS expressed his concern that personal, sometimes inaccurate, information collected using these technologies is being used to establish profiles that undermine our dignity and make us vulnerable to discrimination.

Big data, the practice of collecting large volumes of data from various sources and processing it with the help of algorithms, poses a significant obstacle to preserving privacy. In addition to its better-known use, personalized advertising, big data also informs weightier decisions such as setting insurance rates and approving loans. The EDPS worries that business models that rely on summarizing a person by pulling information, especially from sources unknown to the individual, undermine that person’s dignity. Among Buttarelli’s goals is curtailing the practice of reducing people to data, and establishing steps to regulate big data is part of that process.
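
To make that concern concrete, here is a minimal, purely hypothetical sketch of the kind of profiling the EDPS describes: records from unrelated sources are joined on an identifier and collapsed into a single score. The data sources, field names and weights below are invented for illustration and are not taken from the Opinion.

```python
# Hypothetical illustration only: how records from unrelated sources
# might be joined on an identifier and collapsed into a single "risk
# score". The sources, field names and weights are invented and are
# not taken from the EDPS Opinion.

purchases = {"alice@example.com": {"late_night_orders": 14}}
fitness_app = {"alice@example.com": {"avg_daily_steps": 2300}}
public_records = {"alice@example.com": {"zip_code": "60601"}}

HIGH_RISK_ZIPS = {"60601"}  # an invented proxy variable

def risk_score(email):
    """Collapse several data sources into one number about a person."""
    score = 0
    if purchases.get(email, {}).get("late_night_orders", 0) > 10:
        score += 2
    if fitness_app.get(email, {}).get("avg_daily_steps", 0) < 5000:
        score += 3
    if public_records.get(email, {}).get("zip_code") in HIGH_RISK_ZIPS:
        score += 5  # the individual may never have consented to this inference
    return score

print(risk_score("alice@example.com"))  # -> 10: a person reduced to a data point
```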

Another trend that warranted attention was the Internet of Things (IoT): networks of devices that remotely collect and exchange information. Some gadgets that make up the IoT network are extremely beneficial to human wellbeing, but their services rely on private information. Wearable health monitoring devices and accident prevention technology that fall under the IoT umbrella can potentially save lives, but they also rely on gathering, storing and transferring personal information. Advanced collection technologies, such as heat sensors and authentication applications, are often used in combination with cloud storage to carry out IoT processes. The collected data may include IP addresses, passwords, health conditions and physical locations: details that must be handled in an ethical way.

The Opinion suggests that obtaining access to such data puts users at risk of stereotyping, especially in the health and auto industries. Nevertheless, the EDPS maintains that it is possible for security checks to protect the dignity of the public without stifling remarkable innovation. He plans to bring experts together to discuss what steps can be taken to guard data collected by IoT devices without hindering their usefulness.

While he did not offer sample legislation to counter threatening trends, the EDPS did identify technologies that warrant further analysis. The Opinion served primarily as a jumping-off point for further, more specific discussion about security regulation. Equipped with information about technologies of concern, business leaders and IT technicians could work together with legislators and privacy experts to propose ethical solutions.

In addition to highlighting areas of concern, the EDPS identified parties that must be held accountable for securing privacy measures. He indicated that an ‘ecosystem’ made up of legislators, corporations, IT developers and individuals was responsible for maintaining ethical privacy standards.

Not exactly known for shouldering the responsibility for company ethics, IT developers were challenged to seek out solutions to digital privacy concerns. In particular, developers were asked to implement personalization tools to safeguard private information in devices and networks. According to Buttarelli, technological design decisions should “support our values and fundamental rights.” He suggested that further research on privacy and auditing technology would play a role in achieving these goals.

Businesses that utilize private data were naturally directed to use such information for necessary functions only. This was the most challenging of directives, as utilizing personal content in a variety of ways is often a part of company business models. It will be interesting to see whether discussions the EDPS hopes to initiate will produce realistic alternative profit models and suggestions for circumventing personal data usage.

Nevertheless, Buttarelli stressed that businesses should be using private information to meet clear objectives. He also called on them to enforce strict and clear auditing procedures that involve oversight by independent regulators. The EDPS suggested that corporations implement auditing regulations, introduce audit certifications, and set up company codes of conduct.

Without fear of repercussions, it is unlikely that companies will make privacy a top concern, especially in cases where it interferes with profits. That is why legislators were named as a part of the ecosystem responsible for ensuring information security and personal dignity. It is worth noting that EU laws already prohibit the use of information in unlimited ways, even in cases where individuals offer full consent. Therefore, EU legislators have already set in place basic data protection regulations they can build upon after more direct EDPS protocols are proposed.

Buttarelli asked that IT developers, businesses and legislators ensure privacy and offer clear guidance to those who do not understand how their data is collected and used. But he did not ignore the responsibility of individuals to monitor their own behavior and to make sure their information is gathered properly and used correctly. According to the EDPS, “individuals are not passive beings requiring absolute protection against exploitation.” He cited research suggesting that inaccurate information is not uncommon in credit reports, and he directed individuals to challenge questionable entries that might lead to discrimination. Consumers who are unsatisfied with corporate services can pressure businesses to step up by shopping around and purchasing from more reputable companies.

The parties in Buttarelli’s ecosystem each have unique motives for managing private information, but according to the EDPS, one factor must underlie all of their goals: human dignity. It is difficult to create a one-size-fits-all plan for processing private data because future technology is not easy to police; even the most thorough analysis of trends that may evolve into privacy threats cannot produce regulations that anticipate every problem.

What can guide leaders, developers and the public in maintaining ethical approaches to data privacy is ensuring that new technologies and regulations uphold an individual’s dignity. In other words, producers and processors of technology should regularly ask themselves whether they are using private data in ways that can lead to stereotyping, stigmatization or exclusion. They should also consider whether personal data is being used as a profit-making tool rather than as an essential part of the service they provide. If dignity is compromised in exchange for technology, we should debate whether that technology is worth the price.

Despite his concerns over unethical use of private data, the EDPS remained optimistic about the EU’s ability to preserve privacy without stifling innovation. He challenged developers to create technology that limited the ability to single out individuals and to concentrate on methods of collecting unidentifiable data, assuming the private information was necessary in the first place.

To help him evaluate how a balance between human dignity, innovation and business models can be achieved, the EDPS announced he would establish an Ethics Advisory Board in the coming months. The board will include several experts in the fields of technology and economics. Given the board’s focus on ethics, members will also include specialists who can provide information about the social implications of privacy risks. Among them will be sociologists, psychologists and ethics philosophers. When needed, additional authorities will be invited to weigh in on solutions to privacy obstacles and their ethical compromises.

An ethics board proposal and the list of threatening technologies and responsible parties were all laid out to establish a framework for further discussion. The Opinion of the EDPS echoed his belief that personal dignity did not have to hinder innovation as long as the EU challenged itself to come up with solutions that prioritized dignity in technology. According to the EDPS, now is a prime time for the EU to adopt a fresh approach to handling private data in an ethical manner. Although technological trends are not all predictable, proactive collaboration between experts and the public can ensure that security and dignity will become an integral part of future developments.

 

Paulina Haselhorst

Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.

 

Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World

 

Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World.

by Bruce Schneier

Review by: Owen King

We constantly interact with computers.

Computers generate data.

Data is surveillance.

Surveillance curtails privacy.

Privacy is a moral right.

This combination spells trouble, and we better think about how to deal with it. This, in essence, is the message of Bruce Schneier’s book, "Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World", published earlier this year.

Schneier is a well-known expert on computer security and cryptography. But "Data and Goliath" is not a technical book. Instead, it presents an analysis of the social and ethical implications of new information technology, and unabashedly offers prescriptions for reform. Though Schneier’s analysis is both striking and informative, his prescriptions and justifications are less compelling. Nonetheless, even these latter elements serve Schneier’s overarching purpose of advancing the conversation about the prevalence of surveillance and the role of big data in our lives.

"Data and Goliath" has three parts. Part One, “The World We’re Creating,” is about the state of information technology and how governments and other large organizations use it to monitor us. Part Two, “What’s at Stake,” articulates values, including liberty, privacy, fairness, and security, and argues that the technology described in Part One threatens each of these. In response to these concerns, in Part Three, “What to do About It,” Schneier lays out his prescriptions for governments, corporations, and individuals.

Part One is the most successful of the three. Schneier depicts current technology and its capabilities in a way that should impress any reader, even those who understand the technology only superficially. The crucial idea is that the massive data sets involved in what is now known as “big data” are the quite natural, perhaps inevitable, result of computation, and these data sets constitute a practically unlimited source of personal information about us. This data is generated as we interact with electronic devices that are now completely ordinary parts of our lives: our phones, our televisions, our cars, our homes, and, of course, our general-purpose computers. This occurs with every tap on a mobile phone app, every web page request, and so on; all these interactions create data. Hence, Schneier urges us to think of data as a by-product of computing.

Regarding the potential usefulness of big data, Schneier remarks, “That’s the basic promise of big data: save everything you can, and someday you’ll be able to figure out some use for it all.” Combining this remark with the point about data as a by-product of computing, we might formulate an especially informative definition of big data as follows: Big data encompasses the large, continually growing data sets that are a by-product of computation, along with the methods and tools for collecting, storing, and analyzing them, such that the data can be used for purposes beyond those that guided its collection. This conception, implicit in Schneier’s discussion, answers both the why? and the for what? of big data. In contrast, the standard definitions of big data put in terms of “Three V’s” or “Four V’s” or however many V’s merely list the general characteristics and uses of big data systems.
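
A rough sketch may help make that definition concrete (the illustration is mine, not Schneier’s): an ordinary application quietly accumulates interaction records as a side effect of doing its job, and those records can later be queried for purposes nobody had in mind when the logging code was written.

```python
import time
from collections import Counter

# Sketch: every ordinary interaction leaves a record behind.
event_log = []  # the "by-product" data set, growing with each call

def handle_tap(user_id, screen):
    """Do the app's real work; the logging is incidental to it."""
    event_log.append({"user": user_id, "screen": screen, "ts": time.time()})
    # ... the feature the user actually asked for would run here ...

# Ordinary use of the app generates data as a side effect.
handle_tap("u42", "home")
handle_tap("u42", "pharmacy_search")
handle_tap("u42", "pharmacy_search")

# Later, the same by-product data is queried for a purpose nobody
# planned when the log was created: inferring a user's interests.
interests = Counter(e["screen"] for e in event_log if e["user"] == "u42")
print(interests.most_common(1))  # [('pharmacy_search', 2)]
```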

This puts us in a position to see why big data is such a big deal. The data constantly rolls in, and with some creativity, along with the possibility of joining data sets from different sources, it can shed light on the minutest details of our lives. (Schneier provides many clear examples of this.) The attractions for intelligence agencies and corporate marketing departments are obvious, and we are increasingly living with the results. Over the six chapters in Part One, Schneier describes the developments of the consequent big data revolution in gruesome and captivating detail. Most readers will come away convinced that their lives are far less private than they thought. Reading these chapters would be worthwhile for anyone with even minimal curiosity about the political, economic, and social effects of technology.

Part One is likely to induce at least a vague feeling of fright or anxiety in most readers. Part Two attempts to justify this uneasiness by explicitly appealing to ethical principles and values that big data seems to threaten. A highlight is the chapter called “Political Liberty and Justice,” which emphasizes the so-called “chilling effects” due to ubiquitous surveillance. Schneier compellingly explains how constant surveillance may dissuade us from engaging in some of the morally permissible and, in many cases, legal activities we would otherwise choose. Surveillance thus inhibits us, effectively reducing our liberty. As Schneier recognizes, this was the idea behind philosopher Jeremy Bentham’s famous panopticon—a prison designed to ensure compliance and conformity through (at least the appearance of) constant surveillance. Other useful observations come in the chapter on “Commercial Fairness and Equality,” in which Schneier points out ways in which surveillance through big data facilitates discrimination against individuals or groups.

Unfortunately, the weakest chapters of Part Two are those on which the most depends—viz., the chapters respectively entitled “Privacy” and “Security.” In order to justify his eventual prescriptions for limiting the collection and use of big data, it is crucial for Schneier to show that current big data policies are incompatible with a valuable sort of privacy, and furthermore that the losses in privacy due to big data are not outweighed by the increased security it helps provide. Schneier’s book falls short of a convincing case for either of these claims.

Schneier’s treatment of privacy is provocative, but it will likely be unconvincing to anyone not already on his side. Schneier’s view is that surveillance constituted by massive data collection—regardless of how the data is eventually used—is a serious privacy violation and, hence, constitutes harm. But Schneier does not theorize privacy in enough depth for us to see why we should agree.

Unsurprisingly, there already exists an extensive literature—spanning law, philosophy, and public policy—on privacy and information technology. Though much of that literature is not informed by the level of technical knowledge Schneier possesses, it offers some theoretical nuance that would have helped Schneier’s case against surveillance. If bulk data collection by computers is indeed a privacy violation, it is quite different from, say, an acquaintance listening in on your phone calls. Some state-of-the-art work on privacy, which dispenses with the public/private dichotomy as a tool of analysis, would put Schneier in a better position to address this. For instance, Helen Nissenbaum’s theory of privacy as contextual integrity understands privacy concerns in terms of violations of the various norms that govern the diverse social spheres of our lives. Such a theory may provide resources to better distinguish surveillance that is genuinely worrisome from more benign varieties. Schneier pays lip service to Nissenbaum’s idea that privacy concerns depend on context, but this acknowledgment is not reflected in his scantily justified, though adamant, insistence that surveillance through massive data collection is itself a violation of human rights.

Regardless, technologists, philosophers, and other thinkers, all should put more thought into the open question of how our concern for privacy bears on massive data collection practices. It is no criticism of Schneier to say that he has not resolved this issue. However, without more headway here, some of Schneier’s policy prescriptions are less than convincing.

Like his treatment of privacy, Schneier’s discussion of security feels underdeveloped. Schneier’s central claims are that privacy and security are not in tension, and that mass surveillance does little to improve our security. On the former point, he makes some interesting observations about ways in which privacy and security can be mutually reinforcing. Also convincing is his explanation of why designing computer systems to allow surveillance makes them less secure. Furthermore, Schneier nicely lays out a case for the increasingly accepted claim that mass surveillance is not very effective in predicting acts of terrorism. But these points do not suffice to show that pitting privacy concerns against security concerns imposes “a false trade-off.”

Now, Schneier is indeed right to argue that predicting acts of terrorism with enough precision that we can stop them before they occur is nearly impossible. Schneier cites three factors to account for this: First, predictions culled from mining big data have a high rate of error. Second, acts of terrorism do not tend to fit neat patterns. And, third, terrorists actively attempt to avoid detection. We can accept this and still wonder: What about the would-be terrorist who wants to hurt people but also wants to avoid being caught? The more data we have collected and stored, the harder it is for anyone to do anything without leaving a digital trail. Thus, big data enhances our forensic capacities. And, because of this, mass surveillance may have the effect of deterring would-be terrorists. Furthermore, in the event of a terrorist act, a large intelligence database makes it easier to discover any infrastructure—whether technological or social—that the terrorists left behind, and finding this may help prevent future attacks. Whether these benefits are enough to justify the mind-bogglingly extensive intelligence programs revealed by the Snowden documents is a further question. The present point is simply that Schneier has not addressed all of the security-related reasons that might lead one to favor mass surveillance, at the expense of some kinds of privacy.

Arriving at Part Three, we find a laundry list of proposals for reform, all consonant with the ethical outlook espoused in Part Two. And it does read more like a list than like a unified platform. Most of the proposals receive only a page or two of discussion, which is not enough to make convincing cases for any of them. But that is not to deny the value of this part of the book. Like-minded readers (and even many dissenters) will peruse Schneier’s prescriptions with interest, finding in them possibilities worthy of more thorough scrutiny, development, and discussion. This may be just what Schneier intends; those are the discussions he hopes we will be having more often.

Schneier’s list of proposals includes reorganizing the U.S. government’s intelligence agencies and redefining their missions, increasing corporate liability for breaches of client data, creating a class of corporate information fiduciaries, and encouraging more widespread use of various privacy-enhancing technologies, especially encryption. This is only a fraction of the list, and, again, many of the ideas deserve to be taken seriously.

Schneier’s proposals prompt us to think in concrete ways about the costs and benefits of big data in the present and for the future. Big data promises huge gains in knowledge, but sometimes at the expense of a sort of privacy Schneier considers indispensable. Despite the many trade-offs, at least one of Schneier’s proposals ought to be a fairly easy sell for most people. That is the push for more widespread use of encryption.

Nothing about the use of encryption inherently precludes the existence of the continuously accreting data sets that characterize big data. If I am shopping on Amazon’s website over an encrypted connection, Amazon can still collect data about every product my pointer hovers over and every page I scroll through. It is just that third parties cannot see this (unless they are granted access). Such encryption is, of course, already standard for web commerce. But, in principle, all of our digital communication—every text, email, search, ad, picture, or video—could be encrypted. Then, at least ideally, only those whom we allow could collect our data. Thus, we gain some degree of control over how we are “surveilled” without severely quelling big data and its benefits. Of course, this would make it much harder for intelligence agencies to keep tabs on us, which is why some government leaders wish to limit encryption.
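
A small sketch may help separate the two ideas; it is my own illustration, not Schneier’s, and it assumes the third-party Python cryptography package, whose Fernet recipe stands in here for the transport encryption used on the web. Encryption hides the traffic from onlookers on the wire, but the party you are talking to still decrypts, reads, and can retain everything.

```python
# Sketch only: Fernet (from the third-party `cryptography` package) stands
# in for the transport encryption used on the web. The point is that
# encryption hides traffic from onlookers, not from the party you talk to.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in HTTPS this role is played by the TLS session keys
channel = Fernet(key)

browsing_event = b"hovered: noise-cancelling headphones, 02:13"
ciphertext = channel.encrypt(browsing_event)

# An eavesdropper on the wire sees only ciphertext...
print(ciphertext[:16], b"...")

# ...but the retailer holds the key, decrypts the event, and can log it.
retailer_log = [channel.decrypt(ciphertext)]
print(retailer_log)   # the data set keeps growing, encryption or not
```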

Overall, the strength of just Part One of "Data and Goliath" is enough to make this book worthy of an emphatic recommendation; it offers a stunningly rich understanding of the possible applications of big data and a visceral sense of some of its dangers. The other two parts are also stimulating, and they provide a helpful starting point for responding to the issues raised in Part One. The appeal of the book is broadened further by the extensive notes, which make it a valuable resource for academic researchers.

Of course, books on technology and society rarely stay relevant for long, since the technology advances so quickly. In spite of this, due to its detailed exposition and the pointed way it frames choices about our relationship to big data, "Data and Goliath" should be quite influential for the foreseeable future.

 

Owen King

Owen King is the NEWEL Postdoctoral Researcher in Ethics, Well-Being, and Data Science in the Department of Philosophy at the University of Twente. His research is primarily focused on well-being, from both theoretical and practical perspectives.  He also investigates ethical issues raised by new computing and data technologies.

 

Crowdsourcing and Campaign Reform

With the 2016 presidential elections now on the horizon, there’s no escaping ‘the campaign’—whoever’s it might be. The proliferation of political advertising across all media, from billboards to iPhones, subjects the voting public to a constant barrage of targeted advertising. The strongest campaigns, in an attempt to master cross-platform messaging, have been making innovative strides in marketing and media, which certainly adds a sense of novelty to an otherwise staid political contest. One example is the gradual introduction of digital crowdsourcing into the political process. On its surface, crowdsourcing is an essentially democratic notion, though beneath that surface it remains exclusionary by nature. As many see it—including attorney, activist, and Harvard Law Professor Lawrence Lessig—the tricks of the contest may be new, but the game is the same…and it’s only getting more expensive to play.

Consorting with Black Hats and Negotiating with Cybercriminals: The Ethics of Information Security

 

October is National Cyber Security Awareness Month, and the fact that it’s a month instead of a day speaks volumes about the growth and prevalence of cyber crimes.

Google Dominance or Information Age Misunderstanding?

 

Google. A brand so enormous, it’s a verb. We endlessly Google, but we never Yahoo or Bing. And many of us have never even heard of competing search engines like Gigablast, Yandex, or Qwant. This enormity is the problem—the massive Google has seemingly shouldered out most of the competition, leaving the remaining few to languish in the shadow of its greatness. Google’s rapid surge from startup to internet superpower has drawn the eye, and ire, of competitors and legislators alike.

Bridging the Digital Divide

 

With computer literacy becoming an increasingly important skill in college and the workforce, middle schools and high schools across the nation must prepare students to meet modern expectations. This is a major challenge for underfunded districts where money is sparse and the costs of equipment, high-speed Internet and training seem out of reach. But the digital divide, the gap between those who have access to computer technology and those who do not, will not go away without school investment. Although funding for critical programs is often at stake, school district representatives must decide whether it is their ethical responsibility to integrate computers into their curricula. Due to the growing digital access gap, the answer to that question should be a resounding ‘yes.’

Trusting free speech online

 

There is an instinctive, basic trust within all of us, something that is essential to the ease with which we move through our daily lives. Think for a moment about what you trust. I trust that the alarm will wake me on time each day. Making my morning coffee, I trust that the utility companies continue to provide electricity, that what is labeled as coffee is genuinely coffee, and that the cup will not break. Think of the sense of utter betrayal when one of those conditions is not fulfilled! I still remember rushing to get up when a power failure reset the alarm, breaking my favorite mug on my 18th birthday, and the time the good coffee ran out on the morning of a big meeting.

The Ethics of Social News Gathering

 

Though user-generated content (UGC) is now a standard part of the journalist’s newsgathering toolbox, the ready availability and proliferation of such content—and our bottomless appetite for breaking news—introduces a familiar ethical dynamic in the modern newsroom. However, rather than inhibiting the journalistic project, these evolving ethics of social newsgathering may also empower the voice of the raucous court of public opinion, which has historically kept the press in check.
