Loyola University Chicago

School of Communication

Welcome to the Team, Daniel Trielli!

By Genevieve Buthod

In the “Welcome to the Team” series, we interview newly hired faculty and staff members at the School of Communication. This edition features Assistant Professor of Multimedia Journalism Daniel Trielli. Professor Trielli defended his PhD dissertation in Media Technology and Society at Northwestern University’s School of Communication in June 2022. His research focuses on how search engines interact with the news, particularly the results that search algorithms surface (or hide), and what accountability that interaction may entail.

What first made you interested in joining the School of Communication at Loyola University Chicago? What are you most looking forward to about this coming academic year?

What I’ve always heard about Loyola University Chicago is that students here are motivated to learn and to change the world. The world is in a challenging time and we need people who want to change it. I’m a firm believer in that world-changing mindset as a key element in journalism. We’re here to make society better. I thought this would be a great place to work because of the incredible research the faculty does, the great students, and the fact that it’s in a city that I love. It matches everything that I wanted.

This is my first year as an Assistant Professor, so I’m interested to meet students and see what their goals and aspirations are. I’m eager to get the ball rolling and talk about the topics that interest them. I have techniques that I teach about data journalism. The good thing about that is that there is data about everything, so I want to see what students want to investigate and to help them. I’m also eager to continue my research investigations.

What drew you to the field of journalism? Can you tell me about your time as a journalist at national and local news organizations in your native country of Brazil?

I grew up in Santos, Brazil. Since I was a kid, I always liked figuring things out. It came from the joy I had of not really knowing how things worked, but being able to go and figure it out. I like discovering things. And that’s what journalism is about: explaining mysteries. What do we expect from society, how does it fail, how does it work? Exploration always interested me.

I moved to São Paulo halfway through university. When I was at my school paper, Brazil’s then-president was based in our area of São Paulo. I had the chance to do stories of national interest. Through that, I got an internship at a local news organization, and then I got hired there. I went into editing and copy-editing, and gravitated toward data journalism. Then I moved to a national news organization.

Can you talk about the differences you encountered when making the switch from local news to national news stories?

With local news, we were very connected with the local community. We covered things that were part of daily life. We did stories about potholes and gas prices. When I moved to the metro section, we covered things like crime and traffic. It was connected to individuals on the street. When I moved to a national organization, still working in the metro section, the coverage moved further away from the folks in the street. We were talking about national issues, national tragedies. We talked about deep infrastructure problems in our transportation system.

But there isn’t a distinction in the actual job. You call people, you hold people accountable. In a local organization, you’re bothering the mayor. In a national organization, you’re bothering the governor or the president. One of the things that was very jarring to me when I moved to a national organization was how little I had to adjust my journalistic and reporting practices. It’s about listening to as many people as I can, trying to find the data and the story. The practice is very similar, but the stakes are a little bit different.

What interests you about how people find and consume news online? Why do you feel this subject of study is important?

I began my career in the print tradition. Print dictated how news was made. It was the prime form of journalism. It instilled quality. Now, most people don’t read print newspapers. If they do read a newspaper that started out as print, they don’t read the print version of it anymore. In my career, the journalism industry has changed a lot. The practices of the audiences have changed. That has big repercussions on society, even how we organize ourselves politically. We’re still trying to figure out the real impact of it. That is even more true in the last couple of decades, with the rise of these platforms that intermediate how people get their news; we still have to keep them accountable. It has affected how society sees itself and how the news industry functions.

I found your 2019 paper, “Search as News Curator: The Role of Google in Shaping Attention to News Information,” online. I was particularly interested in your findings about source concentration. Section 4.1.1 of your paper states that “The top 20.0% of news sources (136 of 678) account for 86.0% of all impressions… The top three, CNN, the New York Times, and The Washington Post, account for 23.0% of impressions observed.” Could you break this down for a reader who may not fully understand the implications of this finding? How does source concentration affect the consumption of news?

We found through our research that Google as a search engine tends to concentrate a lot of its attention on a handful of sources. The top three news sources in this particular measurement account for a significant chunk of the impressions observed. Almost a quarter of the attention that Google gives stories goes to these top three sources. These aren’t necessarily bad sources; in fact, they’re considered to be great journalistic sources. Still, what makes Google pick these news organizations over others? What are the implications of that market concentration? The economic benefits of being in that top category are worth discussing. Clicks are worth money. That top search result status benefits those organizations over others with clicks and traffic, and therefore money. That concentration might create some disparity in the journalistic market.
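To make the arithmetic behind those percentages concrete, here is a minimal sketch of how source concentration could be measured from a table of impression counts. The impressions.csv file and its source and impressions columns are hypothetical stand-ins for illustration; this is not the paper’s data or analysis code.

```python
# Minimal sketch: compute what share of total impressions goes to the top
# sources, given a hypothetical CSV with columns "source" and "impressions".
import csv
from collections import Counter

totals = Counter()
with open("impressions.csv", newline="") as f:
    for row in csv.DictReader(f):               # expected columns: source, impressions
        totals[row["source"]] += int(row["impressions"])

all_impressions = sum(totals.values())
ranked = totals.most_common()                    # sources sorted by impressions, descending

top_20pct = ranked[: max(1, len(ranked) // 5)]   # the top fifth of sources
top_3 = ranked[:3]                               # the three most-seen sources

print(f"Top 20% of sources: {sum(n for _, n in top_20pct) / all_impressions:.1%} of impressions")
print(f"Top 3 sources:      {sum(n for _, n in top_3) / all_impressions:.1%} of impressions")
```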

Google goes for some measurement of quality, trying to provide relevant results for the searches. But that selection, even if it is benign, has impacts on what kind of information we see and will see in the future. It also has an impact on what journalists are rewarded for writing. It has repercussions on society as a whole. It also has repercussions on the journalistic industry, because it affects what kind of journalism will thrive and survive.

I take some comfort in being able to find the names of the people making editorial decisions about the news I consume. It implies the existence of some level of accountability to the reader. Is it even possible to have accountability when the editorial decisions are made by an algorithm? What does accountability look like in this case?

That accountability is what we are striving to find. How can we keep our news accountable? It has become more opaque. Forget about news: we don’t know whether or not a TV show is a hit anymore. Netflix just tells us it’s a hit, and we have to trust that. There are fewer ways of knowing how things operate than we used to have. Researchers are trying to find ways to build accountability. We don’t have access to Google’s algorithms, but we can create methods to input data into them and extract information from the results. Then it becomes a self-reflective question: what do we expect? Algorithm accountability is not just about media consumption. It affects so many areas of our lives: what neighborhoods are more policed, where vaccines are prioritized. So many big decisions are made by automated systems that move quickly with a lot of data points. We need to be ready, as a society, to hold those systems accountable. Not just news media, but everyone else too.
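The kind of input/output auditing described here can be sketched roughly as follows: issue a fixed set of queries on a schedule, record which sources appear at which rank, and compare the logs over time. In this sketch, fetch_results is a hypothetical placeholder for whatever collection method a study actually uses (an official API, browser automation, or manual capture); no real Google interface is implied.

```python
# Rough sketch of a black-box audit loop: fixed queries in, ranked sources out,
# logged with timestamps so changes in curation can be compared over time.
import csv
from datetime import datetime, timezone

QUERIES = ["election results", "vaccine safety", "school funding"]  # illustrative queries


def fetch_results(query):
    """Placeholder: return the ordered list of result domains for a query."""
    # Swap this stub for a real collection step; the fixed list below only
    # keeps the sketch runnable.
    return ["example-news.com", "another-outlet.org", "localpaper.net"]


def run_audit(path="audit_log.csv"):
    timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for query in QUERIES:
            for rank, domain in enumerate(fetch_results(query), start=1):
                writer.writerow([timestamp, query, rank, domain])


if __name__ == "__main__":
    run_audit()  # run on a schedule (e.g., daily) and diff the logs over time
```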

I’m very interested in algorithmic accountability reporting. How can we train journalists in algorithm reporting? We need journalists to understand how algorithms affect their regular beat. If you cover education, for instance, you have to understand how algorithms affect which areas receive funding, even down to which software is used in the classroom. If you’re covering health care, you have to understand how governments use these systems to provide public health services. So, the challenge is to train the next generation of journalists on how to cover these things in an informed way.

Do you have the sense that younger students are generally more aware of the way algorithms affect so many aspects of their lives? Is this a new idea to them?

I think younger students are aware that they’re encountering algorithms more and more. At the same time, I think the idea of keeping them accountable seems a little bit daunting. The idea that they are an object to be covered might seem scary or impossible. That’s what I’ve encountered. I think one of the interesting challenges is removing the mystique of algorithms. Really breaking down what we’re talking about. Are all the systems alike? What the heck is machine learning, anyway? I think demystifying it all is the first step toward keeping them accountable for the next generation. We have to empower them to be able to cover these things.

What steps can laypeople take to empower themselves in their search for news online? Are we forever at the mercy of Google’s search algorithms for the information we can access easily? How can people be more aware of what they are not seeing when they perform a search?

Knowing and checking your sources can help. Journalists have always been trained in this. Who is talking to me? Who pays their bills? Whenever you see information online, you have to check where it’s coming from. See who is writing it. You might see it on Google, but who supplies it to Google? Platforms have different ways of showing that information. Some are more transparent than others. It’s possible with a little bit of work to find that information. There are third parties that create badges indicating whether something is a good source or not. But then you have to decide how much trust to place in those third parties. We need to keep people informed about how they receive information. The way people learn how to inform themselves doesn’t start in college. It hopefully starts in childhood. In the old days, you would go to a newsstand and choose which paper to get, based on what your parents read or what teams you root for in a specific sport. Now, that is more opaque. You don’t pick a newspaper anymore; you click a link from a curation system. What makes you click that link is a more internalized process that we need to think about.

So how do we combat this issue? How can we think critically when using online search engines?

The whole thing about websites and apps is that they want to make it as streamlined as possible. If you give a smartphone to a baby, the baby will know what to do with it. It’s designed to be easy to use. When it comes to information, particularly in this era of disinformation, there are political actors that rely on people not being properly informed. It’s bigger and faster now than it ever has been. We’re asking people to stop and think in an environment that is designed to be seamless and quick to use. How can we do this in a way that doesn’t completely throw away all of the technological improvements that we’ve made over the years? It’s a big question in how we view ourselves as a society.

I found your name in the “About” section of the fascinating Algorithm Tips blog. It led me back to the page for the Computational Journalism Lab at Northwestern University. Can you tell me about what kind of work you did or still do with the Lab?

Computational journalism is a field that aims to investigate how computational processes impact the way that journalism is produced and distributed. Those are the two big parts of it. The distribution side is what my dissertation was about. But we can also look at how computation can be used for the good of news on the production side, and at how we report on these issues in the field of journalism itself. With Algorithm Tips, the idea is that we generate a database of real-world algorithms that are being used by federal, state, and local governments in the United States. We want to make this database available for reporters to use as a starting point for their investigations, across various areas of coverage.

The way that we create this database is semi-automated. Every week, we scour government websites for specific keywords that we know can help us find algorithms. We generate potential leads that way and create a list of things that might point to a new governmental algorithm. The database grows every week, and it will continue to grow. And hopefully people will use it.
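A simplified sketch of that weekly keyword scan might look like the snippet below: fetch a set of government pages and flag those containing terms that often signal automated decision systems. The URLs and keyword list here are illustrative assumptions; the actual Algorithm Tips pipeline is more elaborate than this.

```python
# Simplified sketch of a weekly keyword scan for algorithm leads: download a
# list of government pages and flag those mentioning terms associated with
# automated decision systems. URLs and keywords below are illustrative only.
import requests

KEYWORDS = ["algorithm", "machine learning", "predictive model", "risk score", "automated decision"]
PAGES = [
    "https://www.example.gov/procurement/notices",   # hypothetical agency pages
    "https://www.example.gov/reports/annual",
]


def find_leads(urls, keywords):
    leads = []
    for url in urls:
        try:
            text = requests.get(url, timeout=10).text.lower()
        except requests.RequestException:
            continue                                  # skip pages that fail to load
        hits = [kw for kw in keywords if kw in text]
        if hits:
            leads.append((url, hits))                 # a potential lead for a reporter to vet
    return leads


if __name__ == "__main__":
    for url, hits in find_leads(PAGES, KEYWORDS):
        print(url, "->", ", ".join(hits))
```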

Can you tell me about what you are working on now, be it research or teaching classes this academic year? What can students expect from your classes this year?

One of the things I’m interested in doing at Loyola is incorporating this database into students’ work. Maybe it will lead to an interesting idea that students can use in a class project. For my research, I have a few ideas. One is continuing the algorithmic accountability studies with digital news media platforms. I want to see how that curation of news happens in different places; I want to have a more global perspective. And I want to uncover the values of the platforms. What does Google consider good journalism to be?

In terms of classes, I’m teaching two this fall. One is investigative reporting with a focus on data, where we’re going to go through the methods of developing a data-intensive reporting project. Students will be learning how to request, use, and analyze data. The other is data-powered digital storytelling, for the Digital Media and Storytelling master’s program. This one will focus on how to find stories from data. We have this view that data is math, that it is objective. That’s not true at all. There are gaps; there are reasons why data is collected that may not match our expectations of it. Anyone can extract stories from data. Whether or not you extract a correct story responsibly is the bigger question. I teach techniques, but I am also very mindful of teaching the critical elements of that work: why do we do this? What can go wrong in the way that we do this?