News & Perspectives

Journalism, Platforms & Public Good: Susan McGregor Speaks

On authentication and turning technology into a servant of the truth.
Perspective// Posted by: Jeff Greenwald / 28 May 2018

For a bird’s-eye view of how technology and journalism are co-evolving, it’s hard to find a better vantage point than Susan McGregor’s. As Assistant Director of the Tow Center for Digital Journalism (established in 2010) and Assistant Professor at Columbia Journalism School, McGregor helps supervise the dual-degree program in Journalism & Computer Science. Though she teaches primarily in the areas of data journalism and information visualization, her research extends to information security, privacy, and alternative forms of digital distribution.

----

Enter: What exactly is the Tow Center?

McGregor: Our objective is to conduct research on the current and near future of digital journalism. We’re focused on what’s happening in newsrooms, and on best practices across a whole range of digital media and digital journalism. We’re not trying to predict what’s going to happen a decade from now, or 30 years from now—we’re interested in thinking about two to five years down the line, and in seeing what’s coming down the road that news organizations should be thinking about.

Enter: How will the process of reporting the upcoming November elections differ from what we saw in 2016?

McGregor: One thing that we are going to see, obviously, is enormously more scrutiny of the messaging from and around the campaigns. Journalists are going to be eyeing social media and ad buying with much more skepticism. Since the 2016 election, but even before that, we’ve seen a decline of certain practices that were relatively prevalent in digital journalism: like using posts from social media essentially as sources, sometimes without following up with those sources in a rigorous way. I think that’s going to be one really big change. On the publishing side, publishers are really moving away from a lot of the platform tools that were out there—things like Facebook Live and Apple News. We’re going to see a lot less of that, and an increase in the kind of algorithmic, accountability-type journalism that scrutinizes how platforms and messaging operate around the campaign. And we may see a good deal more of very concrete, data-driven, transparent journalism as well.

Enter: Do you think there will be any effort to create verification systems—some kind of watermark to assure the public that their news has been vetted?

McGregor: I do think that, in general, we’re going to see an increase in credibility signaling. We’re going to see things like what ProPublica is doing as a core part of their reporting and publishing process: very detailed and robust methodologies, especially on investigative work. “This is how we did this, these are the choices that we made, these are the things that we looked at, these are the things that we discarded for these reasons.”

Enter: Let’s take it back to the source material. We’re in an age when any sort of video can be doctored in a way that’s almost indistinguishable from the original. Is there any way journalists or reporters in the field can watermark the content they’re producing, so that it can be shown to be authentic, and not a nefarious copy?

McGregor: I actually wrote about this last summer for Columbia Journalism Review: the issue of authenticating digital content in general, whether it’s a news article that you posted online, a video, a photo, etc. The technologies to do that kind of signing, or “watermarking,” have existed for many years. The question is, how do you make that authentication mark useful to your intended audience? Obviously you can put a literal watermark on something. But because—as you point out—the tools for manipulating content are increasingly sophisticated and difficult to detect, how do you make it clear to a reader that this thing is authentic or not? That’s a more systemic and systematic approach that needs to be undertaken. This is of great research interest to me, because I think that it is technically feasible. It’s something that I’ve talked about with colleagues in computer security. It’s an infrastructure issue, and it’s also a workflow issue. Again, I can put a digital signature on something, but if my audience can’t essentially “read” that in a meaningful way, how much value does it add? I think that in the short term we have to rely on some simple things.
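
Before turning to those simple things, it may help to see how small the core signing step actually is. Below is a minimal sketch in Python using the open-source cryptography package; the keys, the article bytes, and the publisher/reader split are illustrative assumptions, not a description of any newsroom’s real workflow.

```python
# A minimal sketch of the signing step McGregor describes, using the
# open-source "cryptography" package (an assumption; any standard
# signature scheme would do). The publisher signs the article bytes;
# anyone holding the publisher's public key can verify them later.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a keypair once and publish the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"Full text of the article, plus headline and timestamp."
signature = private_key.sign(article)

# Reader side (say, a browser extension): check the signature.
try:
    public_key.verify(signature, article)
    print("Verified: content is unchanged since the publisher signed it.")
except InvalidSignature:
    print("Warning: content was altered or was never signed by this key.")
```

The hard part, as McGregor says, is not the cryptography but the infrastructure and workflow: distributing publishers’ keys and surfacing the result to readers in a form they can actually use.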

A really great first step is for news organizations to serve their own websites over HTTPS, so that readers reach them over a secure connection. It’s really simple to do, even if you’re a small organization; often you can just flip a switch with your service provider to make that happen. The lock symbol in the browser indicates to users that what they’re seeing when they look at your website is actually the stuff you put there. Without that protection, it is possible for someone to intercept and manipulate the content between the time it leaves the news organization’s server and the time it reaches your browser. HTTPS also supports reader privacy, because it keeps outsiders from seeing exactly who is reading what. They can see that you went to the Wall Street Journal, the New York Times, or ProPublica or whatever, but they can’t see which articles you’re reading. So it’s a great user privacy feature as well.

You can take that a step further, if you have a little money, with something called Extended Validation. You may have seen it when you visit your bank’s website: depending on your browser, you’ll see, on the left-hand side of the URL bar, a line of often green text that says something like “Citibank,” the official name of the organization. What that means is that, in addition to being a secure, protected connection (HTTPS), a third party has contacted that organization and confirmed that the domain is controlled by the entity it purports to be. Because in reality there’s not a lot stopping me from going out to an internet service provider, buying the domain name ProPublica2, and publishing things there. The question of authenticity can easily be muddied if somebody buys a domain that looks a lot like an existing one.

Other than that sort of technical measure, at the moment, the Associated Press publishes their news values online, which I think is great. Just being really transparent and up front with readers, so that they understand what your journalistic process is and how they can engage with you, is really valuable.
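
As a rough, hypothetical illustration of that identity check, the sketch below opens a TLS connection in Python and reads the certificate’s subject: the same information a browser draws on for its lock symbol and, with Extended Validation certificates, for the verified organization name. The hostname and the fields returned are illustrative.

```python
# A rough sketch of the identity check a browser performs: open a TLS
# connection and read the server certificate's subject. A plain
# (domain-validated) certificate's subject may hold only a commonName;
# Extended Validation certificates also carry a verified organization
# name. The hostname below is illustrative.
import socket
import ssl

hostname = "www.propublica.org"  # illustrative target

# create_default_context() validates the certificate chain and hostname.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# "subject" identifies the entity the certificate was issued to;
# "issuer" identifies the third party that vouched for that identity.
subject = dict(pair[0] for pair in cert["subject"])
issuer = dict(pair[0] for pair in cert["issuer"])
print("Subject:", subject)
print("Vouched for by:", issuer.get("organizationName"))
```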

Enter: What do you currently perceive to be the greatest threats to our democratic process?

McGregor: On the one hand, there has been, particularly in the last several years, a loss of faith in institutions. And if people don’t believe in our civic systems, then our greatest risk is a loss of participation: a withdrawal of the public from public life, commercial life, and these processes and institutions. That being said, I think that—perhaps counterintuitively—what we’re seeing now seems like a substantial reengagement. We have more people running for office than ever before, and we’re seeing organizing on levels that we haven’t seen in a long time. Whatever happens with the midterm elections, I hope we don’t lose this sense of engagement—either in journalism, or with the public at large. We need to stay involved. On the journalism side, that means continuing to do aggressive investigative reporting, and continuing to follow up not just on the news of the day, but on what’s happening in the policy spaces. These may be less sensational, but they are really important to people’s lives. And likewise, for the public to stay engaged with public institutions: to keep giving feedback to news organizations, keep voting, keep running for office.

Enter: I agree. Now let me rephrase the question a little bit. What technologies do you consider the greatest threats to the democratic process?

McGregor: I have a little bit of a hard time answering that—in part because technologies are what we, as a society, decide they can be. We can build technologies that do terrible things, and we can build technologies that do wonderful things. Sometimes they’re the same technology, in a different context or use case. So I’m less concerned about the technologies than about the lack of regulation and oversight.

Enter: But what if the threat is coming from a foreign player, or from an organization that’s not subject to current legislation?

McGregor: Obviously we’re all thinking about the Cambridge Analytica-Facebook stuff and all of that, right? Regulation of our own organizations would substantially prevent some of the abuses we have seen in recent years.

I just did a report on the General Data Protection Regulation (GDPR). This is now happening in Europe in a very intense way—the EU has been watching what’s been going on with these technologies for a decade or so, and they’ve been very detailed and thoughtful in thinking about it. The GDPR goes into enforcement on May 25th of this year, in the entire EU. It sets very strong limits on what companies can do with an individual’s data, and establishes very strong rights on the part of the data owner—and as an individual, you become a data owner. All of the stuff that happened with Facebook would be absolutely prohibited under the GDPR without a very specific form of consent. Now there have been calls for Facebook to apply GDPR rights and protections to users in the US. Obviously, no one knows how that’s going to happen, but some companies are doing it. You may have gotten a notice, if you use LinkedIn, saying “we’ve updated our practices because of the GDPR.” And there’s really no choice for these companies, unless they’re going to write off Europe. I think it’s quickly going to become a global thing, and it will totally change the landscape of many of the risks and threats that we have been grappling with in the last couple of years.

Enter: What are some of the new ideas you are developing to reshape the future of information distribution and security?

McGregor: One of the key things I’m working on is an authentication scheme. It requires thinking about multiple systems. So it’s not just about the publisher, or the person who puts something online; it’s also about the web browser. I think of it as basically a change log. Let’s say a publisher publishes an article. There are well-established cryptographic methods for labeling that this article was published by this organization, at this time, with these contents. Leaving aside content management systems and all, in principle that’s a very easy thing to do. Then the question is, how does the user understand that? That’s where you have to start thinking about the browser. And the browser can verify that label, with a little message that says, “This article was published by such and such organization at this time, with these contents.” Or, it might say: “This article was updated at this time, in this way.” It’s a system that acts a little bit like a lightweight wiki, in that it shows how things have changed over time—but in a way that is cryptographically signed, so that it can be confirmed independently, and any manipulation would be revealed.
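
A toy version of that change-log idea, under loose assumptions, might look like the following: each revision records a hash of the previous entry and is signed, so a browser (or anyone holding the publisher’s public key) can verify the full edit history link by link. All names here are hypothetical, not part of any real publishing system.

```python
# A toy, hypothetical version of the signed change log: each revision
# commits to a hash of the previous entry and is signed, so the whole
# edit history can be verified link by link.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()  # the publisher's signing key
log = []  # the article's revision history, oldest first

def publish_revision(content: str, timestamp: str) -> None:
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    body = {"content": content, "timestamp": timestamp, "prev": prev_hash}
    payload = json.dumps(body, sort_keys=True).encode()
    body["entry_hash"] = hashlib.sha256(payload).hexdigest()
    body["signature"] = key.sign(payload).hex()
    log.append(body)

publish_revision("Initial story text.", "2018-05-28T09:00Z")
publish_revision("Corrected story text.", "2018-05-28T14:30Z")

# Verifier side (e.g., a browser): re-check every link in the chain.
public_key = key.public_key()
prev_hash = "genesis"
for entry in log:
    payload = json.dumps(
        {"content": entry["content"], "timestamp": entry["timestamp"],
         "prev": entry["prev"]},
        sort_keys=True,
    ).encode()
    assert entry["prev"] == prev_hash, "chain broken: history was rewritten"
    public_key.verify(bytes.fromhex(entry["signature"]), payload)  # raises on tampering
    prev_hash = entry["entry_hash"]
print("Revision history verified.")
```

The design choice that matters is that each signed entry commits to its predecessor, so quietly rewriting an old revision invalidates every later link in the chain—much like a wiki whose page history cannot be edited after the fact.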

Enter: That might really mitigate the risk of some weaponized technologies.

McGregor: I think that this type of system—an authentication system for online digital content—is increasingly important. Because we’ve reached a point where the real-time manipulation of video is really here. You can manipulate the video stream of, let’s say a politician, in real time. If you look closely, you can kind of tell the difference—but for someone who’s not really paying attention, or at low resolution, it’s very difficult to distinguish. And that is scary. But again, because of the way these things work, the question is, how do we use the possibilities of technology to say this is a genuine document, this is a genuine photograph? What does “genuine” mean? Genuine means, perhaps, that I can verify who took the photo, or what organization it belongs to. It doesn’t tell me what’s in it, and it doesn’t prove to me that what’s in it is what someone says is in it.

Enter: Yes, the more subjective aspect.

McGregor: So there are two aspects to verification. There’s the technical side: has this media been altered from its original? And then there’s another side that asks: does the media actually show what it claims to show? Technical systems can help us when someone writes an article and pretends they found it on the New York Times. But there’s much less that technology can do with the question, “Are the statements in this story true?” Because that side of verification—the “truthiness” of material—is both humanly and socially constructed.

Enter: Where are you with all this?

McGregor: Something we are doing at the Tow Center—without making any judgments about a specific story, and through a somewhat automated process—is collecting metadata about publishers: “This website has been around for 10 years, it has a five-person masthead, their domain registration indicates that they are owned by this company, they are based in this place…” Things that make you feel that it is tied to the real world. Is there a human traceable to this website? I think a lot of the skepticism around “media” comes from the concern that something we read was written by a bot, or by a propagandist, maybe based in a foreign country.
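
For flavor, a hedged sketch of that kind of automated metadata collection appears below. It shells out to the standard Unix whois tool (assumed to be installed); WHOIS output formats vary widely by registrar, so the fields matched here are illustrative rather than a description of the Tow Center’s actual pipeline.

```python
# A hedged sketch of automated publisher-metadata collection, shelling
# out to the standard Unix "whois" tool (assumed installed). WHOIS output
# varies widely by registrar, so the fields matched here are illustrative.
import subprocess

FIELDS = ("creation date", "registrant organization", "registrant country")

def domain_registration_info(domain: str) -> dict:
    raw = subprocess.run(
        ["whois", domain], capture_output=True, text=True
    ).stdout
    info = {}
    for line in raw.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip().lower() in FIELDS and value.strip():
            info[key.strip().lower()] = value.strip()
    return info

# E.g., roughly how long a domain has existed and who registered it.
print(domain_registration_info("propublica.org"))
```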

Enter: Some people will still eat that stuff up.

McGregor: It’s still up to the reader, of course, to decide what they care about. But it’s more information, and I think that’s important—especially on social media, where presentation quality is really uniform. A very amateurish story from a random blog gets the same visual treatment and weight as a Washington Post article. And so the cues that a reader has to distinguish between the sources, and the processes behind those sources, are diminished. The question is how we can help surface some of that information so that readers can decide how they feel about the content.

Enter: Is there a final thought you’d like to express?

McGregor: There are things that I told you about, like the real-time video manipulation, that are potentially very scary. But to me, the real takeaway here is that it is all in our hands. We can choose—not just as technologists, or as journalists, but as a society—what technologies are allowed to happen, and what they are allowed to do in our lives. That’s something I don’t see people talking about a lot. We do have agency in this process. We can say we don’t want our data collected, and we don’t want our data shared. It means passing laws, it means enabling oversight and giving it the authority to really penalize transgressors. Technology is not nature; it is not just a thing to which we must reconcile ourselves and be subject. We can, and should, exert ourselves to ensure that it works in service of us, rather than the other way around. So as much as I think that there are scary things happening out there, none of this is inevitable. Get involved with activism, including social media activism, and be vocal. Sure, delete Facebook—and call your congressman. That would be my final thought.

Jeff Greenwald
Jeff is a best-selling author, photographer, and monologist.