The theme of verification ran through all 64 interviews. Managers and senior editors were quick to emphasize the importance of verification in relation to user-generated content, as well as their fears about using material that turned out to be incorrect. Many people discussed famous examples of news organizations being taken in by hoaxes, and their concerns about the same thing happening to them.18 As one senior news manager stated, “I think the biggest issue for us is around verification because that’s where our reputation lies. If we end up putting stuff out which is wrong in any way, fabricated in any way, then our necks are on the line.”
There was, however, little awareness of the specific techniques and processes associated with verifying UGC.19 People knew it needed to be done, but journalists on news desks acknowledged that they did not know enough about how to do it properly. One very honest description of an editorial meeting by a senior editor revealed how the process of verification is regarded in that newsroom:
Verification is always an afterthought. It’s sort of like, “Let’s just get it on air and online and then not worry about it.” It’s always an afterthought. When someone puts something forward in an editorial meeting, you say: “Have you verified it?” And people groan. People are scared of the “v” word. They know it’s going to take a long time.
There was still a sense from many managers and senior editors (who don’t work with UGC every day) that with journalistic experience comes a gut instinct that enables you to know whether something can be trusted. There was also a sense that verifying a piece of content is something that is black and white—something is either true or false, accurate or inaccurate.
When pushed to describe specific technical checks that journalists could run on social content, very few interviewees displayed any knowledge of the different ways geo-positioning and timestamps work on the different social networks, the power of mapping technology, or the information about a digital photograph available via EXIF data.
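As an illustration of the kind of technical check interviewees struggled to describe, here is a minimal sketch in Python of a timestamp plausibility test. It assumes an EXIF `DateTimeOriginal` string and a platform upload time given as a Unix epoch; the field names and the UTC simplification are our assumptions for illustration, not any newsroom's actual procedure.

```python
from datetime import datetime, timezone

def plausible_capture(exif_datetime: str, upload_epoch: int) -> bool:
    """Return True if the claimed EXIF capture time precedes the upload time.

    EXIF DateTimeOriginal uses the 'YYYY:MM:DD HH:MM:SS' format and carries
    no time zone, so we treat it as UTC here. This is a simplification: a
    real check would bracket the comparison by the plausible zone offsets
    for the claimed location.
    """
    captured = datetime.strptime(exif_datetime, "%Y:%m:%d %H:%M:%S").replace(
        tzinfo=timezone.utc
    )
    uploaded = datetime.fromtimestamp(upload_epoch, tz=timezone.utc)
    return captured <= uploaded

# A clip claimed to be shot before it was uploaded passes; one "shot"
# after its own upload time is internally inconsistent and gets flagged.
print(plausible_capture("2013:08:21 06:30:00", 1377070200))  # True
print(plausible_capture("2013:08:22 06:30:00", 1377070200))  # False
```

A check like this only rules footage out; passing it says nothing about whether the content shows what it claims to show.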
It was rare to hear people talk about verification as a process, akin to building a legal case in which journalists look for clues, and in which almost every piece of UGC leaves some doubt about one element of the material. Whether or not to run UGC then rests with the editor of a program, section, or article. Because the process of verification involves so many different factors and variables, these decisions are very rarely a simple case of true or false.
Many people who didn’t regularly work with UGC described social media verification as having the same characteristics as any other type of fact-checking; those who work in roles where social media verification is part of the job, however, talked about it very differently.
The AP has a clear process whereby the uploader of the content has to be verified separately from the events being shown in the footage. Similarly, Storyful verifies the source, date, and location of each video separately—labeling each element as either confirmed, corroborated, or unconfirmed. This information is then shared with clients. Information about the checks carried out by these two agencies is detailed on dopesheets, allowing their clients to undertake their own verification checks if they so wish. Some newsrooms carry out an additional layer of independent verification on material shared by the agencies, but the vast majority do not, believing that verification “is what they are paying agencies for.”
Perhaps most alarming was the ignorance about the problem of scraping, the practice whereby people simply copy pictures and videos and upload them to their own accounts. As someone from an agency warned:
A fundamental problem that the entire industry faces is that user-generated content is used in its most available form as opposed to its original form. It is quite likely legitimate in the sense that it shows what happened, but I think that’s a major problem because it takes away all of the context and all of the original information that is connected to the video. I think it’s because people use technology to surface what’s essentially trending as opposed to finding the original piece of content or tunneling through to see what the original posting was. If there isn’t any original information and context, then what’s been added to it in the version you’re looking at may or may not be true.
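One family of techniques for tunneling back to an original posting is perceptual hashing: near-identical images, such as a re-encoded copy of a scraped video thumbnail, produce near-identical fingerprints, so earlier copies can be found even after recompression. The toy Python sketch below uses a tiny difference hash over a synthetic grayscale grid to show the idea; real systems decode actual image frames and use 64-bit hashes, and this is an illustration, not any agency's actual method.

```python
def dhash(pixels, width, height):
    """Difference hash of a grayscale image given as a flat row-major list.

    Each bit records whether a pixel is brighter than its right-hand
    neighbour, so the hash captures the image's gradient structure rather
    than exact pixel values, surviving mild recompression.
    """
    bits = []
    for row in range(height):
        for col in range(width - 1):
            left = pixels[row * width + col]
            right = pixels[row * width + col + 1]
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

original = [10, 20, 30, 40, 50, 60, 70, 80, 90]      # 3x3 "image"
recompressed = [12, 21, 29, 41, 52, 58, 71, 79, 91]  # slightly altered copy
unrelated = [90, 10, 80, 20, 70, 30, 60, 40, 50]

h0 = dhash(original, 3, 3)
print(hamming(h0, dhash(recompressed, 3, 3)))  # 0: likely a copy
print(hamming(h0, dhash(unrelated, 3, 3)))     # 3: different image
```

Matching hashes across accounts, sorted by upload date, is one way to surface the earliest available posting rather than the most available one.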
Even analyzing the tweets and messages journalists send to people who have uploaded footage to the social Web immediately after a breaking news event demonstrates how rarely journalists think about these dangers. They will ask for permission to use the picture without asking whether the person actually took the photo or shot the video. This clearly raises issues of copyright, but even bigger issues of verification.
The question of checklists and systemized processes was asked of every interviewee. There was resistance to the idea of standardized verification systems, with people arguing that every piece of content is different; on desks where UGC is regularly used, there was an acceptance that staff simply knew which checks had to be completed.
However, the people making decisions about output displayed the most ignorance about the technical checks that could be run, and about how these could be integrated with traditional verification and fact-checking techniques. They were also the most likely to rely on “gut instinct.” More UGC-savvy producers suggested implementing clear flagging or traffic light systems, whereby pressured output editors could quickly see which checks had been run, which elements had been confirmed, and which had been corroborated but not confirmed, visually representing the reality of verification as a sliding scale.
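No interviewee described a concrete implementation, but the traffic light idea can be sketched in a few lines of Python. The element names follow the source/date/location labels used by the agencies; the rule that the weakest element determines the overall colour is our assumption for illustration.

```python
from enum import Enum

class Status(Enum):
    CONFIRMED = "green"
    CORROBORATED = "amber"   # supported by other sources, not confirmed
    UNCONFIRMED = "red"

def traffic_light(checks: dict) -> str:
    """Collapse per-element statuses into one colour an output editor can
    read at a glance: the weakest recorded element wins."""
    for status in (Status.UNCONFIRMED, Status.CORROBORATED, Status.CONFIRMED):
        if status in checks.values():
            return status.value
    return Status.UNCONFIRMED.value  # no checks recorded at all

video = {"source": Status.CONFIRMED,
         "date": Status.CORROBORATED,
         "location": Status.CONFIRMED}
print(traffic_light(video))  # "amber": date corroborated but not confirmed
```

A record like this also preserves which individual checks were run, so an editor under deadline pressure sees the sliding scale rather than a bare yes/no.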
Who Should Do Verification?
The question of who should do verification differed from newsroom to newsroom. The model of the BBC’s UGC Hub has not been replicated on a similar scale, although there were certainly newsrooms that realized the importance of creating special desks, even if they only contained one person specifically working with UGC. These special desks were predominantly focused on content coming from Syria, and increasingly Egypt, and were frequently staffed by Arabic speakers. But again, the emphasis was on these people’s knowledge of the location and language, and therefore their ability to discover original content and cross-reference it with local expertise. It was very rare that staff on these desks had been given specific training on verifying online content.
Overall, there was a sense that all journalists should be responsible for verifying the content they discover, despite very little specific training (with some exceptions, notably ABC Australia) on verifying content discovered on the social Web or sent in to newsrooms via email or upload technology. Only one senior manager discussed the need to train people in making both editorial and technical assessments of content sourced from the Internet.
And while there was agreement that skills have improved slightly—for example, more people know how to do a Google reverse image search or how to identify fake Twitter accounts—there was also an acknowledgement that there is quite a lot of ignorance about ways to verify content systematically. As always with digital skills, it is impossible to know what you don’t know, and many people are unaware of what is possible, whether it is using Wolfram Alpha to check weather reports for a certain location on a specific day, checking who registered a website or blog, or examining the information included in EXIF data. There did appear to be a trend among television journalists of sending material they discover to the online team, in the belief that people working in the online space have a better sense of whether something can be trusted.
Reliance on News Agencies
There is a significant reliance on the agencies for verification, and the majority of newsrooms do not run additional checks. As one foreign editor said:
AP says that when they put that material out from YouTube, they have done the same verification process that they would do with writing a wire story […so] we wouldn’t do an additional verification on that, because if we did that, we’d do it for every single story they put out.
The strength of the agencies is their networks on the ground. Both the AP and Reuters talked in detail about the role their regional bureaus play in sourcing content, and the importance of their local knowledge and language expertise when verifying it. They are very aware of the need to talk to the person who has supposedly shot the footage in order to strengthen the verification process.
It is worth noting that there are different approaches to verification at the AP and Reuters. The AP has standardized technical checks and processes carried out in London after content has been discovered and filtered by the regional bureau that found it. Reuters relies similarly on its bureaus but, while the content is sent to London for cross-checking, there isn’t a formal procedure for verifying social content.
Some newsrooms, mainly public service broadcasting organizations in Europe, do additional checks on content from the agencies. There was a view among them that you cannot “outsource verification.” But even the organizations that run additional verification checks on UGC from agencies recognize that the agencies’ pre-vetting becomes one consideration in their own verification process.
Verification and the Audience
Certainly none of the verification processes or checks undertaken by the newsrooms were shared with the audience, either on television or online. The only mention of verification was the phrase, “These pictures cannot be independently verified,” which is heard very often when UGC is aired. We heard a great deal of soul-searching about this phrase.
On the one hand the AP has “abolished the phrasing, ‘This cannot be independently verified,’ ” and a report published by the BBC Trust in the summer of 2013 advised that it should not be used on screen or in script20 (although we saw some occurrences during our sample period). In other newsrooms, however, it is a standard description, especially when referencing content from Syria.
There is a shared awareness that because it is rarely possible to be 100 percent certain about the veracity of a piece of UGC, this phrase acts as a type of insurance policy in case it turns out that the content has been manipulated or misattributed. Some editors and journalists actually saw it as a mechanism of being honest with the audience. As one editor stated, “Particularly where Syria is concerned, if we’re not 100 percent sure that it is what it purports to be then, yes, we will say [this cannot be independently verified]. I have absolutely no issue, and neither does the channel, in being honest with the audience, and I don’t see that changing at all.”
But there was also concern that this phrase can frustrate the audience and undermine trust, as it suggests that verification checks have been completed inadequately, and fails to communicate the checks and internal newsroom conversations that have taken place in deciding whether or not to use the content. As one journalist said, “I don’t necessarily like that we have to say it but you’ve got to. We trust our journalists, and we trust our contacts on the ground enough to know that this is what it says it is, but we’ll put the caveat in.”
Another producer felt that with certain stories, if a video illustrated a pattern that other sources confirmed, being unable to absolutely verify it in terms of date or exact location didn’t matter if it visualized something important:
I would say that, for example, with the barrel bombings [in Syria] you can justify putting user-generated content onto a site and saying, “We have not verified this is true, but we know that this is happening all over the country,” and if we have videos of places being barrel-bombed, we would say it is justifiable to push it out in that way as long as you are informing people that they haven’t been 100 percent verified.
There were different perceptions about the role of public verification, or publishing content before it has been fully verified in the hope that the crowd can help with the process. Andy Carvin made this form of crowdsourced verification famous during the Arab Spring when he began using Twitter as a mechanism for understanding what was happening on the ground. The same idea of collaborative verification is now happening within Storyful’s Open Newsroom community on Google+.
One journalist raised this issue of publishing content before all verification checks had been completed. He explained that in some situations, when they believed it was in the public’s interest to see certain images, a write-up would be published and a link included to the unverified footage with a disclaimer. It would then be updated with verification information when it was completed. This was a rare position, however, as almost all other newsrooms we spoke with were adamant they would not publish content unless they believed it to be accurate.
In our interviews, newsroom members regularly cited the pressure they are under to publish content before all verification checks are complete. One senior editor at the AP said, “I would always rather be last to a story than first to be wrong to a story, and, you know, last to be right with UGC is not a dishonorable place to be.”
While many shared this view, newsrooms with audiences and direct competitors face a slightly different pressure than the agencies do. As someone who used to work on a newsgathering desk said:
There is still way too much pressure within news organizations to get stuff up on air before it’s properly verified, before the proper questions have been asked, and there’s just no excuse for that. And no matter what anybody says, in any news organization that absolutely exists and is an issue.
As someone who works on verifying content within a large newsroom put it:
There’s such pressure to get things on, especially if they’re watching the competition, and they’re running with stuff, and so we have to be really steadfast and put our foot down. Even though producers know they should be verifying it, I can see them being overtaken by the pressure to get it on air.
The pressure newsrooms feel they are under to “tweet first, verify later”21 is a symptom of a news environment where a scoop today lasts 20 seconds at most. You can no longer stay first for long, and as many commentators have discussed, journalists are the ones who are obsessed with the notion; audiences rarely notice.22
Overall, verification is the main area of concern when it comes to user-generated content. Yet despite this anxiety, very few newsrooms use systematic verification procedures, and journalists feel they lack adequate expertise and would like more specific training. Managers are preoccupied with verification but have very little knowledge of the specific technical checks that can support the editorial information sourced from uploaders and local experts. There is a striking disconnect between managers and those who work with user-generated content, probably because content sourced from social media did not exist when managers last worked on a news desk.
As one senior editor said, “In terms of the verification processes, it’s very hard. We do our best, but every case is different. There’s no system you can set up that makes that work.”
In fact, as newsrooms are under more and more pressure, it is even more important that there are systematic procedures in place that can provide clear guidance to the output editors about which checks have been completed, and the level of confirmation regarding specific facts about the footage.
The agencies currently play a critical role in verifying the content that appears on people’s television news screens. However, because newsrooms themselves have limited knowledge of the checks and procedures that can be carried out on content sourced online, they find it difficult to question the agencies about the checks that have already been done.
The pressure on people to publish quickly will only continue, but there seems to be a growing recognition that newsrooms need to use verification and context to differentiate themselves. Research by the Pew Research Center in 2012 revealed that YouTube is becoming a major worldwide platform for viewing news: in five of the 15 months spanning 2011 and early 2012, the most searched term of the month on YouTube was a news-related event, according to the company.23 With audiences increasingly seeking out eyewitness footage on social media, news organizations have to distinguish themselves. As one journalist admitted, “People get [news-related pictures and videos] on Twitter anyway, without the verification, so if you’re going to use it on air, what you’re going to have to bring the audience is the story behind [the pictures].”
19 In January 2014, The Verification Handbook was published by the European Journalism Centre. It was edited by Craig Silverman and includes detailed explanations of how to verify digital content. It is available at http://verificationhandbook.com.
20 BBC Trust (2012) and BBC Trust (2013).
21 This is the title of a study by Nicola Bruno, published in 2011.
22 M. Ingram, “The Future of News Isn’t About Breaking News Scoops, It’s About Credibility and Trust,” GigaOm, 7 May 2014.
23 “YouTube & News: A New Kind of Visual Journalism,” Pew Research Center: Project for Excellence in Journalism, 2012.