Category Archives: Conferences

Varieties of online gatekeeping

This week, I’ll be at the Rethinking Journalism II workshop organized by Chris Peters and others at Groningen University in the Netherlands.

I’ll speak Friday about varieties of online gatekeeping, and how we might analyze them. I don’t have the answer, but I’m working on ways of asking the question that are intellectually interesting and practically useful, so I’m looking forward to feedback and suggestions from the workshop participants and from others interested in the topic.

My starting point is the notion of “gatekeeping”, used by journalism scholars to capture how news organizations filter information before it is passed on to users, and the observation that news organizations no longer occupy as central or singular a role in this filtering work as they once did, as people increasingly rely on search engines like Google and social networking sites like Facebook and Twitter to access news.

Sometimes, people will talk about these digital offerings as ways of getting “direct” access to information, as examples of “disintermediation”, but of course Google and Facebook filter information too, based, for example, on the PageRank algorithm and the EdgeRank algorithm. If we want to understand how journalism works today and how people get informed about public affairs, we need both to understand these new digital intermediaries as forms of online gatekeepers and to examine their interplay with more traditional forms of editorial gatekeeping.
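
To make the “link-based” part concrete: PageRank, in its original published form, ranks pages by how the web’s link structure distributes attention. Below is a minimal textbook sketch of that idea in Python; the toy four-page graph is made up for illustration, and this is the 1998 formulation, not whatever Google actually runs in production today.

```python
import numpy as np

def pagerank(adjacency, damping=0.85, iterations=100):
    """Textbook PageRank via power iteration over a small link graph.

    adjacency[i][j] = 1 means page i links to page j. This is the published
    1998 formulation, not the production system any search engine runs today.
    """
    A = np.asarray(adjacency, dtype=float)
    n = A.shape[0]
    out_degree = A.sum(axis=1)
    # Row-normalize outgoing links; pages with no outlinks spread rank evenly.
    transition = np.where(out_degree[:, None] > 0,
                          A / np.maximum(out_degree[:, None], 1),
                          1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        rank = (1 - damping) / n + damping * transition.T @ rank
    return rank

# A toy four-page link graph: page 2 ends up on top, because page 3 passes
# all of its own rank on to it.
links = [[0, 1, 0, 1],
         [0, 0, 1, 1],
         [1, 0, 0, 1],
         [0, 0, 1, 0]]
print(pagerank(links))
```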

Below is an extended version of the abstract I’ve submitted. I’ll be working on this in the spring, both on getting the question right and on actually making progress on fleshing it out empirically, so any and all comments are welcome.

Varieties of online gatekeeping: a cross-national comparative analysis of news media websites, search engines, and social networking sites as gateways to news

By Rasmus Kleis Nielsen (Roskilde University and the University of Oxford)

News media organizations like newspapers and broadcasters have long functioned as gatekeepers between news and audiences, but with the rise of digital media, the search engines and social networking sites that are central to how most people navigate online increasingly complement news media organizations as gatekeepers shaping what is displayed as news.

Journalism scholars have traditionally focused on the role of journalists and news media as gatekeepers (see e.g. Shoemaker et al, 2009), but a growing number of researchers (e.g. Barzilai-Nahon, 2008; Chin-Fook and Simmonds, 2011; Hintz, 2012; Introna and Nissenbaum, 2000) have highlighted the need for a broader approach to gatekeeping in wider networked information environments where technology is increasingly integral to traditional gatekeeping practices (Anderson, 2011; Thurman, 2011; Coddington and Holton, 2013; Meraz and Papacharissi, 2013) and where non-journalistic actors too serve as gates between news and audiences.

In this paper, I adopt such a broader approach and outline three varieties of online gatekeeping that each integrate different technologies in the gatekeeping process, but do so in different ways and for different purposes. The three varieties are (1) editorially-based gatekeeping processes (typically defining what information is displayed as news on news media websites), (2) link-based gatekeeping processes (the core of how search engines like Google select what information is displayed as news), and (3) affinity-based forms of gatekeeping (the operating principle behind how social networking sites like Facebook determine what information to display in users’ news feed).

Journalists, often working in legacy news media organizations, still play a key gatekeeping role in terms of defining what information constitutes “news”, and news media websites remain amongst the most important gateways to news online. But they are increasingly supplemented by other, second-order online gatekeepers like search engines and social networking sites that, while rarely producing original content defined as “news”, increasingly serve as alternative and supplementary gateways shaping, through link-based or affinity-based gatekeeping processes, what information people come across as news online. Even as journalists and news media may feel they are being “dis-intermediated”, new digital intermediaries are arising (Pariser, 2011; Foster, 2012; Nielsen, 2013).

On the basis of the Reuters Institute Digital News study (Newman and Levy, 2013), a representative survey of online news users conducted in 2013, I proceed from these three varieties of online gatekeeping to present a cross-national comparative analysis of their relative importance in seven developed democracies with different media systems (Denmark, France, Germany, Italy, Spain, the UK and the US).

The comparative analysis demonstrates significant variation in the relative importance of each type of online gatekeeper from country to country, as well as in-country variation by age, but it also documents that search engines and social networking sites (overwhelmingly Google and Facebook) have in less than a decade come to rival news media websites in importance as gateways to news across all seven countries covered.

Editorially-based online gatekeepers are the most widely used way of finding news online in countries like Denmark and the UK (with strong newspaper brands and public service broadcasters), link-based online gatekeepers (overwhelmingly Google) represent the most widely used gateway to news in countries like France and Italy (with weaker newspapers and public service broadcasters), and affinity-based online gatekeepers (most importantly Facebook) are the most widely used gateway to news amongst online news users in Spain (currently experiencing a major crisis of institutional legitimacy impacting legacy media as well as political institutions).

Editorially-based gatekeepers will remain important for the foreseeable future (especially as television remains the number one source of news for most people in most countries). But as online news becomes a more and more important part of people’s cross-media news habits in most countries, link-based and affinity-based online gatekeepers are likely to become more important parts of our networked news environment, raising new questions concerning what media pluralism means in an increasingly convergent world, concerning what information is made available to citizens and how, and concerning the future of journalism and its role in democracy.

Future of Journalism, Cardiff Conference round-up

I spent the last two days in Cardiff for Bob Franklin’s biennial journalism studies conference hosted by the School of Journalism, Media and Cultural Studies (JOMEC). Lots of good stuff and great to see folks and catch up on interesting work being done around the world. (Full program here, abstracts of all papers here.)

Three take-aways from panels and discussions I attended (more at #FoJ2013 on Twitter for those interested)—

First, local and regional journalism and news information environments–

It was very refreshing to see several very good pieces of empirical research on the particular questions concerning local and regional journalism and news information environments in different contexts. I was particularly impressed with the work being done by Andy Williams and colleagues on local and hyperlocal journalism in the UK, Julie Firmstone and Stephen Coleman’s work-in-progress on the local information environment in Leeds (including studies of the city council, legacy news, and new digital sites), as well as research by Piet Bakker and colleagues from the Netherlands on developments there. Very good stuff. It would be great to see more studies from other countries so we can develop a more comparative understanding of what is going on with local news and information environments in different contexts. (Some work has been done in the US too.)

Second, the ubiquity of the New York Times–

It is clear that the New York Times continues to hold enormous sway over the imagination of both journalists and journalism studies scholars thinking about digital and digital strategy. As Piet Bakker rightly remarked after Robert Picard’s keynote lecture, “everyone talks about the same three examples: the New York Times, financial newspapers like the Wall Street Journal and the Financial Times, and the Guardian.” Of course, all of these are highly unusual cases, from which we can probably learn relatively little about how digital is developing and working out for other news organizations, including top titles in small national markets (that is, much of Western Europe), but also, apropos my point above, local and regional newspapers like the Western Mail in Wales (studied by Williams et al), the Yorkshire Post (studied by Firmstone and Coleman), and their equivalents in other countries. As I’ve argued before—as many others have—even if we have to recognize the empirical fact that the New York Times figures prominently in how lots of people talk and think about digital strategy, the actual news organization and company itself probably can’t even tell us much about how other US newspapers are faring, let alone how newspapers elsewhere are faring. There’s an analogy here to the role for example the Barack Obama campaign plays in discussions of digital politics. (As Oscar Westlund pointed out in one discussion, it’s well known from studies of organizational learning that you often make your biggest mistakes when you learn from the wrong examples.)

Third, lots of good, theoretically and methodologically diverse, work on digital–

Journalism studies continues to catch up on digital: lots of good work on innovation, the integration of new technologies in newsrooms and work practices, and how ordinary people engage with news through digital, as well as some work across platforms that takes digital seriously without giving up on legacy or ignoring legacy media’s enduring importance. The field of journalism studies, from my impression, has done a better job of overcoming sharp analogue/digital distinctions and “old media”/”new media” binaries than many other areas of media and communication studies including, I hate to admit as someone who also has an intellectual home there, parts of political communication research. It is also good to see how a conference like this draws not only people who consider themselves journalism studies scholars, but also a sizable contingent of audience researchers (very interesting papers by Regina Marchi from the US and by Tim Groot Kormelink and Irene Costera Meijer from the Netherlands on tailor-made news), a few media economists, people studying management, etc. This kind of diversity is surely a necessary part of understanding journalism today.

Prospects for global and national news, what about local?

Developments at leading national newspapers building their (paying) digital audience both in-country and internationally give reason for some cautious optimism concerning the future of global and national news, but it is not clear that we can learn much from the models rolled out at these papers when it comes to the important question of the future of local and regional news.

That’s one of my takeaways from a fabulous 30th Anniversary Weekend celebrating the Reuters Institute’s fellowship program for journalists from around the world. (The program’s 30th anniversary, not mine…)

In addition to a great chance to catch up with fellows and friends from around the world, the weekend provided for several interesting discussions of developments in the business of journalism around the world, with presentations by the new New York Times Company CEO Mark Thompson, Natalie Nougayrede, the editor-in-chief of the French daily newspaper Le Monde, and John Stackhouse, editor-in-chief of the Canadian Globe and Mail.

A few highlights from their presentations—

  • All of them recognized the challenges their organizations have faced and still face in a changing media environment, but all also spoke with confidence and vision about a future in which an expanded range of editorial content across more platforms and a greater reliance on reader (viewer/listener/user) payment will continue to provide us with great journalism.
  • All stressed their ambition to stand out from the empty calories of breaking-news “churnalism” and create value for their users (in Nougayrede’s words: we need to get it first, but also get it right and get it rich). Mark Thompson in particular spoke out strongly against the idea of “paid advertorials” or “native advertising” blurring the line between editorial and advertising.
  • All predicted hybrid print-digital models for the foreseeable future—Mark Thompson said that internal modeling at the NYT suggests that print will be a key part of the company and news organization “for much longer than many people imagine”.
  • All presented business models based on print sales and advertising revenues combined with digital advertising, an increased emphasis on digital sales, and an expanding range of ancillary products based on each news organization’s brand and reputation (conferences, seminars, etc).
  • All of them head news organizations with lower revenues today than in the 1990s, but all also painted a picture where the coming years may look better than the dramatic declines of 2007-2012.

It was all very interesting, and many of the journalists in the audience remarked that it was refreshing to hear such confident and relatively upbeat presentations after years of doom and gloom, at least in North America and much of Europe.

Of course, all three speakers are keen to promote their vision for their respective titles and to boost their future prospects, but I thought every one of them remained mostly reality-based in their presentations, and I agree that there are reasons for cautious optimism when it comes to the future of titles like the New York Times in particular, but also nationally dominant quality brands with some potential for global reach like Le Monde (Nougayrede spoke of plans to expand the title’s presence in Francophone Africa, where some countries have a sizable and growing professional class).

But, just sticking to the case of the New York Times and Le Monde, I continue to wonder how much we can really learn from their experience when it comes to the future of other newspapers, especially local and regional newspapers and newspapers in smaller countries.

Mark Thompson quoted Chris Anderson, Emily Bell, and Clay Shirky’s important caveat in their report on “Post-Industrial Journalism” from last year, to the effect that any statement in discussions of the future of journalism that begins with “let’s take the New York Times as an example” ought to be discounted, because the NYT’s experience really isn’t representative of anything else. It is in a set of one, and only a few other titles, perhaps the Financial Times and the Wall Street Journal, can be compared with it in any meaningful way.

Even the step from being a leading national title (with considerable global potential) in a market of 315 million to being amongst the top titles in a market of 60 million (like France) with less global potential is huge when it comes to reaching critical mass in both digital advertising and digital sales. The situation in a country like my native Denmark, with a population of 5.5 million (less than New York City or Paris alone), is of course very different.

For both national papers in smaller countries and regional and local papers in bigger ones, the problem of critical mass—which is central both when it comes to digital advertising revenues and digital sales, where people often speak of aiming for a “conversion rate” of maybe 1 per cent of monthly unique visitors signing up as paying readers—is even more pressing than at titles with potential for global reach. Mark Thompson called the New York Times a relative minnow when it comes to digital advertising (volume is required both to generate revenue and to enable good behavioral targeting of advertising). He is right, of course, when comparing the Times to Google, Facebook, etc, but if the NYT is a minnow, I don’t know what the word would then be for other newspapers.

We can see the problem of reaching critical mass in small countries like Denmark, where even the top national newspapers are in a situation quite different from that of the New York Times or even Le Monde. The New York Times can support a newsroom with more than 1,000 journalists in a country of 315 million (with roughly a further ten percent of its digital subscribers coming from the rest of the world). Divide that by 60 to get to Danish size, and you would have a newsroom of fewer than 20. Even after several years of cost-cutting, the newsroom at a title like Berlingske (currently being shopped around by its debt-burdened British owners Mecom) is much, much larger than that, but it remains an open question how long it can sustain such an investment in quality journalism.
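
To make the scale problem concrete, here is the back-of-envelope arithmetic behind the “divide that by 60” point and the one-per-cent conversion rule of thumb mentioned above, written out as a tiny Python script. All the inputs are the rough figures quoted in the text, and the two-million-unique-visitors audience is a purely hypothetical example, not data about any particular title.

```python
# Back-of-envelope arithmetic from the paragraphs above; rough figures only.
nyt_newsroom = 1000            # journalists the NYT can sustain (approximate)
us_population = 315_000_000    # US population as cited above
dk_population = 5_500_000      # Danish population as cited above

scale_factor = us_population / dk_population            # roughly 60
danish_equivalent = nyt_newsroom / scale_factor
print(f"NYT-style newsroom scaled to Danish market size: ~{danish_equivalent:.0f} journalists")

# The "conversion rate" rule of thumb: ~1% of monthly unique visitors paying.
monthly_uniques = 2_000_000    # hypothetical audience for a mid-sized title
conversion_rate = 0.01
print(f"Paying digital readers at a 1% conversion rate: ~{monthly_uniques * conversion_rate:,.0f}")
```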

We can also see the problem of critical mass at the hundreds of local and regional daily papers that make up the majority of the US newspaper industry, employ the majority of US journalists, and produce much of the independent coverage of public affairs. The Newark-based Star-Ledger, for example, is critical in terms of covering New Jersey’s notoriously corrupt and incestuous politics (what other Western newspaper has a section simply called “corruption”?). Based across the Hudson River from the New York Times, it is losing money even though it has a daily paid circulation larger than Le Monde or several Danish daily papers combined (over 340,000 on weekdays). And though it is growing its digital subscriber base, it is hard for the Star-Ledger to build a digital business, as many of the readers it caters to already get some local news for free via broadcast and web from local television stations etc, and many get much of their national and international coverage from national sources or larger neighboring newspapers like the Philadelphia Inquirer (in South Jersey) and the New York Times (in North Jersey). The recent trouble at AOL’s Patch and Advance Publications’ decision to scrap AnnArbor.com suggest that online-only hyperlocal journalism is even harder to sustain on a commercial basis. The problems of rapidly eroding print revenues and very limited growth in digital revenue also bedevil much of the European local and regional press (though there seem to be some exceptions, such as Finland, where the regional press seems to be doing better than the national press).

So, even though the presentations by Mark Thompson and even Natalie Nougayrede from Le Monde provided some reasons for cautious optimism when it comes to the future of global and national news, I’m not sure we can learn all that much from the experience of the New York Times when it comes to newspapers in smaller national markets or when it comes to regional or local newspapers.

This problem—especially the future of local and regional news—is thus intellectually distinct from the problem of the future of national and global news, and of separate importance not only for the business of journalism, but also very much for democracy. Though the news tends to focus on national politics and national issues, most of our lives are lived locally, and much of our politics and government plays out locally, watched and reported on by ever fewer journalists. That, I think, is a problem in terms of accountability and in terms of the independent sources of information available to citizens about local public affairs.

American Political Science Association 2013 annual meeting round-up

I’m not going to try to summarize everything I went to at the American Political Science Association 2013 annual meeting, nor go into detail on all the great papers given by colleagues, friends, and complete strangers, but just highlight a few key take-aways for me.

Lots of laurels—

1)      At the political communication pre-conference, I was very happy to see some great work done on local news, local-level political campaigns, and the interaction between the two. I’m looking forward to seeing what comes out of Frankie Clogston and Joshua Darr’s doctoral work; both gave very interesting presentations. We know much about local politics from political science, but less than we ought to about local news and local political communication.

2)      I found very interesting the comparative work presented by James Stanyer and others on “intimate politics”: the ways in which politicians, often deliberately and for strategic reasons at first, but in the longer run with unintended and uncontrolled consequences, seek out soft news coverage of what might conventionally be considered their “private life”.

3)      And then I was reminded how deeply engaged a subset of political communication researchers are with psychology. I’m far from fluent in this area of research, but I went to some interesting presentations based on affective intelligence theory, elaboration likelihood theory, and theories of motivated reasoning working to advance our understanding of how people process political communication, form attitudes, and ultimately act (or not) politically.

But also some darts—

1)      The study of information technology and politics is gradually infiltrating other areas of political science, as it should, as digital technologies become ubiquitous, domesticated, integrated into everyday life (mundane media rather than “new media”). This is a good thing. Digital media are far from everything, but they are increasingly everywhere. Now it would be nice if the latecomers to the party would read up on the research done by those, political scientists and others, who arrived early. A lot of people even today build their arguments against strawman positions taken from early digital utopians like Nicholas Negroponte, or cite papers from the late 1990s and early 2000s as if the internet were still the same thing. (See Dave Karpf’s great piece on the problems of research in internet time.)

2)      Political communication scholars based in political science departments rarely read the work of media and communications scholars. (Sigh.) They should: even though they’ll find some of the theoretical and methodological vocabulary foreign, there is much to be gained from a better understanding of how media systems are changing, how audiences engage with new media, how journalism is evolving, and how digital media rarely constitute a separate dimension but are part of cross-media use, cross-media news coverage, and cross-media strategic communication. It is called political communication for a reason.

3)      Tons of people study Twitter rather than Facebook, often because it is easier to access the data. Twitter is important, but it’s hugely problematic if we let data availability guide our overall understanding of social media in politics. It is a parallel to the tendency in journalism studies to study print media/textual news rather than broadcasting/audio-visual news, in part because it is easier. Just as we need more studies of broadcast news (also in a changing media environment where TV is increasingly intertwined with other media, but still the single most important source of news for most citizens in many countries), we need more studies of Facebook and YouTube, far more widely used than Twitter by ordinary people.

An ever-more unequal playing field? Campaign communications across digital, “earned”, and paid media

Cristian Vaccari and I will be presenting a first slice of our 2012 data on campaign communications in competitive U.S. congressional districts across digital media, “earned media” (news coverage) and paid media (campaign expenditures on advertising, canvassing, direct mail, online marketing, etc) at the American Political Science Association 2013 annual meeting in Chicago.

We show that most of these forms of campaign communication are highly unevenly distributed. A minority of candidates draw far more supporters, attract more news coverage, and raise more money than the rest, even when one looks only at major party candidates (Democrats and Republicans) running in similarly competitive districts.

Contrary to the view that the internet may help “level the playing field”, we find that popularity on digital media like Facebook is in fact far more concentrated than both visibility in mainstream news media and money raised and spent during the campaign. (This is in line with Matt Hindman’s earlier work on the winner-take-all tendencies of much political communication online.)

Three key empirical take-aways from the paper—

  1. Most candidates draw limited news coverage and few supporters on social media like Facebook and Twitter, and hence remain highly dependent on paid media to reach voters, despite the fact that almost all of them use almost all the digital media at their disposal (websites, Facebook, Twitter, YouTube, etc).
  2. In both 2010 and 2012, paid media is unevenly distributed, earned media/news coverage is more unevenly distributed, and digital media/social media followings are the most unevenly distributed. (Social media in 2010 are discussed in greater detail here.)
  3. The general (uneven) pattern is the same in 2010 and 2012. If anything, the inequality increases, especially in the case of digital media. Hence the notion of an ever-more unequal playing field as digital media—the most unevenly distributed form of campaign communication we examine—becomes relatively more important.

Abstract below, full paper here. We’d be interested in comments as this is work in progress, and we are especially keen to hear how best to compare the apples and oranges of digital media, earned media, and paid media in a meaningful way.

An Ever-More Unequal Playing Field? Comparing Congressional Candidates Across Digital Media, Earned Media, and Paid Media

Rasmus Kleis Nielsen (Roskilde and Oxford)

and

Cristian Vaccari (Royal Holloway and Bologna)

Abstract

In this study, we analyze patterns of digital media, earned media, and paid media performance among major-party candidates in competitive U.S. Congressional districts in the 2010 (N=112) and 2012 (N=120) election cycles. Based on standard concentration indices, we analyze the distribution of (1) interest from internet users (“digital media”), (2) visibility in news coverage (“earned media”), and (3) campaign expenditures (as an indicator of “paid media” like direct mail, television advertising, and online marketing) across a strategic sample of 464 candidates engaged in competitive races for the House of Representatives. We show that most of these forms of campaign communication are highly concentrated. A minority of candidates draw far more supporters, attract more news coverage, and raise more money than the rest. Contrary to the view that the internet may help “level the playing field”, we find that popularity on digital media like Facebook is in fact far more concentrated than both visibility in mainstream news media and money raised and spent during the campaign. By 2012, the most popular candidate in a district drew on average almost nine times as many social media supporters as her direct rival, compared to three and a half times as many local news stories and about four times as many dollars spent. The differences in terms of digital media and paid media had both increased since 2010, while the differences in terms of earned media had decreased. Thus, while success on the internet might occasionally benefit challengers and outsiders in US major-party politics, the overall competitive environment on the web is far from a level playing field and may in some ways exacerbate inequalities between resource-rich and resource-poor candidates. As digital media become more important parts of the overall communication environment, we may thus be moving towards a more uneven playing field.
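
The abstract refers to “standard concentration indices” without naming a specific measure, so purely as an illustration (and not necessarily the index used in the paper), here is one common choice, the Gini coefficient, applied to some invented per-candidate counts.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative counts: 0 = perfectly even, near 1 = maximally concentrated."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    cumulative_share = np.cumsum(x) / x.sum()
    return (n + 1 - 2 * cumulative_share.sum()) / n

# Invented illustrative numbers, not data from the paper: a handful of candidates'
# Facebook supporter counts versus their local news story counts.
facebook_supporters = [120, 450, 900, 1_100, 2_300, 48_000]
local_news_stories = [15, 22, 30, 41, 55, 120]
print(f"Gini, Facebook supporters: {gini(facebook_supporters):.2f}")   # far more concentrated
print(f"Gini, local news stories:  {gini(local_news_stories):.2f}")
```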

2013 ICA conference round-up

Back from the International Communication Association’s 2013 Annual Conference in London. These things can be hit and miss, with thousands of researchers presenting papers in a multitude of parallel sessions, but this one was a hit for me. I had a very good conference, catching up with colleagues and hearing some interesting presentations on changes in political communication, innovations in journalism, and the increasing importance of various forms of algorithms in shaping our information environment.

There are far too many good things to properly summarize or name-check here, so instead I’ll zoom in on a continuing concern of mine that the conference did nothing to dispel—namely that most theoretical and empirical work on the implications of the rise of new digital technologies for how we communicate (and by extension for democracy, social relations, etc) focuses on the intersection between spectacular cases and early adopters.

Basically, we have many more studies of digitally savvy and highly wired elites and of cases like Andy Carvin’s coverage of the Arab Spring, Kony 2012, the Obama campaigns, etc than we have of how the wider population and ordinary activists, journalists, and politicians engage with and connect in part through digital media.

We need more studies of ordinary people, we need more studies of ordinary organizations, we need more studies of ordinary campaigns. And we need more studies of failures. Not because early adopters and spectacularly successful campaigns do not matter. They do. But because they are not representative of the general experience, and because they are not necessarily forerunners for where everyone else will eventually find themselves. They are outliers on very skewed distributions, low-probability events with high visibility and sometimes high impact. They rarely represent the future modal outcomes.

I know studying the role of the internet in your local newspaper or local community activists’ daily work, or amongst working class folks in a suburban community, isn’t as sexy as studying the Guardian/Wikileaks collaboration, the Gezi park mobilization, or whatnot. But we need such studies too to understand what is actually going on, and what will actually be going on in the coming years. So here’s to a repeat of the late Susan Leigh Star’s call that we study (seemingly) boring things.

Data-Crunched Democracy

I spent the day at Data-Crunched Democracy, an excellent conference organized by Daniel Kreiss and Joe Turow focused on the increasingly important role of “big data”, quantitative data analysis, and formal modeling in US political campaigns.

It was a very rewarding day with many interesting discussions and presentations by campaign staffers, consultants, lawyers, and others who had been involved in the 2012 campaign cycle.

It’s often hard to follow what’s actually going on in this space without speaking to those involved because, as Lois Beckett from ProPublica, one of the few journalists who have covered this area, put it, “many campaign people lie to journalists about micro targeting and data use”. So, with that warning and caveat, a few take-aways from a rich day—

Where are well-resourced US campaigns at in terms of using data? As Rayid Ghani (Chief Scientist, Obama for America) reminded us, data-based modeling is probabilistic and mostly aimed at marginal improvements in how resources are allocated for messaging, mobilization, fundraising, etc. It’s not a magic bullet, not necessarily as powerful or nebulous as some would suggest, and generally not as developed as the use of behavioral modeling is in much of the corporate world.

Ghani explained that big data-based modeling is hard to do in politics because of the low frequency of the behavior you are trying to model (voting, for example, is not something we do that often) and because the context is important and can change dramatically from election to election (2004 versus 2008, etc). Targeting is–and several speakers, including Carol Davidsen from Obama for America as well as Alex Lundry and Brent McGoldrick, who were both involved in the Romney campaign in various roles, underlined this–certainly getting better and better at predicting people’s political behavior, but it remains probabilistic, and this is too often overlooked and/or misunderstood in public discussions surrounding the use of data by campaigns.

Modeling is also hard because, though much data is available in the US after more than a decade of database-building, by the standards of computer scientists it is not much. As Ghani put it—and he worked for Obama—“this is the smallest dataset I’ve worked with.” In insurance, banking, health, and many areas of marketing, the datasets are much bigger and more detailed. (And one can easily imagine why—the resources available in those sectors are bigger than even the biggest political campaigns, let alone more ordinary campaigns for Congress etc.)

Right now, campaigns still focus on modeling people’s (a) propensity to vote and (b) likelihood of supporting one or another candidate. Ghani suggested that in the future there will be more focus on modeling “persuadability”: predicting not only how people are likely to behave, but also how susceptible they are likely to be to specific kinds of communication from campaigns.
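
As a rough sketch of what modeling (a) turnout propensity and (b) candidate support looks like in its simplest form: two logistic regressions over a voter file, each producing a per-person probability. Everything below, the features, the synthetic data, and the simple prioritisation score, is hypothetical for illustration; it is not how the campaigns discussed here actually build their models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical voter-file features: age, elections voted in over the last five
# cycles, and same-party registration. Illustrative only, not real campaign data.
n = 5000
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.integers(0, 5, n),     # past turnout count
    rng.integers(0, 2, n),     # registered with the candidate's party (0/1)
]).astype(float)

# Synthetic labels standing in for "voted last time" and "says they support us".
turnout = (rng.random(n) < 0.2 + 0.15 * X[:, 1] / 4).astype(int)
support = (rng.random(n) < 0.3 + 0.4 * X[:, 2]).astype(int)

turnout_model = LogisticRegression(max_iter=1000).fit(X, turnout)
support_model = LogisticRegression(max_iter=1000).fit(X, support)

# Per-person scores: probability of voting, probability of supporting the
# candidate, and a crude score a field operation might rank contacts by.
p_vote = turnout_model.predict_proba(X)[:, 1]
p_support = support_model.predict_proba(X)[:, 1]
priority_score = p_vote * p_support
print(priority_score[:5])
```

In practice, as the speakers stressed, the labels come from voter files and canvassing IDs, the estimates remain probabilistic, and the gains are marginal improvements in resource allocation rather than anything magical.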

It will also, as Carol Davidsen (Director, Integration and Media Targeting, Obama for America) in particular discussed, increasingly work across platforms and increasingly focus on evaluating the impact of the massive amounts of money spent on television advertising, an area that several campaign staffers and consultants underlined remains the biggest line item in campaign budgets, and also the least accountable and least data-based activity. Data from set-top boxes, the rise of IPTV, etc may change that in the future. Integration is the watchword here.

Before getting carried away in discussions of new digital sources of data from television, from social media, from cookies across the web, etc, it is important to remember, as Eitan Hersh made clear in his very good talk, that “campaign targeting is largely a function of public data availability” (and of course what Alex Lundry called the “solid gold” of volunteer- or paid canvassing/phonebank-generated IDs, the “who do you lean towards voting for”-type questions asked at the door and over the phone by field campaigns).

In terms of public data availability there are interesting cross-national differences between the US and, for example, the European Union, which has adopted a “comprehensive” approach to data protection that privileges privacy, and where much of the information that enables “big data”-based microtargeting in American politics is simply not available. (Eitan was foreshadowing his forthcoming book Hacking the Electorate, which I’m very much looking forward to.)

The reliance on public records makes the use of data by political campaigns very susceptible to regulation, challenges the stance of some speakers—that the rise of these tools is “a force of nature” that we simply have to adapt to—and makes clear that there are political choices to be made here.

In summary, the conference (tons of tweets under the hashtag #datapolitics with other people’s thoughts and observations) provided much information about what campaigns are actually doing today and what the main contemporary legal and political issues surrounding these practices are, but also underlined that

(1) fully articulated, cutting edge big-data modeling remains far more widespread and developed in the corporate world and parts of government than in the political world and

(2) the sophistication of such modeling is obviously linked to the resources (time, money, expertise) available to individual campaigns, so the 2012 Obama campaign was ahead of the 2012 Romney campaign, all other US campaigns are far less sophisticated than either, and most campaigns in most other countries (where there is less money in electoral politics and often less public data available) are even less sophisticated.

Is democracy then being “data-crunched”? There was no consensus in the room. Big data and increasingly sophisticated analysis help campaigns allocate their resources more effectively and have enabled them to expand and refine their persuasion efforts but also–importantly–their mobilization efforts, arguably increasing both volunteer participation and voter turnout. But they have also increased the risk of electoral red-lining, of more fragmented public debates and segmented campaign communications, and have strengthened the hand of resource-rich incumbents relative to those with fewer resources (including insurgent campaigns as well as individual citizens).

UPDATE–nice piece on the NYT bits blog summarizing a talk by Kate Crawford outlining “six myths of Big Data”.

2012 Midwest Political Science Association round-up

Earlier this month, I attended the 2012 Annual Meeting of the Midwest Political Science Association (one of my fixtures, having attended it more or less every year for five years or so).

A trio of highlights–one thing I did, a couple of things I attended, and one thing I did not attend but later caught up on.

First, doing–I had the pleasure of presenting some work in progress by Cristian Vaccari and myself, in which we ask “What Drives Politicians’ Online Popularity?” (the paper opens as a .doc here), on the basis of the same dataset underlying this previous paper. The panel, entitled “Campaigns, Elections & Technology”, was one of those rare conference panels where both the line-up of presenters and the discussion itself had actual intellectual coherence. Ben Epstein, Sounman Hong, and Christine B. Williams and Jeff Gulati all presented interesting papers. Betsy Sinclair did a great job as our respondent, and there was a good discussion with people in the audience afterwards, including Dave Karpf, Kevin Wallsten and others.

Second, attending–A couple of other panels I enjoyed were (1) “Mass Media and the Policy Process”, where John Lovett and Frank Baumgartner presented a very strong paper asking when there is a single media agenda, analyzing data over time, across issues, and between different outlets to show how media attention goes in and out of focus, and (2) “Congressional Campaign Advertising”, where a strong line-up examined various forms of strategic positioning vis-a-vis party brands, and the ways in which candidates and campaigns think about these choices and execute them.

Third, not doing, but catching up–I missed the presentation of a very interesting paper on “Career Concerns and the Behavior of Political Consultants in Congressional Elections” by Gregory J. Martin and Zachary Peskowitz from Stanford, but the paper can be downloaded here and it is a really neat piece of work that helps advance our understanding of political consultants and the work they do.

The Absence of Americanization?

When Europeans concerned with developments in the media talk about “Americanization”–as Lord Puttnam does in this old story from the Guardian–they are usually lamenting some development or other.

Tomorrow, I’ll be presenting a paper at the Future of Journalism conference in Cardiff arguing that, when it comes to market structures and media regulation (rather than, say, professional norms or forms and formats of content), these fears are overblown, and that we have, in fact, not seen convergence on an American-style media model over the last ten years. This is not to suggest that there is nothing to worry about, only that the notion (or rhetorical trope) of “Americanization” is of little use in terms of understanding our predicament.

The abstract is below–comments and feedback welcome, this is work in progress.

The Absence of Americanisation—media systems development in six developed democracies, 2000-2009

By Rasmus Kleis Nielsen (University of Oxford)

“Americanisation” is one of the most frequently used and mis-used terms in discussions of international media developments, a supposed trend much feared by Europeans who are (sometimes justifiably) proud of the distinct qualities of their media systems. In this paper, I present a comparative institutional analysis drawing on media and communications studies (Hallin/Mancini 2004), political science (Hall/Soskice 2001) and sociology (Campbell/Pedersen 2001) and based on data on developments in media markets, media use, and media regulation in six developed democracies (the US, the UK, France, Italy, Germany, and Finland) from 2000 to 2009. I argue that, despite frequent predictions of progressive “system convergence” (Humphreys 1996; Hallin/Mancini 2004; Hardy 2008), the last decade has been characterized by an “absence of Americanisation” of the news institutions in the five European countries considered. National institutional differences have remained persistent in a time of otherwise profound change. This finding is of considerable importance for understanding journalism and its role in democracy, since a growing body of research suggests that “liberal” (market-dominated) media systems like the American one increase the information gap between the advantaged and the disadvantaged, are associated with lower electoral turnout, and may lead large parts of the population to tune out of public life. The finding also has theoretical implications, since the supposed drivers of system convergence—commercialisation and technological innovation—have played a very prominent role during the period studied, suggesting we need to rethink the role of economic and technological factors (and their interplay with other variables) in media system developments.

Internet and politics research, what next?

Just back from the European Consortium for Political Research conference in Reykjavik, where the internet and politics standing group, through the good offices of Anastasia Kavada and Andrea Calderaro, offered a string of interesting papers and generated good discussions. The incredibly industrious Axel Bruns live-blogged many of the presentations here.

Looking at the good quality of work on theorizing “connective action”, on new forms of political communication mixing “old” and “new” media, and on ten-year efforts to closely map the changing (and variable) impact of new media use on various forms of political participation, it is clear we’ve come a long way in our understanding of the connections between the internet and politics.

Some areas that, on the basis of what I saw at this conference and what I’ve seen at others over the summer (IAMCR, ICA), I think would merit more attention from researchers in the future are—

  • The use of new ICTs by political actors beyond electoral parties and social movements—a bit of work has been done on various forms of interest groups, but this is a wide open field, and one that deserves much more scrutiny than it has received so far (my friend Dave Karpf has a book forthcoming on this, focused on the U.S., comparative work would make a great supplement to it).
  • The implications that new ICTs have for political practice and participation outside of electoral campaigns and social movement mobilization—we have a growing body of solid, cross-country work on campaigns etc, but less work has been done on how candidates, citizens, and organizations use internet tools in “peacetime”, so to speak.
  • The ICTs themselves—there is a bit of a tendency (and my own paper, co-authored with Cristian Vaccari, is an example of this) to focus on publicly manifest tools like campaign websites, social media, and perhaps email communications. This is important. But there is a whole other side to be examined, which is the story of the adoption of tools, of development, innovation, of trials and errors as political actors try to leverage the potential of tools that have to be mastered in practice and aren’t necessarily “just there” but have to be furnished first. (As it happens, another friend, Daniel Kreiss, has a book forthcoming on this—again, comparative work would be a great complement to his work on the Democratic Party in the U.S.)