Audience Research at The Atlantic: How We Use It — and What We Don’t Expect It To Do for Us

Emily Goligoski, Senior Director, Audience Research

User experience research represents a mindset: a conviction that our work is stronger when it’s informed by the people who will ultimately use it.

It’s also a regular practice and a set of qualitative methods to fill in gaps in organizations’ own understanding. And it’s expanded greatly as a discipline across industries in the past five years. “UXR” is now widely employed by organizations ranging from consumer packaged goods manufacturers to social networks to publishers.

My colleagues at The Atlantic focus on reporting incredible stories, creating new ways to experience our journalism, and driving our consumer and advertising businesses. My work in research considers the role our coverage plays in readers’ lives: getting to know discerning readers and reporting insights back to the organization. Together with collaborators, I spend the equivalent of months per year learning about the news needs, digital preferences, and hopes for independent journalism of hundreds of smart people around the world.

A major portion of our time is spent getting to meet current, potential, and former supporters of our journalism. We meet them individually and in small groups. We primarily ask research participants questions alongside our colleagues in product, design, editorial, customer marketing, and engineering.

“We see over and over that our work is stronger when we listen to the people who use our journalism.”

As the publication’s first in-house researcher, I want to share lessons on what audience research is and isn’t useful for, in my experience. When should you put it to work for the best outcomes? When are other approaches more likely to serve your team? You’ll find some theory here, yes, but also examples of how we’ve applied these practices in our recent research with readers.

Why we talk to & study people (please don’t call them users)

The qualitative methods we use most regularly are 1×1 in-depth interviews, small group conversations, usability tests, and surveys. They are good for:

  • Gathering clarifying insights as we ask the question: Who are our readers, listeners, and viewers — and how can we better serve them? Sometimes the answers complicate our existing understanding of who spends time and money with us and why. Research results can be less convenient than we hope when we set about to answer questions like How can we become a trusted part of a reader’s week? But they help us create news products and experiences that people actually want.
  • Calibrating whom we should listen to and why, especially when it comes to unsolicited feedback. This means being purposeful about improving the ratio of signal to noise. Reporters and editors receive plenty of inbound communications from Twitter and email alone. When we counterbalance that feedback by soliciting the opinions of people who may not be otherwise inclined to reach out, we often gather a more nuanced set of reactions than those with extreme positive or negative points of view.
  • Being proactive. We want to know how people interact with our editorial experiences as they live in the world. But we also want to identify unmet needs (what researchers call “discovery”) for future coverage and product development.

Audience research can also be useful when exploring early concepts and iterating on existing ones. But it has a shelf life: we shouldn’t treat the results as fact forever, because our audiences’ habits and needs change over time and with their life circumstances. And well before insights go stale, there are other limitations, too.

When audience research may not be your answer

There are a few instances when you may be better served by not taking your questions to audiences first. At a minimum, it’s wise to have realistic expectations about the limitations of audience research here:

  • When you’re having trouble making cross-department decisions. Tie-breaking isn’t the most thoughtful use of the time, care, and money that audience research entails. My designer colleague Quinn Ryan shared this idea, and I think it’s spot on.
“When should you consider another method? When you’re having trouble making cross-department decisions or know you aren’t actually going to use the insights.”

  • When your organization wants to be perceived as one that listens, but actually isn’t going to use the findings. User experience research for purposes of show is a waste of your current, prospective, and past audiences’ valuable time.
  • When you want to solicit positive feedback to make yourselves feel good. This is just one of the many biases we all have when we set out to uncover what our readers think about our work! But we’re looking for real reactions, not book blurbs. It’s hard to hear critical feedback, but it’s necessary for making work that serves more people and serves them well. Conducting research that begins with a bias (even if that bias is “we have a killer idea and great execution”) is worse than not going into the field at all.
  • When you want to choose between different calls to action. Because our research team’s studies often involve small numbers of people (as many as 20 individuals for interviews, and 1,000 or more respondents for surveys) responding to detailed inquiries, it’s useful to know when A/B testing is a better fit for answering your team’s questions. If I ask 12 people whether they like the first or second copy option, the information that comes back is too subjective, and the sample size too small, to be useful.

Our audience research principles & practices at The Atlantic

When I came to The Atlantic nine months ago, my colleagues and I wanted to clarify our research organization’s reason for being as a new strategic discipline inside the company. I drafted a set of research practice principles that were useful to bring to early conversations with new colleagues. They continue to be helpful when we bring new teammates onto projects that involve research. These assumptions and what they look like in practice are:

1. Human-centered.

We exist to advocate for readers and to help The Atlantic grow. Research is intended to be actionable. Many of our sample sizes are too small to reach statistical significance. Still, we are rigorous in seeking external signals.

Like the wider organization, we aim to be writerly, thoughtful, clear, and self-aware in our interactions. When we first talked to a limited number of Atlantic readers about the concept for a large-scale project about conspiracy theories, six months before publishing it, they were hopeful about the project that would become Shadowland — but somewhat skeptical that its ambition could be realized “with an Atlantic eye.”

They told us it was important to nuance the arguments and help explain why, not just how, these theories convince and captivate people. They were clear that we should “be wary of confusing complicity and conspiracy” (or, separate what is an effect of capitalism from that which is more sinister): guidance that was shared with the project’s editor, participating reporters and designers, and the product and engineering teams.

2. Multi-methodological.

There are many approaches we can use — and some we shouldn’t, depending on what we are looking to learn. We plan the most appropriate research means based on the project, timeframe, and collaborating teams.

In the discovery phase of a recent article redesign project, I worked with a multidisciplinary team of staff that carried out three related streams of research. Over two months we conducted market research (including hosting interviews with designers, developers, and product managers at peer organizations and auditing competitors’ sites); reader interviews (to understand points of frustration with our current article pages); and internal stakeholder conversations (to learn the current limitations and future goals of people who use our publishing tools). Reader interviews alone would have provided an incomplete picture of a project that needs to account for the needs of people inside and outside the organization.

3. Plugged in.

Audience research isn’t an island. Nor is it primarily a usability testing operation.* It’s a strategic partner that advances many of our organizational priorities, including publishing work of consistently high quality and getting it to more people.

“Audience research is a strategic partner that advances many of our organizational priorities, including publishing work of consistently high quality and getting it to more people.”

With our publication’s November 2019 print and digital redesign — the first in years, and comprehensive across all of the places where individuals see our brand — we anticipated significant feedback from subscribers and non-subscribers. We also relaunched our iOS app with a new home screen that reads like a newsletter/homepage/magazine hybrid. It’s highly visual, published as crafted editions, and provides a jumping-off point for The Atlantic’s reporting and ideas.
