POWER USER CONTEXTUAL INQUIRIES
Contextual inquiries and in-depth interviews with Covid Act Now's most sophisticated users.
Role: Lead Researcher
Company: Act Now Project (project: Covid Act Now)
Timeline: ~1 Month
Research Method: Contextual Inquiries & In-Depth Interviews
Research Type: Exploratory, Evaluative
CONTEXT
Covid Act Now built local data models and products to help governments, private organizations, and tens of millions of people make informed, science-based decisions on how to stay safe during the COVID pandemic.
By this point in 2021, I had completed exploratory research projects among both first-time users and consistent, general users of our COVID-19 data website, covidactnow.org. We did not, however, have a good understanding of how our power users utilized our tools and data to make decisions for their respective groups (e.g., schools, workforces, communities).
GOALS
Generative Goals
Understand how power users and decision-makers use our tools.
Design/Product Goals
Understand what tools and information power users rely on for decision-making.
Identify any pain points or gaps where our existing tools are not meeting user needs.
METHODS
Recruitment
Power users (N=11) were recruited via our inbox: we put out a call for participation to representatives of organizations who had reached out to us over email in the past, whom I could easily identify from the email tracking log built during my inbox analysis work. These participants included local government officials, school administrators, community organizers, healthcare professionals, and more.
Because this is not a representative sample, we used these insights directionally only.
Semi-Structured Interview
I conducted semi-structured interviews in which I asked participants about the kinds of decisions they need to make, the kinds of organizations they represent, and their decision-making processes with respect to COVID-19.
Contextual Inquiry
I then had these participants open our website and show us their decision-making process: which tools they use and how, the pages they visit, the statistics they reference, and so on.
"Walk us through what you typically do when you come to this website. Think out loud and explain as you go."
As they did so, I probed further to understand their motivations for each action. I also made space for users to express product requests and specific pain points.
ANALYSIS
Data Preparation
To prepare my data for analysis, I transcribed the session recordings with Otter.ai.
Coding
I used this transcription text along with the videos themselves to code the data based on participants' actions and where they occurred (homepage, location page, etc.), including the following (see the sketch after this list):
Navigation:
Did they navigate to their desired location page via the homepage map, via search, via a saved/bookmarked link, or another way?
Tools and Data used:
For how many locations are they referencing COVID-19 data? At what level (county, state)?
Which data points are they referencing (e.g., infection rate, vulnerability level, risk level)? What are their thresholds (i.e., when is the infection rate "too high")?
Where are they accessing these data (e.g., map, location pages, Compare Table, Trends Chart, API, etc.)?
Are they using any other dashboards/tools to supplement? Why/why not?
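For illustration, here is a minimal sketch of how one coded observation might be represented; the field names, categories, and example values below are hypothetical and do not reproduce our actual codebook.

# Illustrative only: one possible structure for a coded observation from a session.
# All field names and categories here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodedObservation:
    participant_id: str             # e.g., "P07"
    page: str                       # "homepage", "location page", "compare table", ...
    action: str                     # "navigation", "data reference", "tool use", ...
    detail: str                     # what the participant did on that page
    metric: Optional[str] = None    # e.g., "infection rate"
    threshold: Optional[str] = None # how the participant judges the metric
    quote: Optional[str] = None     # verbatim excerpt from the transcript

# Example coded segment from a hypothetical session:
obs = CodedObservation(
    participant_id="P07",
    page="location page",
    action="data reference",
    detail="checked county infection rate before weekly staffing decision",
    metric="infection rate",
    threshold="considers anything above 1.0 'too high'",
    quote="If it's above one, we go back to remote meetings.",
)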
Trend-spotting
From there, I identified patterns in the types of decisions participants have to make, the tools and data they use to make those decisions, time spent on the site, navigation behavior, and pain points.
KEY INSIGHTS AND HOW WE ACTED ON THEM
We learned:
Many of these power users were spending substantial time and effort manually building their own spreadsheets daily or weekly by visiting many location pages and copying over the data. They did not know we offered a customizable API for exactly this purpose and had not discovered it on their own until I showed it to them, indicating a major discoverability gap (see the sketch below).
We changed:
We added the API to our menu's featured items to increase discoverability and highlight what it offers.
We saw significantly more traffic to the API after this change was made.
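To illustrate the workflow these participants were duplicating by hand, below is a minimal sketch of pulling a few counties' metrics from the Covid Act Now API and writing them to a spreadsheet. It assumes v2 county endpoints of the form /v2/county/{fips}.json, an apiKey query parameter, and field names such as metrics.infectionRate; the exact paths, keys, and fields should be checked against the API documentation.

# Minimal sketch: fetch a few counties' metrics from the Covid Act Now API
# instead of copying them off location pages by hand.
# Assumptions (verify against the API docs): v2 endpoints of the form
# /v2/county/{fips}.json, an apiKey query parameter, and fields like
# metrics.infectionRate and riskLevels.overall.
import csv
import requests

API_KEY = "YOUR_API_KEY"                    # obtained via the API signup
COUNTY_FIPS = ["06037", "36061", "17031"]   # counties a power user might track

rows = []
for fips in COUNTY_FIPS:
    url = f"https://api.covidactnow.org/v2/county/{fips}.json"
    data = requests.get(url, params={"apiKey": API_KEY}, timeout=30).json()
    rows.append({
        "county": data.get("county"),
        "state": data.get("state"),
        "infection_rate": data.get("metrics", {}).get("infectionRate"),
        "risk_level": data.get("riskLevels", {}).get("overall"),
    })

# Write the same kind of spreadsheet participants were building manually.
with open("covid_tracking.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)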
We learned:
Some of these users were unaware of our Trends chart (see below), which allows users to compare trend lines of multiple locations for various COVID-19 metrics.
The users who had tried it wanted more customization options, including additional metrics on the chart and the ability to change the time window.
We changed:
We added a Trends chart to the homepage (rather than only on location pages) to increase discoverability, and we added more customization options so users can compare trend lines for more metrics and adjust the time window shown.
Engagement with the Trends chart increased significantly after these changes, indicating their effectiveness.
Other learnings for our user knowledge base:
Our key value-add for power users is the ability to compare data across locations.
They like that our data are standardized across locations, and that we have many tools (e.g., Compare table) that make comparison straightforward.
While they each use different data to make COVID-related decisions, they see data as actionable only when compared against other locations of interest, not in isolation (e.g., they see conditions as "dangerous" when their county is X% worse than a neighboring county, not necessarily when their infection rate hits some absolute value Y).
They value having access to trustworthy data and methodology and feel ours meets their standards. Some cross-reference other dashboards (often the New York Times or Johns Hopkins), but in general they trust our data on its own and prefer not to spend time using multiple sources.