COVID-19 DECISION-MAKERS
Contextual inquiries and in-depth interviews with Covid Act Now's most sophisticated users: local government officials, healthcare workers, school administrators, business owners, and more.
Role: Lead Researcher
Company: Act Now Project (project: Covid Act Now)
Timeline: ~1 Month
Research Method: Contextual Inquiries & In-Depth Interviews
Research Type: Exploratory, Evaluative
CONTEXT
Covid Act Now built local data models and products to help governments, private organizations, and tens of millions of people make informed, science-based decisions on how to stay safe during the COVID pandemic.
By this point in 2021, I had completed exploratory research projects among both first-time users and consistent, general users of our COVID-19 data website. We did not, however, have a good understanding of how our more in-depth users utilized our tools and data to make decisions for their respective groups (e.g., schools, workforces, communities).
GOALS
Generative Goals
Understand how power users and on-the-ground decision-makers use our tools.
Design/Product Goals
Understand what tools and information power users rely on for decision-making.
Identify any pain points or gaps where our existing tools are not meeting user needs.
METHODS
Recruitment
Power users (N=11) were recruited via our email inbox: we put out a call for participation to representatives of organizations who had reached out to us over email in the past, easily identified by referencing the email tracking log from my earlier inbox analysis work. These participants included local government officials, school administrators, community organizers, healthcare professionals, small business owners, and more.
Because this was not a representative sample, we treated these insights as directional only.
Semi-Structured Interview
I conducted in-depth, semi-structured interviews, asking participants about the organizations they represent, the kinds of decisions they need to make, and their decision-making processes with respect to COVID-19, probing deeper where appropriate.
This allowed me to collect standardized insights across participants while also leaving plenty of room for discovery since we did not have foundational data on this group yet.
Contextual Inquiry
After the interview, I had participants open our website and show us their decision-making process: which tools they use and how, the pages they visit, the statistics they reference, et cetera.
"Walk us through what you typically do when you come to this website. Think out loud and explain as you go."
As they did so, I probed further to understand their motivations for each action. I also made space for users to express product requests and specific pain points.
ANALYSIS
Data Preparation
To prepare my data for analysis, I used otter.ai software to transcribe the recordings of the sessions.
Coding
I used the transcription text along with the session videos to code the data based on particular actions and where they occurred (homepage, location page, etc.), including the following (a rough sketch of what one coded observation might look like appears after this list):
Navigation:
Did they navigate to their desired location page via the homepage map, via search, via a saved/bookmarked link, or another way?
Tools and Data used:
For how many locations are they referencing COVID-19 data? At what level (county, state)?
Which data points are they referencing (e.g., infection rate, vulnerability level, risk level)? What are their thresholds (i.e., when is the infection rate "too high")?
Where are they accessing these data (e.g., map, location pages, Compare Table, Trends Chart, API, etc.)?
Are they using any other dashboards/tools to supplement? Why/why not?
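To make the coding scheme concrete, here is a minimal sketch of how one coded observation could be structured. The field names and example values are hypothetical illustrations, not the actual codebook.

```python
# Illustrative only: a simplified structure for one coded observation.
# Field names and values are hypothetical, not the actual codebook.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedObservation:
    participant_id: str          # anonymized participant label
    page: str                    # where the action occurred: "homepage", "location page", ...
    navigation: str              # "homepage map", "search", "bookmark", "other"
    locations_referenced: int    # how many locations they checked
    level: str                   # "county" or "state"
    metrics: List[str] = field(default_factory=list)   # e.g., ["infection rate", "risk level"]
    tools: List[str] = field(default_factory=list)     # e.g., ["Compare Table", "Trends Chart", "API"]
    supplemental_sources: List[str] = field(default_factory=list)
    pain_points: List[str] = field(default_factory=list)

# Example record for a hypothetical participant:
obs = CodedObservation(
    participant_id="P04",
    page="location page",
    navigation="bookmark",
    locations_referenced=3,
    level="county",
    metrics=["infection rate", "risk level"],
    tools=["Compare Table"],
    pain_points=["manually copies data into a spreadsheet"],
)
print(obs)
```

Structuring observations this way made it straightforward to tally patterns across participants in the trend-spotting step below.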
Trend-spotting
From there, I identified patterns in types of decisions they have to make, the tools and data used to make those decisions, time spent on the site, navigation patterns, and pain points.
REPORTING
I created an in-depth report deck** summarizing the study, the participants, the key learnings, and my recommendations for next steps, which I presented at a company-wide all-hands meeting.
Following this presentation, I met with product, design, engineering, and partnerships leads to discuss the findings more deeply and create a plan for action together. See below for more details on how we implemented the insights.
KEY INSIGHTS AND HOW WE ACTED ON THEM
We learned:
Decision-makers looked to similar organizations for help defining their own COVID-19 strategies, and sometimes they struggled to find examples and use cases.
We changed:
We used some of these interviews to create case studies for publication on our website, in the hopes that others could learn from their use cases. We shared these widely across our social media channels and email newsletter.
We learned:
Many of these power users were spending a lot of time and effort manually creating their own spreadsheets daily or weekly by visiting many location pages and copying over the data points one-by-one. They did not know we offered a customizable API for this purpose and had not found it themselves until I showed it to them, indicating a major lack of discoverability.
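To illustrate the gap, here is a minimal sketch of the kind of automated pull that could replace that manual copying. It assumes the general shape of the public Covid Act Now v2 county endpoint and metric field names (e.g., metrics.infectionRate); treat the exact URL, parameters, and fields as assumptions rather than official documentation.

```python
# Minimal sketch: pulling a few counties' metrics in one pass instead of
# copying values from location pages by hand. The endpoint shape, query
# parameter, and field names are assumptions based on the public
# Covid Act Now v2 API, not an official client.
import csv
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; the real API requires a registered key
FIPS_CODES = ["06075", "06001", "06085"]  # example county FIPS codes

rows = []
for fips in FIPS_CODES:
    url = f"https://api.covidactnow.org/v2/county/{fips}.json"
    data = requests.get(url, params={"apiKey": API_KEY}, timeout=30).json()
    rows.append({
        "county": data.get("county"),
        "state": data.get("state"),
        "infection_rate": data.get("metrics", {}).get("infectionRate"),
        "risk_level": data.get("riskLevels", {}).get("overall"),
    })

# Write the same kind of table participants were assembling by hand.
with open("county_metrics.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```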
We changed:
We added the API to our menu's featured items to increase discoverability and highlight what it offers.
We saw significantly more traffic to the API after this change was made, and received positive feedback in our partnerships channels.
We learned:
Some of the participants were unaware of our Trends chart (see below), which allows users to compare trend lines of multiple locations for various COVID-19 metrics. Once they tried it, they indicated it was highly useful.
Among the users who had tried it, they expressed desire for more customization options, including adding more metrics to the chart and adding the ability to change the time window.
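For context, the sketch below approximates what the Trends chart does: overlaying one metric's trend lines for multiple locations over a chosen time window. The timeseries endpoint and field names are assumptions, as above; the real Trends chart is an interactive website feature, not this script.

```python
# Rough approximation of the Trends chart: overlay one metric's trend
# lines for several locations over a chosen time window. The timeseries
# endpoint and field names are assumptions, as in the sketch above.
from datetime import date, timedelta
import matplotlib.pyplot as plt
import requests

API_KEY = "YOUR_API_KEY"          # placeholder
COUNTIES = {"06075": "San Francisco", "06001": "Alameda"}  # example FIPS -> label
METRIC = "infectionRate"          # the metric to compare
WINDOW_DAYS = 90                  # the customizable time window

cutoff = (date.today() - timedelta(days=WINDOW_DAYS)).isoformat()

for fips, label in COUNTIES.items():
    url = f"https://api.covidactnow.org/v2/county/{fips}.timeseries.json"
    data = requests.get(url, params={"apiKey": API_KEY}, timeout=30).json()
    points = [p for p in data.get("metricsTimeseries", [])
              if p.get("date", "") >= cutoff and p.get(METRIC) is not None]
    plt.plot([p["date"] for p in points], [p[METRIC] for p in points], label=label)

plt.legend()
plt.title(f"{METRIC} over the last {WINDOW_DAYS} days")
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()
```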
We changed:
We added a Trends chart tool to the homepage (previously it appeared only on location pages) to increase discoverability. Engagement with the Trends charts increased significantly after these changes, indicating they were effective.
We also added more customization options to the Trends chart tool, allowing users to compare trend lines for more metrics and to customize the time window shown. We received positive feedback on this in our partnerships channels once it went live.
Other learnings for our user knowledge base:
Our key value-add for power users is the ability to compare data across different locations.
While they each use different data to make COVID-related decisions, they see data as actionable only when compared across locations of interest, not in their own right (e.g., they consider conditions "dangerous" when their county is X% worse than a neighboring county, not necessarily when their infection rate hits some absolute value Y).
They like that our data are standardized across locations, and that we have many tools (e.g., Compare table) that make comparison straightforward.
They also value having access to trustworthy data and methodology and feel ours meets their standards. Some cross-reference with other dashboards (often the New York Times or Johns Hopkins), but in general they trust our data standalone and prefer not to have to spend time using multiple sources.
**Please note: Due to confidentiality requirements, I am unable to share more specific details or internal documents (e.g., exact research plans, readout decks, data graphics) at this time.