By Matt Phillion
A global pandemic can derail even the best of plans, but when those plans include face-to-face, in-person field visits, a sudden worldwide shift to remote work can make things all but impossible.
Shannon Ford, senior manager of human-centered design with Alluma, recently spoke at Health Experience Design 2021 to discuss how her team used technology and adaptability to conduct field visits when in-person observation became untenable.
The field visits were part of in-context observation, shadowing, and interviews, all meant to inform the design of an application and enrollment process for a healthcare coverage program.
“The program is designed to provide primary coverage to adults living in 35 California counties,” said Ford. It addresses a patient population of adults who earn too much to qualify for Medicaid, but don’t make enough to afford healthcare coverage through the ACA marketplace—in essence, providing a stopgap between Medicaid and other coverage opportunities.
The program relies on assisters in clinics and other settings who help with applications and renewals, partnering with patients to find the best program for their needs. The goal is to get patients' care at the clinic covered financially, whether through Medicaid, SNAP, or even the clinic's own programs.
“These assisters are incredibly dedicated and bend over backward … to make sure these people get the care they need,” said Ford.
The original plan was to conduct field visits—shadowing assisters and then conducting semi-structured interviews with assisters, supervisors, the program managers who decide which coverage to support, and even the front desk staff—to get a feel for everyone who works with these patients along the way.
“We wanted a full picture of anyone touching that patient, helping them connect to these programs,” said Ford.
The shadowing was mostly observational, such as watching a participant shop for an apartment or following a family through a morning or evening routine. The observer might interrupt with a clarifying question but otherwise would hang back. The goal was to paint a picture of how the assisters interact with spaces, whom they collaborate with, and what tools they need to do their jobs. The semi-structured interviews then let the researchers question what they had seen, something a purely observational setting doesn't allow. Each interview had some predefined topics but remained open ended, allowing participants to bring up topics the researchers might not otherwise know to raise.
“The interviews are inductive in nature, a bottom-up reasoning process. We don’t have a hypothesis we’re trying to prove but rather trying to find a hypothesis based on the interview,” said Ford.
Vitally, the interviews needed to take place in context, such as in a home or workplace. “[In-context interviews are] important because if you’re talking to someone, there’s so much you do and use that have become habit that you don’t even think about them anymore. As a researcher, I see things in the environment you might not think to tell me about out of context,” said Ford.
The plan was in place and ready to go, but “by the time the ink was dry and everyone got board approval, we ended up in COVID,” said Ford. “We wanted to go into the field last summer, toward the end of July, and this took in-person clinic visits out of the picture. The assisters were working from home as well.”
The situation could have presented an impossibility. “There really was no opportunity to go into the clinics, and not much happening there anyway. But we started thinking: What if we did it virtually?”
Changing to virtual
Remote research was not new, Ford noted: It had been used for usability testing, small-group work, and more. But for field visits, the whole point was to be there in the room: to watch the assisters and clinics interacting, see the movement of people and paperwork through the office, and observe processes firsthand. “We were a little uneasy with the idea and how we could accomplish that remotely,” said Ford.
They began with video conferences and screen sharing, or working with assisters and putting the clients on speakerphone. The clients were told ahead of time that there would be researchers shadowing the appointment, and they were reassured that it was up to them to grant permission for the researchers to be there.
“We went through the typical prep, sending a three-page document to our clinics explaining who we wanted to talk to and what they could expect,” said Ford. “We’d then get on a conference call with the coordinator, talk about the technology that would be involved, which staff we should talk to. We’d then set up sessions and create meetings, and the invites would include instructions on how to use the application we were using.”
The process, which evolved over time, had both challenges and successes. Ford started her conference presentation with a look at the former. “There were two kinds of challenges: one around tech … and one around context,” she said. Neither came as a surprise.
The first technology challenge involved the chosen research platform. The team asked clinics for permission to record the shadowing sessions but could not obtain it. Then, during a validation dry run, they discovered the platform could not share a screen without also recording, forcing them to switch to a different technology.
“We’d also been planning on using [the platform] for the interviews,” said Ford. “But despite talking about tech during our initial site meetings, we ran into in-the-moment issues where we couldn’t connect through computer audio, and the assisters and clients had to call in on a phone line.”
The app had no easy way to support phone dial-in, so they made a “quick punt” to Webex®, which let them set up audio and video. But when a participant didn’t have video enabled, the screen display wasn’t optimal. Still, “we rolled with it as best we could with the tools we had,” said Ford.
In terms of context challenges, in most cases observers could use the webcam to see the room for context or ask the participants to hold up items on camera. “We could actually see those things pretty well in those cases, and even for those who were using the phone, though the image was very small, we could still see the context,” said Ford. While not ideal, it was workable, and they were able to obtain important data for the research with various on-the-fly improvisations.
The successes, like the challenges, were twofold.
First, “we had an interesting view of clinic operations during COVID-19 we wouldn’t have gotten if we didn’t decide to do our research this way,” said Ford. They were able to observe the clinic assisters sitting with patients, finishing applications, putting data in, collecting validation documents, and sending that bundle of documents into the client system.
Second, this observation yielded key insights that drove functionality in their system.
“On the client side, we have an eligibility worker who looks over the documentation to make sure it’s in order, then approves or denies it,” said Ford. “The eligibility worker has about 30 days to make that turnaround. Waiting for those results was a bigger source of frustration for the assisters than the application itself. We came back with a clear imperative to make that determination process transparent, so they could see which applications were being worked on and how long they had before the time was up: basically, when they should start calling the eligibility worker to ask what’s happening.”
The speed to decision was more important than the speed to application, Ford explained.
“My organization was focused on how to make the application process go as quickly as possible: asking for the least amount of information, making it easy to attach documents,” she added. “We were paying less attention to making the turnaround time go faster.”
Meanwhile, COVID-19 was making collecting the verification documents even more painful than usual, and “normally it’s very painful to document,” she said. One assister explained that clients were texting her their documents, but her work phone was a flip phone, so she had to forward each message to her personal phone and then forward it again to email. COVID-19 was forcing assisters to cobble together a system to get things done.
Takeaways from moving to virtual
Any move to a remote solution requires a look at the technology involved. “We usually give time and attention to tech, but in these times, [it’s important to] give even more time and attention to tech,” said Ford.
Planning and dry runs go a long way, she said. Allow more time before every session for tech wrangling, pre-check meetings, and testing the setup to make sure your sound and video are ready to go. “And have backup plans if your technology doesn’t work in the moment,” said Ford.
Additionally, find other ways to observe contextual data. “During our planning, we thought about asking assisters to give photo tours of their workspaces,” said Ford. “We held off, but it would have been a fruitful way to get more data about the environment they are working in.”
This could be done through more pointed questions, as well. “Ask: ‘Tell me about your setup.’ ‘What is on your desk you use constantly?’ ” said Ford.
She also recommended using a purpose-built tool to document tool usage over the course of a workweek. “This makes it easier to push those tasks out and [can] help to organize visual data you get back,” said Ford.
Overall, the move to remote field visits has paid off.
“You can get value out of doing this work virtually,” said Ford. “I would still rather be in person, but I’m glad we did this. We have to be flexible and OK with less-than-perfect data.”
Matt Phillion is a freelance writer covering healthcare, cybersecurity, and more. He can be reached at firstname.lastname@example.org.