The Sherlock Holmes Dilemma: Where’s the Evidence?

Posted By: Elizabeth Woodcock

I had the great pleasure of leading a retreat in June to kick off an access initiative for a large, multispecialty health system. The executive leader presented the focus on access as a critical component of the system's strategic plan, the co-chairs enthusiastically described their roles, and the participants formulated an action plan with priorities. At the conclusion of the all-day retreat, as the participants began discussing how to get the departments on board with the new plan, one of the contributors asked a compelling question: "How do we even know this [access] is a problem?" Defining, measuring, and understanding the magnitude of the problem is essential to developing a solution. This evidence was critical to their engagement with the new initiative – and to that of the other scientists (i.e., physicians) in the organization.

Finding evidence of access problems isn't easy.

Patient satisfaction surveys have historically been the go-to resource for leaders engaged in improving the patient experience. Yet, these surveys may not contain evidence of poor access, because the only people who complete them are the ones who actually obtained access. Many argue that post-call surveys embedded in access centers offer better, real-time feedback, and many health systems are using them with success. Yet, even that evidence is biased: the feedback comes from people who located a phone number to call – and, subsequently, were able to reach an agent and complete the call during our business hours.

Metrics such as new patient lag time and the percentage of patients scheduled within 14 days have emerged as access performance indicators. These metrics – and others like them – are captivating but suffer from challenges similar to those of patient surveys.
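To make these two indicators concrete, here is a minimal sketch of how they might be computed. The record structure and field names are hypothetical, invented purely for illustration; real scheduling data would come from the practice management system.

```python
from datetime import date
from statistics import mean

# Hypothetical scheduling records: the date the appointment was requested
# and the date it actually took place. Field names are illustrative only.
appointments = [
    {"requested": date(2023, 6, 1), "scheduled_for": date(2023, 6, 9)},
    {"requested": date(2023, 6, 2), "scheduled_for": date(2023, 7, 3)},
    {"requested": date(2023, 6, 5), "scheduled_for": date(2023, 6, 15)},
]

# New patient lag time: days between the request and the appointment.
lags = [(a["scheduled_for"] - a["requested"]).days for a in appointments]
avg_lag = mean(lags)

# Percentage of patients scheduled within 14 days of their request.
within_14 = sum(1 for lag in lags if lag <= 14) / len(lags) * 100

print(f"Average lag: {avg_lag:.1f} days; within 14 days: {within_14:.0f}%")
```

Note what the denominator excludes: anyone who never got an appointment on the books at all – which is exactly the bias described above.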

Lists of patients waiting for services – their size and lag time – are perhaps the most compelling evidence of access opportunities. Yet, this approach is not ideal for understanding the problem's totality. Why? Even getting on the list is an entire workflow in and of itself. A patient may call for an appointment, but triage, records, forms, test results, referrals, etc. – or whatever is "required" by the system – often represent the initial step to gain entry onto the list. Even a simple form can be daunting for many: we typically offer it in one language (English) and often require an original signature that must be mailed or faxed; even worse, we tell the patient to send us records from another provider. These instructions are simple to provide but challenging for many to accomplish. Therefore, pulling data about the list may not reflect the full scope of the access challenge.

In the patient's mind, the wait begins the moment a professional or trusted source states, "You [or your child] need care" – not when the health system decides the patient is deemed appropriate for the list.

Qualitative research offers a convincing method of gathering evidence of these challenges. Patients who may have experienced problems gaining entry into the system – getting an appointment, that is – can be interviewed, brought into focus groups, or invited to serve on a patient/family advisory council. But again, this cohort offers a biased sample, because to be considered a "patient" at all, one must have successfully gained access.

These issues reflect inherent biases in research: sampling bias and non-response bias. The findings are not representative of the population we are analyzing; therefore, they cannot define the totality of the problem. The goal of research is to produce findings from which we can all learn, yet our existing measures of access may lack the validity to support generalizable knowledge. This is not to say that these data are unimportant, but it is essential to recognize that these biases exist. The fact that the data can't describe the problem holistically does not mean the problem doesn't exist. Like Sherlock Holmes's case of the dog that didn't bark, when something is not happening, it is extremely difficult to prove.
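A tiny simulation can make the selection bias tangible. The population size, access rate, and wait times below are made-up numbers, not a model of real data; the point is only that people who never obtain access generate no survey data at all.

```python
import random

random.seed(42)

# Hypothetical population: everyone needs care, but some never obtain
# an appointment. All probabilities and waits here are invented.
population = []
for _ in range(10_000):
    got_access = random.random() < 0.7          # 30% never get in
    wait_days = random.randint(3, 60) if got_access else None
    population.append((got_access, wait_days))

# What a post-visit survey can see: only those who obtained access.
surveyed = [wait for got, wait in population if got]
avg_surveyed_wait = sum(surveyed) / len(surveyed)

missed = sum(1 for got, _ in population if not got)

print(f"Average wait among survey respondents: {avg_surveyed_wait:.1f} days")
print(f"People invisible to the survey entirely: {missed} of {len(population)}")
```

No matter how carefully we analyze the respondents, the thousands who never got in are simply absent from the dataset – the dog that didn't bark.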

Researchers face similar dilemmas with other issues; maternal mortality is one example. By measuring the deaths that occur at the hospital, we gather evidence of the problem. However, it is difficult to understand the problem's totality because a cohort of pregnant women who perish never make it to the hospital.

Even if we could account for the biases in the surveys, lists, and interviews as they relate to patient access, we may be missing the bigger picture. Patients who can afford to battle the obstacles – with resources, time, and ability – get the appointments. Access is not only a problem; it is an urgent one. The people who can't get into our system, onto our lists, or onto our schedules are the ones who need us most. They represent our most vulnerable patients – the ones who need care but are deterred by the obstacles constructed by the social determinants of health and perpetuated by the workflows that we have built. These patients can't navigate our systems or push through the barriers of institutional inequity.

We can't hear their voices; we must be their voices.

As access leaders, we are responsible for recognizing the biases in the data. We cannot – and will never be able to – produce rigorous scientific evidence about the magnitude of the problem. By presenting the data that we do have – and acknowledging its limitations – we can converse and work collaboratively with scientists to address opportunities.

This article was originally posted on 10/24/2023 on the PAC website with limited distribution.