Patient-Centered Contraceptive Counseling (PCCC) scale reduction and face validity testing

Reducing an 11-item scale designed for research contexts to a 4-item scale for performance measurement

The PCCC is a shortened version of a scale, the Interpersonal Quality of Family Planning care (IQFP) scale, that PCRHP originally developed as a patient-reported outcome measure in the context of contraceptive research. The IQFP was designed to capture the three domains of quality contraceptive counseling found to be important to patients in qualitative research: interpersonal connection between provider and patient, adequate information, and decision support¹. The final 11 items included in the IQFP were selected from 17 initial items using factor analysis, and the resulting 11-item measure was found to be reliable, with content, construct, convergent, predictive, and discriminant validity².

To shorten the IQFP for use as a performance measure, we triangulated qualitative data addressing item importance and interpretability, as well as equivalence between English and Spanish, with quantitative data addressing reliability and validity (see Figure 1). This process led us to a 4-item scale that retains the validity and reliability of the 11-item IQFP.

Figure 1. Qualitative and quantitative data triangulation for item reduction of the initial IQFP to the PCCC


Systematic assessment of face validity of the 4-item scale

Modified Delphi Processes

We assessed face validity of the performance measure score with facility administrators, providers of contraceptive counseling, and patients. With administrators and providers, we assessed face validity by conducting two Modified Delphi Processes via e-mail (one with a group of 14 administrators and one with a group of 19 providers), with participants drawn from facilities across the country. Each Modified Delphi Process used two rounds of closed-ended and open-ended questions to collect feedback from each group on the performance measure, with the second round of questions reflecting feedback from the first in order to move towards consensus on face validity. Each Process asked participants to reflect on the acceptability of the items, whether they thought a dichotomized top-box score of the item responses would accurately reflect their performance, and the applicability and utility of a performance score to their work.

Each Modified Delphi Process resulted in consensus that the performance measure score was useful in differentiating high-quality from lower-quality care, and that the score would be useful to the work of administrators and providers. Among providers, 90% indicated that they would be likely to consider a provider receiving a higher score on this measure to be providing better care (giving a response of at least 7 on a scale from 1 to 9, from very unlikely to very likely); 92% of administrators gave a response of at least 7 on the same item. With regard to usefulness, 88% of providers and 93% of administrators agreed, based on a response of at least 7 on a scale of 1 to 9, that reporting the percentage of responses that were top-box scores would be understandable as an indicator of performance and meaningful for quality improvement.
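The two scoring rules described above can be sketched in a few lines. This is a minimal illustration with hypothetical data, not the project's scoring code: the response scale, the top-box value, and all numbers below are assumptions for demonstration only.

```python
# Minimal sketch (hypothetical data) of the two scoring rules described above:
# a dichotomized "top-box" score of item responses, and the Delphi consensus
# threshold of a rating of at least 7 on a 1-9 scale.

def top_box_rate(responses, top=5):
    """Share of responses at the top of the scale (here assumed to be 1-5)."""
    return sum(r == top for r in responses) / len(responses)

def consensus_rate(ratings, threshold=7):
    """Share of Delphi ratings at or above the consensus threshold (1-9 scale)."""
    return sum(r >= threshold for r in ratings) / len(ratings)

# Hypothetical item responses from eight patients on a 1-5 scale:
responses = [5, 5, 4, 5, 3, 5, 5, 4]
print(f"Top-box score: {top_box_rate(responses):.1%}")    # 62.5%

# Hypothetical Delphi ratings from ten panelists on a 1-9 scale:
ratings = [8, 9, 7, 6, 9, 8, 7, 7, 5, 9]
print(f"Consensus (>=7): {consensus_rate(ratings):.1%}")  # 80.0%
```

Dichotomizing at the top box, rather than averaging, is what makes the facility-level percentage directly interpretable as "share of patients reporting the best possible experience."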

Interviews and Focus Groups

We conducted interviews and focus groups with 43 patients from across the country to assess the face validity of the measure with this group. To obtain diverse representation in this qualitative sample, we recruited focus group and interview participants who had recently experienced contraceptive counseling, with the assistance of health care facilities serving diverse patient populations in three states (Texas, North Carolina, and California). We asked patients about the acceptability of the items and the utility of a dichotomized top-box score of item responses for their decision-making about their health care. We also assessed any differences in patient preference and response between paper and electronic versions of the survey.

Interviews and focus groups revealed that patients found the items and the performance measure score acceptable and useful. Eighty-eight percent of patients reported that a facility or provider having a higher score on the performance measure would make them more likely to choose that facility or provider for their care, as opposed to having no effect on their decision-making. Patients responded equivalently to the paper and electronic versions of the survey. This equivalence was further supported in data collection for reliability and validity testing: one facility in this testing used both paper and electronic surveys with different patients (implementing first electronic and then paper surveys in sequence), and very similar percentages of patients gave the top-box score by each modality (87.5% on paper and 86.1% electronically; difference not significant).
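A comparison of top-box rates between survey modalities like the one above can be checked with a standard two-proportion z-test. The sketch below is illustrative only: the text does not report the per-modality sample sizes, so the counts here are hypothetical, chosen solely to reproduce the reported 87.5% and 86.1% rates.

```python
# Hedged sketch of a pooled two-proportion z-test for comparing top-box rates
# between two survey modalities. Counts are hypothetical (the source does not
# report per-modality sample sizes); they were picked to match 87.5% and ~86.1%.
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail
    return z, p_value

# Hypothetical: 105/120 top-box on paper (87.5%), 124/144 electronic (~86.1%)
z, p = two_proportion_z_test(105, 120, 124, 144)
print(f"z = {z:.2f}, p = {p:.2f}")  # p well above 0.05: difference not significant
```

With rates this close, the test is consistent with the text's conclusion that the paper and electronic versions performed equivalently.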

Patient Stakeholder Engagement

Throughout the process of assessing face validity and the real-world pilot test of the measure in survey form, we met with a group of seven patient stakeholders from the Bay Area, our local community. This group met with the research team every other month for four years. They provided crucial feedback to the research team on the design of the interview guides and the survey itself. They vetted the look and feel of the survey and homed in on wording for the survey introduction and prompts. Their input helped produce materials beyond the survey, such as patient FAQs and posters, to maximize the response rate during pilot testing in clinics across the country.

  1. Dehlendorf C, Levy K, Kelley A, Grumbach K, Steinauer J. Women's preferences for contraceptive counseling and decision making. Contraception. 2013;88(2):250-256.
  2. Dehlendorf C, Henderson JT, Vittinghoff E, Steinauer J, Hessler D. Development of a patient-reported measure of the interpersonal quality of family planning care. Contraception. 2018;97(1):34-40.