Development of the Patient and Public Involvement Questionnaire (PPIQ) for Canadian Formulary Committees
Lee Verweel (a) | Rachel Goren (a) | Ahmed M. Bayoumi (b, c, d, e) | Kelly K. O’Brien (b, f, g) | Elaine MacPhail (h) | Tamara Rader (i) | James H. Tiessen (a) | Zahava R.S. Rosenberg-Yunger (a)
ABSTRACT
Background: Limited evidence exists on how to evaluate patient and public involvement in setting priorities for healthcare within formulary recommendation committees. This gap needs to be addressed to create evidence-based patient and public involvement policies. Objective: To develop a Patient and Public Involvement Questionnaire (PPIQ) and describe its development. Methods: The PPIQ was developed using mixed methods comprising the following phases: item generation and refinement (item bank creation, user feedback sessions); sensibility testing (using Feinstein’s criteria and interviews); and pilot testing. Results: The PPIQ draft was created using a bank of 846 items. This was refined by user feedback sessions (n=7). Another group of participants (n=21) completed a sensibility questionnaire, with a median score of 6 out of 7 on 80% of items, indicating sensibility. Follow-up interviews (n=14) indicated the PPIQ was clear and appropriate. Pilot testing (n=14) had a response rate of 25% and an average completion time of 19:00 minutes (SD ± 13:46). Conclusions: The PPIQ may be used by agencies to evaluate patient and public involvement in Canada and assist in developing evidence-based patient and public involvement policies.
AUTHOR AFFILIATIONS: a. Health Services Management, Ryerson University, Toronto; b. Institute of Health Policy, Management and Evaluation, University of Toronto; c. MAP Centre for Urban Health Solutions and Division of General Internal Medicine, St. Michael’s Hospital, Toronto; d. Department of Medicine, University of Toronto; e. Division of General Internal Medicine, St. Michael’s Hospital, Toronto; f. Department of Physical Therapy, University of Toronto; g. Rehabilitation Sciences Institute (RSI), University of Toronto; h. Former Senior Advisor, Canadian Agency for Drugs and Technologies in Health (CADTH), Ottawa; i. Canadian Agency for Drugs and Technologies in Health, Ottawa.
SUBMISSION: June 14, 2021 | PUBLICATION: July 26, 2021
DISCLOSURES: The authors declared no conflicts of interest. Ahmed Bayoumi was supported by the Fondation Baxter and Alma Ricard Chair in Inner City Health at St. Michael’s Hospital and the University of Toronto. Kelly K. O’Brien is supported by a Canada Research Chair (Tier 2) in Episodic Disability and Rehabilitation. This research project was supported by a Canadian Institutes of Health Research Operating Grant (136887).
CONTRIBUTIONS: All authors made valuable contributions to the study. ZRY was the PI and responsible for the research activity planning and execution. AB, EM, KKO contributed to the research design of the project. LV, RG and ZRY performed the data collection and analysis. AB, EM, JT, KKO, and TR reviewed and provided feedback on the analysis. LV, RG and ZRY were responsible for writing the original draft. AB, EM, JT, KKO, and TR critically reviewed the manuscript for content and approved the final manuscript.
ETHICS APPROVAL: This research was approved by Research Ethics Boards at Ryerson University (Toronto, Ontario, Canada [REB# 2015-079]) and University of Toronto (Toronto, Ontario, Canada [REB #30921]). All participants provided their consent to participate as well as their consent for publication. The authors elected to not share data in order to retain the confidentiality of research participants. The authors acknowledge and thank all research participants.
CITATION: Lee Verweel et al (2021). Development of the Patient and Public Involvement Questionnaire (PPIQ) for Canadian Formulary Committees. Canadian Health Policy, July 2021. ISSN 2562-9492 www.canadianhealthpolicy.com
Introduction
There has been a growing global commitment by policy-makers and providers to improve the quality of health-related decision making by incorporating greater patient and public involvement (Boote, Telford, & Cooper, 2002; Conklin, Morris, & Nolte, 2015; Gauvin, Abelson, Giacomini, Eyles, & Lavis, 2011; Pivik, Rode, & Ward, 2004; Rosenberg-Yunger, Daar, Thorsteinsdóttir, & Martin, 2011). However, despite this increase in involvement, there is limited evidence of how this involvement affects health care priority setting (Bruni, Laupacis, & Martin, 2008; Rosenberg-Yunger, Thorsteinsdóttir, Daar, & Martin, 2012). This limitation challenges the creation of effective policies for optimal patient and public involvement. Such discordance between evidence and policy is not uncommon for health policy decision makers, for example in the area of health technology assessment (Malekinejad, Horvath, Snyder, & Brindis, 2018; Tunis, 2013; van de Goor et al., 2017).
Evidence has shown that issues raised by patients and patient advocacy groups are reflected in drug-related health technology assessment recommendations and hospital-based assessments (Berglas, Jutai, Mackean, & Weeks, 2016; Dipankui et al., 2015). This evidence demonstrates that patient and patient advocacy group perspectives were often considered and that they enhanced the content and framing of the recommendations (Berglas et al., 2016; Dipankui et al., 2015). However, inclusion of patients’ insights is only one facet of measuring meaningful participation; it does not address, for example, the nature and quality of their participation in the process of committee deliberation (Berglas et al., 2016; Dipankui et al., 2015). Evaluating the effect of patient and public involvement poses several challenges. These include determining and evaluating “successful” involvement, accounting for the effect of power differentials between different types of experts, and determining whether resources to support involvement are sufficient (Rosenberg-Yunger & Bayoumi, 2017).
The aim of this paper is to address the above limitation by providing a method of obtaining evidence which can be used to create evidence-based patient and public involvement policies. Specifically, this paper describes the development of the Patient and Public Involvement Questionnaire (PPIQ), a tool to evaluate the degree of involvement of all committee members, including patients and members of the public, in decision making within the specific context of priority setting in formulary recommendation committees. Our phases included the following: 1) generating and reducing PPIQ items (i.e., questions); 2) assessing sensibility, and 3) pilot testing of the PPIQ.
Design
We used both quantitative and qualitative methods to develop, refine, assess, and pilot the PPIQ with stakeholders involved in priority setting in formulary recommendation committees (FIGURE 1). We included the perspectives of committee members, patient group representatives, public drug plan employees and academic experts (national/international) in patient and public involvement in healthcare.
Conceptual Approach
The development of the PPIQ built upon previous work by authors ZRY and AMB, who developed nine criteria to evaluate patient and public involvement in health care resource allocation decision making (Rosenberg-Yunger & Bayoumi, 2017). In this work, ZRY and AMB conducted a literature review of studies describing the development or evaluation of questionnaires to measure public or patient involvement in public decision making (including non-health related fields). Additionally, the authors conducted key informant interviews with representatives of patient groups, past or present government employees, representatives from Canadian Provincial Ministries of Health, advisory committee members and industry personnel. These nine criteria, outlined in TABLE 1, guided the refinement and organization of the PPIQ throughout development.
Phase 1: Item generation and reduction
1.1 Item Bank
The literature review and key informant interviews (methods described in detail: Rosenberg-Yunger & Bayoumi, 2017) were used in the current work to inform the first phase of content development for the PPIQ. The interviews and literature review provided a dataset that we used to develop items. We coded the dataset in an inductive line-by-line manner. We considered each code individually to develop the preliminary list of items (i.e., the item bank). Two members of the research team (ZRY and LV) reviewed the item bank to reduce the number of redundant items, identify missing items, and ensure items adequately addressed the purposes of evaluating patient and public involvement based on the nine criteria of public and patient involvement.
1.2 User Feedback (Focus Group)
We conducted one in-person focus group discussion with Canadian Agency for Drugs and Technologies in Health (CADTH) staff, representatives from patient groups, members of drug advisory committees, government employees, and industry personnel to capture stakeholder feedback on the PPIQ (Version 1). We identified informants from websites listing drug advisory committee members, and suggestions by staff in Ministries of Health. Written informed consent was received from all participants either in person or electronically.
Focus group participants were asked to review the draft PPIQ (Version 1), eliminate redundant items, add missing items, and ensure each of the nine criteria of public and patient involvement was adequately captured in the questionnaire. Additionally, we asked participants to assess ambiguity and to check for double-barreled items. Conflicting views on which items to retain or delete were addressed at this stage by retaining these items. The final decision to retain or delete items was made after the focus group data was analyzed. The focus group discussion was audio-recorded and transcribed. Two team members (ZRY and LV) analyzed the focus group transcript independently using a qualitative thematic approach resulting in suggestions to add, retain or remove items from the PPIQ. Any discrepancies in coding were discussed and resolved through consensus. We revised items based on recommendations from the focus group to result in a refined PPIQ (Version 2).
1.3 User Feedback (Online Survey)
A second group of participants, similar to the first group, provided feedback on the PPIQ (Version 2) by answering questions about questionnaire refinement (including item generation and reduction) in an online survey using Opinio (Copyright 1998-2019 ObjectPlanet). Participants were also asked to provide electronic feedback by reviewing items within the PPIQ as a Microsoft Word document, which was emailed to each participant. Participants were asked to make comments and track changes throughout the PPIQ Word document. Two team members (ZRY and LV) coded the open-ended survey responses and discussed and resolved any discrepancies in coding through consensus. The comments and track changes by each participant helped inform further refinement of the PPIQ not captured in the survey responses. We revised the PPIQ based on the online surveys and electronic feedback to create PPIQ (Version 3).
Phase 2: Sensibility Testing
We assessed the sensibility of the PPIQ (Version 3) with members of formulary funding recommendation committees, specifically, the pan-Canadian Oncology Drug Review (pCODR) Expert Review Committee (pERC), and the Canadian Drug Expert Committee (CDEC) using a web-based survey questionnaire followed by a telephone interview. We recruited a purposive sample from the Drug Policy Advisory Committee, which is composed of representatives from the federal, provincial, and territorial publicly funded drug plans and other related health organizations. Additionally, we recruited patient group representatives. Contact information was gathered through public internet searches. International academic experts were identified through internet searches for university websites. Committee members were identified through public information on the committee web page. Patient group representatives were identified from publicly available information on patient group websites. Written informed consent was received from all participants electronically.
2.1 Sensibility Questionnaire
We assessed the PPIQ’s sensibility using Feinstein’s sensibility framework, which includes purpose and framework, overt format, face and content validity and ease of use (TABLE 2) (Feinstein, 1987b). We used the framework to develop a sensibility questionnaire (BOX 1) that included 9 items for which participants rated their level of agreement about face and content validity and ease of use of the PPIQ, using a seven-point ordinal scale. Higher responses indicated a higher degree of sensibility (Feinstein, 1987a). We administered the PPIQ and sensibility questionnaire electronically to participants using Opinio.
Methods for sensibility testing were informed by previous work conducted by O’Brien et al and Rowe and Oxman (O’Brien et al., 2013; Rowe & Oxman, 1993). We considered the PPIQ to be sensible if median Likert scale scores for 80% of the items were 5 or higher (on a 7-point Likert scale) on the sensibility questionnaire (phase 2.1). Statistical analyses were performed using R (“R version 3.2.0”, 2019).
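As a concrete illustration, the threshold check described above amounts to computing per-item medians and the proportion of items meeting the cut-off. The sketch below uses Python with hypothetical response data (the authors’ analyses were performed in R):

```python
from statistics import median

# Hypothetical 7-point Likert responses (1 = lowest, 7 = highest agreement);
# one inner list per sensibility-questionnaire item, one value per respondent.
responses_by_item = [
    [6, 5, 7, 6, 5],  # item 1
    [5, 5, 6, 4, 5],  # item 2
    [7, 6, 6, 5, 6],  # item 3
    [4, 3, 5, 4, 4],  # item 4
    [6, 6, 5, 5, 7],  # item 5
]

# Criterion: the questionnaire is considered sensible if the median score
# is 5 or higher for at least 80% of items.
medians = [median(item) for item in responses_by_item]
proportion_meeting = sum(m >= 5 for m in medians) / len(medians)
is_sensible = proportion_meeting >= 0.80
```

With these hypothetical responses, four of the five item medians are 5 or higher (80%), so the criterion is met.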
2.2 Sensibility Interviews
All participants who completed the sensibility questionnaire were invited to participate in a structured telephone interview to determine how well the PPIQ evaluated patient and public involvement on committees when making recommendations regarding formulary listing of drugs for public funding. Interviews were conducted within three weeks of completing the online sensibility questionnaire. We developed the interview guide using Feinstein’s framework; specifically, the interview questions asked about participants’ opinions on face and content validity (item generation, item wording, and item reduction) and ease of use of the PPIQ (BOX 2). Additionally, we asked if participants were part of a committee, and if so, whether the PPIQ adequately described their committee involvement experience and we asked them for suggestions to refine the PPIQ to better capture their experiences. Two team members conducted the interviews (ZRY and RG). Interviews were digitally recorded and transcribed verbatim.
We analyzed the interviews using a qualitative thematic approach consisting of line-by-line coding to develop categories (Denzin & Lincoln, 2005) that pertain to the Feinstein framework of sensibility. The first eight transcripts were coded by two team members (ZRY and LV) and any discrepancies were discussed and resolved through consensus to develop the preliminary coding scheme. Subsequent transcripts were coded by one team member (LV). We used NVivo 12 software (QSR International, 2012) to assist with data organization and management. In collaboration with the research team and knowledge user (i.e., CADTH), results from sensibility testing were used to further refine the PPIQ (Version 4).
Phase 3: Pilot Testing
The aim of the pilot test was to assess elements of feasibility, including the number of items rated ‘not applicable’, the time to complete the survey, and the response rate of committee members. We pilot tested the PPIQ (Version 4) with members from Canadian formulary funding recommendation committees, specifically, Alberta’s Expert Committee on Drug Evaluation and Therapeutics, Atlantic Expert Advisory Committee, British Columbia Drug Benefit Council, CDEC, Drug Advisory Committee of Saskatchewan, L’Institut national d’excellence en santé et en services sociaux (INESSS), and pERC. We identified potential participants by purposively sampling from existing formulary funding recommendation committees across Canada and we recruited through email. We electronically administered the PPIQ by emailing a link to participants using Opinio. Consent was obtained online prior to testing the PPIQ. We analyzed item responses to the PPIQ descriptively across five response options (‘strongly agree’, ‘agree’, ‘neutral’, ‘disagree’, and ‘strongly disagree’). Items that participants rated as ‘not applicable’ were treated as their own category across criteria.
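The descriptive analysis described above amounts to tabulating response frequencies per item, with ‘not applicable’ kept as its own category. A minimal Python sketch with hypothetical responses from 14 pilot participants (the study itself administered the PPIQ via Opinio and reported percentages per option):

```python
from collections import Counter

OPTIONS = ["strongly agree", "agree", "neutral",
           "disagree", "strongly disagree", "not applicable"]

# Hypothetical responses to a single PPIQ item from 14 pilot participants.
item_responses = ["agree", "strongly agree", "agree", "not applicable",
                  "neutral", "agree", "disagree", "agree",
                  "strongly agree", "not applicable", "agree",
                  "neutral", "agree", "strongly agree"]

counts = Counter(item_responses)
# Per-option (count, percentage) pairs, keeping 'not applicable' separate
# rather than folding it into the five agreement options.
summary = {opt: (counts.get(opt, 0),
                 round(100 * counts.get(opt, 0) / len(item_responses)))
           for opt in OPTIONS}
```

Flagging items where the ‘not applicable’ share exceeds a threshold (e.g., one third of the sample, as reported in the Results) is then a simple comparison against `summary["not applicable"]`.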
Results
Phase 1: Item generation and reduction
Item Bank
The preliminary item bank had a total of 846 items. This was refined internally resulting in an initial draft of the PPIQ that had 85 items. After reviewing the 85 items with the larger team, we reduced the PPIQ to 71 items, which were then organized according to the nine pre-specified criteria (TABLE 1) and presented to stakeholders for feedback.
We organized the item bank by grouping similar items together (e.g., patient and public involvement, patient group submissions, role of the chair). We created brief instructions outlining the purpose of the PPIQ, and a Likert scale with seven response options ranging from very strongly agree to very strongly disagree. This resulted in an initial draft of the PPIQ (Version 1) to be presented to users for feedback.
User Feedback (Focus Group)
Three people participated in the first focus group session (a committee member, a patient group representative, and a knowledge user partner representative). Participants commented on the clarity of questions and instructions, the appropriateness of response options, and missing questions. Participants discussed clarity in relation to both the instructions of the PPIQ and specific items. They also identified a need to better distinguish between the use of clinical and patient evidence, and suggested the inclusion of a “not applicable” response option.
Feedback from this focus group was reviewed by the research team and knowledge user, which resulted in PPIQ (Version 2). Redundant items were removed from the PPIQ, resulting in a reduction from 84 items to 72, and four open-ended questions which were not present in the initial draft (TABLE 3) were added.
User Feedback (Online Survey)
Given the low attendance at the first stakeholder feedback session via the focus group, we collected additional user feedback through an online survey on the revised PPIQ (Version 2). Fourteen individuals were contacted for the second stakeholder feedback session. Four participants (a committee member, a patient group representative, an industry employee, and a knowledge user partner representative) responded to the online survey. Participants identified missing questions, including items around compensation and “adequacy of training on HTA [health technology assessment], including the clinical and economic reviews” (Participant 2). Results from the online survey and electronic feedback were reviewed and used by the research team and knowledge user (CADTH), resulting in a revised PPIQ (Version 3).
Phase 2: Sensibility Testing
Sensibility Questionnaire
A total of 115 emails requesting participation in the sensibility testing were sent out, of which 99 were delivered to stakeholders (35 committee members, 24 patient group representatives and 40 academic experts); 21 recipients participated. The average age of participants was 50.9 years (standard deviation ± 12.3), and the majority identified as male (n=13, 62%). Six of the participants were members of formulary funding recommendation committees, of whom five were professional members and one was a public member. Eight participants were patient group representatives, and four were academic experts in the field of patient and public engagement. Three participants remained anonymous and did not provide their role on committees.
Twenty-one participants reviewed the PPIQ and completed the sensibility questionnaire. The five analyzed categories (purpose and framework, overt format, face validity, content validity, and ease of use) were reflected by 9 items in the sensibility questionnaire. TABLE 3 provides a summary of responses by all participants. The median scores on the sensibility questionnaire were 5 or higher (out of 7) for all 9 items (100%), meeting our criterion for sensibility. Only one public member completed the sensibility testing. Overall, the item with the lowest score (5 out of 7) was the participants’ perception of the time to complete the PPIQ (TABLE 4).
Sensibility Interviews
Of the 21 participants who completed the sensibility questionnaire, 14 agreed to a follow-up sensibility interview (three academic experts, four committee members from formulary funding recommendation committees in Canada, and seven patient group representatives). Interview data were coded according to Feinstein’s five criteria. During the analysis and coding an additional theme emerged: “clear terminology”.
Overall purpose and framework of the PPIQ: Participants thought the PPIQ appropriately addressed its overall purpose, to evaluate patient and public involvement on committees. As one participant stated: “I felt that the survey was actually pretty well put together” (INT 7). While most participants thought the PPIQ addressed the overall purpose, many wanted the questionnaire to identify the purpose up front so that participants would understand the reason for completing the survey: “Knowing the purpose of why and then circling back and showing them the outcomes, I think those are the most important pieces.” (INT 8).
Overt format: Participants highlighted the need to include a progress bar, which shows how far along one is in the survey: “I find that useful. It’s just a bit of telling people that you’re making progress” (INT 4). Another participant suggested that the PPIQ have a start/stop option to aid ease of use: “Some people just for various reasons might not be able to sit down and do it within 20 minutes. So I would absolutely have the option that you can stop and start” (INT 11).
Face validity: Most participants thought that the PPIQ was clear. As one participant said, “I think the questions are … [written] in very easy to understand language. And I think the way they’re written, they would resonate with people” (INT 10). Another participant stated, “I don’t recall any particularly confusing [items]” (INT 5). Participants reported that the content in the PPIQ was comprehensive:
“I thought that it was really thorough. And I thought that it did a really good job asking about people’s thoughts on the public and patient involvement process from many different perspectives” (INT 3).
While participants agreed the PPIQ was comprehensive, they noted that this resulted in a lengthy questionnaire.
Content validity: Participants did not identify any missing items, though some redundant items were noted. There was discussion around “whether public and patient, consistently need to be pulled apart” (INT 3) across the items. Another participant thought “the questions about the chair of the committee ensuring that these perspectives are considered. It’s a little bit redundant … do we really care that it’s the chair that makes it happen or it’s just part of the process?” (INT 5). Finally, one participant “wondered if maybe it [some items] could be consolidated” (INT 8). Participants also discussed reducing the number of response options.
Ease of use: Participants indicated that overall the PPIQ was easy to use. As one participant noted, “I think it flowed well. Like it didn’t feel like it was leading me anywhere. Which is important, right? It was clear and logical” (INT 8). A concern that they raised was the literacy level of the questionnaire: “I mean I think that anybody with below a high school education would have trouble with this questionnaire” (INT 4). However, another participant said “the language is also reflective of or applicable to who might be filling it out” (INT 11). One participant highlighted that each question was needed: “I understand you’d like to … shorten it a little bit. But truthfully, there’s probably not a lot I would eliminate. You know, even just scanning over it again, I mean I think they all ask different things” (INT 2).
Clear terminology: Some participants thought that further clarification of terms and definitions used within the survey was required. Participants highlighted the importance of providing a clear definition for the term, “industry”. For example, “And the other one too just in terms of language, you referenced industry in this as well. I mean I interpret industry … as … pharma. That needs to be spelled out or just defined maybe at the outset” (INT 11). Another participant said that “I remember feeling that sometimes it was difficult to sort of differentiate … [between] patient versus public” (INT 5).
Based on the results of the sensibility testing combined with research team feedback, we revised the PPIQ (Version 4) prior to pilot testing. Informed by the Phase 2 sensibility testing, we improved the face validity and the purpose and context of the PPIQ by defining its purpose more clearly in the introduction of the questionnaire. We also articulated the distinction between public and patient stakeholders more clearly throughout the questionnaire. To improve ease of use, we reduced the response options on each question from seven to five and, because the PPIQ is designed to be administered electronically, we included a status bar to illustrate progress, improving the user experience and the perceived length of the PPIQ.
Phase 3: Pilot Testing
A total of 55 committee members across Canada were contacted by email to complete the PPIQ (Version 4), of which 15 (27%) opened the PPIQ and 14 (25%) completed it. The sample consisted of professional drug committee members: physicians (n=7), pharmacists (n=4) and academics/researchers (n=3). The average age of participants was 52.4 years (SD ± 12.3 years), with eight (57%) participants identifying as male. The average time for participants to complete the PPIQ was 19:00 minutes (SD ± 13:46). TABLE 5 summarizes the pilot data within each of the nine previously established evaluation criteria (Rosenberg-Yunger & Bayoumi, 2017). For seven of the nine evaluation criteria, the majority of participants (>50%) agreed (inclusive of ‘agree’ and ‘strongly agree’) that their respective committee satisfied the criteria based on the items in the PPIQ (TABLE 5). Approximately 10% of all participants’ responses were ‘not applicable’ for PPIQ items. Three items were identified as having over one third of the total sample answering ‘not applicable’: Question 18, “Patient member(s) adequately represent patient perspectives during our committee deliberation” (71%, n=10, ‘not applicable’); Question 19, “Public member(s) adequately represent public perspectives during committee deliberation” (43%, n=6, ‘not applicable’); and Question 53, “Our patient group submission guidelines are easy to find on our website” (43%, n=6, ‘not applicable’).
Discussion
In this study, we used a multi-method, multi-phase approach to develop and refine a questionnaire to evaluate patient and public involvement in committees making formulary priority setting recommendations (The Patient and Public Involvement Questionnaire, 2019). The purpose of the PPIQ is to evaluate the level of ‘successful’ patient and public involvement based on nine criteria, which allow a dynamic evaluation of public and patient involvement across different drug recommendation committee contexts (Rosenberg-Yunger & Bayoumi, 2017). Thus, the PPIQ can generate evidence currently lacking in the area of patient and public involvement, which can be used to create evidence-based policies.
By eliciting stakeholder feedback, we were able to develop a PPIQ that had face and content validity and that met the expectations of potential users, including patient group representatives, industry, and patient and public committee members. Based on participants’ responses to the sensibility questionnaire and on interview data, the content was appropriate, and the items were necessary and contributed to adequately evaluating public and patient involvement on formulary recommendation committees. Our pilot indicates the feasibility of delivering the PPIQ electronically (based on response rate and time to complete), and demonstrates the applicability of questions across multiple committees. While the sample of the pilot was small, the method for summarizing items within criteria may be useful for future implementations, such as testing reliability or construct validity of the PPIQ with formulary committees in Canada.
The sensibility interview results indicate that the practicality of implementing the PPIQ among different committees may vary based on the number and type of members, as well as available time and resources and competing committee priorities. Overall, participants involved in the sensibility testing reported that the PPIQ was easy to use; however, this criterion had the lowest median score (5/7), specifically related to the questionnaire’s length. The sensibility interview data suggested the length may be a limitation when implementing the PPIQ in the ‘real world’. The pilot data showed that, on average, participants took approximately 20 minutes to complete the PPIQ, which may or may not be appropriate for committee members with competing priorities. Nonetheless, participants acknowledged the complexity of patient and public involvement in formulary committees, and in turn understood and supported the need for the PPIQ to be robust and comprehensive.
The PPIQ is one of few questionnaires to measure public and patient involvement within formulary recommendations for public funding. Another questionnaire that may be used to evaluate public and patient involvement in our context is the Public and Patient Engagement Evaluation questionnaire (PPEET) (Abelson et al., 2016). The PPEET was developed in 2011 to measure engagement broadly, allowing the questionnaire to be used in a variety of contexts in Canada with the objective of improving the quality of public and patient engagement. In contrast, the PPIQ focuses specifically on formulary committees, resulting in an evaluation of member involvement that is specific to the context of formulary committees in Canada.
Some limitations are present in our approach. Throughout our user testing (phase 1.2, 1.3) we had low participant numbers; however, this phase was also informed by the key informant interviews and literature review conducted previously by AMB and ZRY. In our sensibility testing, we had only one public member and no patient committee members, but we sought to generate patient and public perspectives through representation from multiple patient groups. Only professional committee members (i.e., physicians, pharmacists, researchers, etc.) participated in the pilot test of the PPIQ, which highlights the need for further testing of the PPIQ with public and patient committee members. While this paper describes aspects of questionnaire development, further work is needed to identify the reliability and validity of the PPIQ in measuring patient and public engagement within drug committee settings. Despite these limitations, the PPIQ was developed across multiple phases, with opportunity for a breadth of perspectives from across stakeholder groups.
Conclusions
To ensure that policies around patient and public involvement are evidence-based it is critical that evaluations are conducted to generate appropriate evidence. The PPIQ is one tool that can do this in committees making formulary priority setting recommendations in Canada. This questionnaire may have applicability to other geographic contexts with similar priority setting healthcare structures (Rosenberg-Yunger et al., 2011).
References
Abelson, J., Li, K., Wilson, G., Shields, K., Schneider, C., & Boesveld, S. (2016). Supporting quality public and patient engagement in health system organizations: Development and usability testing of the Public and Patient Engagement Evaluation Tool. Health Expectations, 19(4), 817–827. https://doi.org/10.1111/hex.12378
Berglas, S., Jutai, L., Mackean, G., & Weeks, L. (2016). Patients’ perspectives can be integrated in health technology assessments: An exploratory analysis of CADTH common drug review. Research Involvement and Engagement, 2(1), 21. https://doi.org/10.1186/s40900-016-0036-9
Boote, J., Telford, R., & Cooper, C. (2002). Consumer involvement in health research: A review and research agenda. Health Policy, 61(2), 213–236. https://doi.org/10.1016/S0168-8510(01)00214-7
Bruni, R. A., Laupacis, A., & Martin, D. K. (2008). Public engagement in setting priorities in health care. CMAJ, 179(1), 15–18. https://doi.org/10.1503/cmaj.071656
Conklin, A., Morris, Z., & Nolte, E. (2015). What is the evidence base for public involvement in health-care policy? Results of a systematic scoping review. Health Expectations, 18(2), 153–165. https://doi.org/10.1111/hex.12038
Denzin, N. K., & Lincoln, Y. S. (2005). The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 1–32). Sage Publications.
Dipankui, M. T., Gagnon, M. P., Desmartis, M., Légaré, F., Piron, F., Gagnon, J., … Coulombe, M. (2015). Evaluation of patient involvement in a health technology assessment. International Journal of Technology Assessment in Health Care, 31(3), 166–170. https://doi.org/10.1017/S0266462315000240
Feinstein, A. R. (1987a). The intellectual crisis in clinical science: medaled models and muddled mettle. Perspectives in Biology and Medicine, 30(2), 215–230. https://doi.org/10.1353/pbm.1987.0047
Feinstein, A. R. (1987b). Clinimetrics. Yale University Press. https://doi.org/10.2307/j.ctt1xp3vbc
Gauvin, F. P., Abelson, J., Giacomini, M., Eyles, J., & Lavis, J. N. (2011). Moving cautiously: Public involvement and the health technology assessment community. International Journal of Technology Assessment in Health Care, 27(1), 43–49. https://doi.org/10.1017/S0266462310001200
Malekinejad, M., Horvath, H., Snyder, H., & Brindis, C. D. (2018). The discordance between evidence and health policy in the United States: The science of translational research and the critical role of diverse stakeholders. Health Research Policy and Systems, 16(1), 1–21. https://doi.org/10.1186/s12961-018-0336-7
O’Brien, K. K., Bayoumi, A. M., Bereket, T., Swinton, M., Alexander, R., King, K., & Solomon, P. (2013). Sensibility assessment of the HIV Disability Questionnaire. Disability and Rehabilitation, 35(7), 566–577. https://doi.org/10.3109/09638288.2012.702848
Pivik, J., Rode, E., & Ward, C. (2004). A consumer involvement model for health technology assessment in Canada. Health Policy, 69(2), 253–268. https://doi.org/10.1016/j.healthpol.2003.12.012
QSR International. (2012). NVivo 10 [Qualitative data analysis software].
Rosenberg-Yunger, Z. R. S., & Bayoumi, A. M. (2017). Evaluation criteria of patient and public involvement in resource allocation decisions: A literature review and qualitative study. International Journal of Technology Assessment in Health Care, 33(2), 270–278. https://doi.org/10.1017/S0266462317000307
Rosenberg-Yunger, Z. R. S., Daar, A. S., Thorsteinsdóttir, H., & Martin, D. K. (2011). Priority setting for orphan drugs: An international comparison. Health Policy, 100(1), 25–34. https://doi.org/10.1016/j.healthpol.2010.09.008
Rosenberg-Yunger, Z. R. S., Thorsteinsdóttir, H., Daar, A. S., & Martin, D. K. (2012). Stakeholder involvement in expensive drug recommendation decisions: An international perspective. Health Policy, 105(2–3), 226–235. https://doi.org/10.1016/j.healthpol.2011.12.002
Rowe, B. H., & Oxman, A. D. (1993). An assessment of the sensibility of a quality-of-life instrument. American Journal of Emergency Medicine, 11(4), 374–380. https://doi.org/10.1016/0735-6757(93)90171-7
R Core Team. (2019). R (Version 3.2.0) [Computer software]. https://www.r-project.org/
The Patient and Public Involvement Questionnaire. (2019). https://www.ryerson.ca/ppiq/copyright-funding/
Tunis, S. R. (2013). Lack of evidence for clinical and health policy decisions. BMJ, 347, f7155. https://doi.org/10.1136/bmj.f7155
van de Goor, I., Hämäläinen, R. M., Syed, A., Juel Lau, C., Sandu, P., Spitters, H., … Aro, A. R. (2017). Determinants of evidence use in public health policy making: Results from a study across six EU countries. Health Policy, 121(3), 273–281. https://doi.org/10.1016/j.healthpol.2017.01.003