Results: Phase IV

Among the numerous results, we chose to present those related to open-ended question No. 24 of our questionnaire: “List the criteria in the guideline that seem most important for your work as an expert.” We found that the answers to this question were the most helpful in identifying the criteria on which the various respondents agreed. These criteria relate directly to the use of the guides in real situations, rather than to theoretical ideals or a single expert’s judgment, and thus help build consensus among users of qualitative research.

Common criteria for evaluating qualitative research: consensus or sham?

Our thematic content analysis of the answers to question 24 (r = 146) allowed us to draw up an initial list of 12 consensual criteria for the SAPs. They are presented here following the logic of a traditional research plan.

    • 12 “important” criteria in the evaluation of qualitative research (r = 146)

[Table: important criteria]

This list of 12 criteria points to a certain agreement among expert users; however, this observation must be treated with caution. Indeed, the list says nothing about the level of importance that each health science field assigns to these criteria, nor about their underlying definitions and concepts.

A toolbox of consensual criteria

Based on our results, we propose a TOOLBOX© of criteria deemed consensual across the various health science fields examined, while taking into account how the characteristics of each criterion vary by field. Discrepancies, sometimes major ones, mark the descriptions of the criteria across the groups. These characteristics were synthesized across the four SAPs to arrive at a more consensual, accurate, and concrete definition of each criterion. This work also helped to weight certain criteria according to the health field concerned. Consequently, the proposed TOOLBOX rests both on tests of the criteria in real situations (guide evaluation) and on inter-judge agreement among peer expert users of qualitative research.