The TOOLBOX© presented here can be compared with existing work conducted using a rather similar methodology; we will discuss two such guidelines.
First, we consider the guideline of Spencer, Ritchie, Lewis and Dillon (2003), which targets all social and human science research. It stems from a debate and consensus among a group of experts on the relevance of existing criteria and/or guidelines. This methodology corresponds to the first steps of ours. A third step of their work consisted of testing their guideline on eight documents with two independent evaluators. However, they did not test the existing guidelines in a real situation with users, so as to systematically evaluate the validity of their use or the ease with which the criteria deemed most important are understood and applied (Spencer et al., 2003, p. 108).
Next, the work of Pope and Mays (2006) also resembles ours in its target field, health science. Pope and Mays adopt the criteria of Spencer et al. while adding further criteria that allow the specificities of qualitative research to be evaluated more fully. These authors hold that qualitative research must be evaluated “with reference to the same broad criteria of quality as quantitative, although the meaning attributed to these criteria may not be the same and may be assessed differently” (Pope & Mays, 2006, p. 83). To our knowledge, however, they did not test the validity of use of the proposed criteria or the level of agreement among users regarding their definitions.
Nevertheless, as shown in Table 2 below, the guidelines proposed by Spencer et al., by Pope and Mays, and by our TOOLBOX© present interesting similarities. They could constitute a first basis for agreement between expert users and peer users regarding the essential criteria for evaluating qualitative research in the health science field. These results are fairly robust, as they stem from the agreement of the 34 experts in the Spencer et al. research, from the work conducted by Pope and Mays over more than twenty years, and from our own research, which includes 56 participants from different countries.
Table 2: Comparison between the definition and characteristics of the criteria identified in the three guides
These three sets of criteria remain very close to one another, with a few variations. This is a first major result and deserves to be highlighted. Despite variations in formulation, the criteria remain close in the research organization underlying them, in their number, and in their overall definition. Although this proximity must be linked, on the one hand, to the similarity of the methodologies (Spencer et al., 2003; Santiago-Delefosse, Gavin, Bruchez, Roux & Stephen, 2014) and, on the other, to the similarity of the disciplinary fields (Pope & Mays, 2006; Santiago-Delefosse et al., 2014), it is representative of an agreement among some hundred qualitative research experts. This result shows the progressive emergence of consensus across criteria guidelines, and thus a certain stabilization of the debate on quality criteria for qualitative research. This “stabilization” can then be communicated consensually to journal editors and/or to organizations assessing research work (scientific and/or ethical).
A second result shows the need to focus further on the definitions of the criteria. Indeed, there is less consensus on their specific definitions, and on the weight each should be given, than on their number and overall content.