Qualitative Agreement

In the field of qualitative research, replicating thematic analysis methods can be a challenge, as many articles lack a detailed account of the qualitative process. This makes it difficult for an inexperienced researcher to replicate analysis strategies and processes, and for experienced researchers to fully assess the rigour of a study. Although descriptions of codebook evolution exist in the literature [2, 3], there remains an important debate about what constitutes reliability and rigour in qualitative coding [1]. Indeed, the demonstration of rigour and reliability is often overlooked or only briefly discussed, which creates challenges for replication.

The “Percentage” column shows the percentage agreement for the code in question. The last row is used to calculate the average agreement percentage – 94.44% in the example. The criterion is the frequency of occurrence of the code in the document; specifically, the frequency with which the coders' assignments of the code agree. There are minor differences between the methods. The vector method weights disagreement relatively heavily, because the denominator contains every code used by any coder. This option may offer benefits in certain situations, for example if the researcher is trying to identify disagreements and potential problems in coding technique.
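To make the vector method concrete, here is a minimal Python sketch. It assumes that a line's agreement is the number of codes applied by every coder divided by the number of distinct codes applied by any coder, so that each code either coder uses enters the denominator; the function names and sample data are hypothetical, not drawn from the study's tables.

```python
from typing import List, Set

def line_agreement(code_sets: List[Set[str]]) -> float:
    """Percentage agreement for one line under the vector method.

    Assumption: the numerator counts codes applied by every coder;
    the denominator counts every code applied by any coder, so each
    extra code a coder uses enlarges the denominator.
    """
    common = set.intersection(*code_sets)  # codes all coders share
    used = set.union(*code_sets)           # every code any coder used
    return 100.0 * len(common) / len(used) if used else 100.0

def document_agreement(lines: List[List[Set[str]]]) -> float:
    """Average of the line-level agreement percentages."""
    return sum(line_agreement(s) for s in lines) / len(lines)

# Hypothetical data for two coders over four lines: the third line
# shows a subset with no discrepant codes, the fourth shows fully
# discrepant codes.
lines = [
    [{"a", "b"}, {"a", "b"}],  # full agreement   -> 100.0
    [{"a", "c"}, {"a", "c"}],  # full agreement   -> 100.0
    [{"a", "b"}, {"a"}],       # subset           -> 50.0
    [{"b"}, {"c"}],            # discrepant codes -> 0.0
]
print(f"{document_agreement(lines):.2f}%")  # 62.50%
```

Because every code used enters the denominator, a coder who applies extra codes drags the score down even when no codes directly conflict, which is the weighting behaviour described above.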

In addition, this method yields lower agreement when the two coders choose discrepant codes (as in line 4 of Table 6 above) than when one coder selects a subset of the other coder's codes without any discrepant codes (as in line 3 of Table 6). It may also be easier to conceptualize for informaticians trained in information retrieval. However, in cases of serious disagreement, or where an individual coder tends to use more codes per line than the others, the denominator becomes much larger and the agreement statistic correspondingly smaller. Moreover, adding more coders tends to inflate the denominator in low-agreement situations, driving the statistic down.

Document agreement for Transcript 1, fully coded lines only

Another factor that probably affected the agreement results is the large size of the code family (n = 70). To calculate “P Chance”, the probability of chance agreement, MAXQDA uses a proposal by Brennan and Prediger (1981), which engages in depth with Cohen's kappa and its problems with uneven marginal distributions. In this calculation, chance agreement is determined by the number of distinct categories used by the two coders – that is, the number of codes in the code-specific results table.
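As a rough illustration of the Brennan and Prediger correction, the Python sketch below treats chance agreement as one over the number of categories in play; the function name and the sample figures (94.44% observed agreement, 70 codes) are illustrative, not MAXQDA's actual implementation.

```python
def brennan_prediger_kappa(p_observed: float, n_categories: int) -> float:
    """Chance-corrected agreement after Brennan and Prediger (1981).

    Chance agreement depends only on the number of categories the
    coders used, not on their marginal distributions, which avoids
    Cohen's kappa's sensitivity to uneven margins.
    """
    p_chance = 1.0 / n_categories  # the "P Chance" term
    return (p_observed - p_chance) / (1.0 - p_chance)

# Illustrative figures: 94.44% observed agreement across 70 codes.
print(round(brennan_prediger_kappa(0.9444, 70), 4))  # 0.9436
```

With so many categories, the chance correction is small, which is one reason a large code family can yield a kappa close to the raw percentage agreement.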

The maximum agreement vector (MA) for line i is likewise a J_i-dimensional vector. However, each element of the vector is the number of possible agreements for that code, and is therefore d − 1, where d is the number of coders. This means that maximum agreement on line i occurs only if all coders apply each of the J_i codes. Table 1 represents the maximum agreement as (1, 1, 1), which describes the possibility that the two coders agreed on codes a, b, and c.
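To fix the idea, a small sketch (a hypothetical function written for this illustration) builds the maximum agreement vector directly: J_i entries, each equal to d − 1.

```python
def max_agreement_vector(n_codes: int, n_coders: int) -> list:
    """Maximum agreement vector MA for one line.

    The line carries J_i = n_codes codes; each entry is d - 1, where
    d = n_coders is the number of coders. The maximum is reached only
    when every coder applies every one of the J_i codes.
    """
    return [n_coders - 1] * n_codes

# Two coders and codes a, b, c give (1, 1, 1), as in Table 1.
print(max_agreement_vector(3, 2))  # [1, 1, 1]
```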