Checklist for coherent and quality thematic analysis
Although I am not an editor or reviewer, I think this checklist poses challenging questions that I can use to ensure clarity of thought and argument in my research philosophy, my approach, and my chosen method of data analysis.
Guidelines for reviewers and editors evaluating thematic analysis manuscripts
Produced by Victoria Clarke and Virginia Braun (2019)
Available: https://cdn.auckland.ac.nz/assets/psych/about/ourresearch/documents/TA%20website%20update%2010.8.17%20review%20checklist.pdf
We regularly encounter published TA studies where there are mismatches between various elements of the report and practice. We have developed the following checklist for editors and reviewers, to facilitate the publication of coherent and quality thematic analysis – of all forms. The checklist is split between conceptual and methodological discussion/practice and analytic output.
Evaluating the methods and methodology
1. Is the use of TA explained (even if only briefly)?
2. Do the authors clearly specify and justify which type of TA they are using?
3. Is the use and justification of the specific type of TA consistent with the research questions or aims?
4. Is there a good ‘fit’ between the theoretical and conceptual underpinnings of the research and the specific type of TA (conceptual coherence)?
5. Is there a good ‘fit’ between the methods of data collection and the specific type of TA?
6. Is the specified type of TA consistently enacted throughout the paper?
7. Is there evidence of problematic assumptions about TA? These commonly include:
• Treating TA as one, homogenous, entity, with one set of – widely agreed on – procedures.
• Assuming grounded theory concepts and procedures (e.g. saturation, constant comparative analysis, line-by-line coding) apply to TA without any explanation or justification.
• Assuming TA is essentialist or realist, or atheoretical.
• Assuming TA is only a data reduction or descriptive approach and thus has to be supplemented with other methods and procedures to achieve other ends.
8. Are any supplementary procedures or methods justified and necessary, or could the same results have been achieved simply by using TA more effectively?
9. Are the theoretical underpinnings of the use of TA clearly specified (e.g. ontological and epistemological assumptions, guiding theoretical framework(s)), even when using TA inductively (inductive TA does not equate to analysis in a theoretical vacuum)?
10. Do the researchers strive to ‘own their perspectives’ (even if only very briefly); their personal and social standpoint and positioning? (This is especially important when the researchers are engaged in social justice-oriented research and when representing the ‘voices’ of marginal and vulnerable groups, and groups to which the researcher does not belong.)
11. Are the analytic procedures used clearly outlined?
12. Is there evidence of conceptual and procedural confusion? For example, reflexive TA (Braun & Clarke, 2006) is the claimed approach but different procedures are outlined, such as the use of a codebook or coding frame, multiple independent coders and consensus coding, inter-rater reliability measures, and/or themes conceptualised as analytic inputs rather than outputs, so that the analysis progresses from theme identification to coding (rather than from coding to theme development).
13. Have the authors fully understood their claimed approach to TA?
Evaluating the analysis
14. Is it clear what and where the themes are in the report? Would the manuscript benefit from some kind of overview of the analysis: a listing of themes, narrative overview, table of themes, or thematic map?
15. Are the reported themes domain summaries rather than fully realised themes?
• Have the data collection questions been used as themes?
• Are domain summaries appropriate to the purpose of the research? (If so, and if the authors are using reflexive TA, is this modification in the conceptualisation of themes explained and justified?)
• Would the manuscript benefit from further analysis being undertaken and the reporting of fully realised themes?
• Or, if the authors are claiming to use reflexive TA, would the manuscript benefit from claiming to use a different type of TA (e.g. coding reliability or codebook)?
16. Is non-thematic contextualising information presented as a theme? (e.g. the first theme is a domain summary providing contextualising information, but the rest of the themes reported are fully realised themes) Would the manuscript benefit from this being presented as non-thematic contextualising information?
17. In applied research, do the reported themes give rise to actionable outcomes?
18. Are there conceptual clashes and confusions in the paper? (e.g. claiming a social constructionist approach while also expressing concern for positivist notions of coding reliability, or claiming a constructionist approach while treating participants’ language as a transparent reflection of their experiences and behaviours)
19. Is there evidence of weak or unconvincing analysis?
• Too many or too few themes?
• Too many theme levels?
• Confusion between codes and themes?
• Mismatch between data extracts and analytic claims?
• Too few or too many data extracts?
• Overlap between themes?
20. Do the authors make problematic statements about the lack of generalisability of their results, and implicitly conceptualise generalisability as statistical generalisability?