Using Software in Qualitative Research (Lewins)

However, they could equally indicate that the phenomenon that is the focus of the review is not prevalent in a given context.

Qualitative review findings are developed by identifying patterns in the data across the primary studies included in an evidence synthesis. The coherence of the review finding addresses the question of whether the finding is well grounded in data from these primary studies and provides a convincing explanation for the patterns found in these data. Coherence in the data contributing to a review finding may be contextual, where patterns are found across studies that are similar to each other with respect to population, interventions, or settings; or conceptual, where patterns in the data from the underlying studies can be explained in relation to new or existing theory.

Patterns need to be explained and supported through data presented in the primary studies or through hypotheses developed by the primary study authors or the review authors. Review findings are sometimes challenged by outlying, contrasting, or even disconfirming data from the primary studies that do not support or that directly challenge the main finding. Review authors should look actively for such data that complicate or challenge their main findings [ 28 ] and attempt to explain these variations or exceptions.

When there is no convincing explanation for these variations or exceptions, we are less confident that the review finding reflects the phenomenon of interest. Guidance on what constitutes a convincing explanation needs further development.

Confidence in a review finding may be lower when there is an unexplained lack of coherence. When theories or explanations are used to explain similarities or variations, review authors should specify whether the theory or explanation is internally generated (i.e., developed from the data in the included studies) or drawn from existing external theory.

Reasons why it may be difficult to explain the variation in data across individual studies contributing to a finding include that the available data are too thin [ 29 ], outlying or disconfirming cases are not well explored, the review authors do not know the field sufficiently well to generate an explanation, the theory used to inform the review is incomplete or flawed, or the study sampling for the review was limited.

Examining the coherence of the review findings gives review authors an opportunity to reflect on the extent to which the pattern captured in the review finding really is contextually or conceptually coherent. It also gives review authors an opportunity to offer a convincing explanation for the patterns they have found and to note the presence of disconfirming cases. Concerns regarding the coherence of a review finding can have several implications: firstly, review authors should consider using the patterns found across primary studies to generate new hypotheses or theory regarding the issue addressed by the finding.

Secondly, a lack of coherence in relation to a particular review finding may suggest that more primary research needs to be done in that area and that the review should perhaps be updated once those data are available. Finally, when a review has used a sampling procedure to select studies for inclusion in the review [ 30 ], future updates of the review could reconfigure the sampling to explore the variation found. Adequacy of data is an overall determination of the degree of richness and quantity of data supporting a review finding.

Rich data provide enough detail to develop an understanding of the phenomenon of interest; thin data, in contrast, do not. In addition to data richness, the quantity of data is also important.

When a review finding is supported by data from only one or few primary studies, participants, or observations, we are less confident that the review finding reflects the phenomenon of interest. This is because when only a few studies or only small studies exist or when few are sampled, we do not know whether studies undertaken in other settings or groups would have reported similar findings.

Confidence in a review finding may be lower when there are concerns regarding whether there are adequate amounts of data contributing to a review finding. This could include concerns about the richness of the data or the number of studies, participants, or observations from which the data are drawn. Review authors need to judge adequacy in relation to the claims made in a specific review finding.

There are therefore no fixed rules on what constitutes sufficiently rich data or an adequate quantity of data. When considering whether there are adequate data, review authors may find the principle of saturation of data useful or could consider the extent to which additional data are likely to change the finding [ 31 — 34 ].

Review authors should also look for disconfirming cases. More work is needed on how to apply these strategies in the context of a qualitative evidence synthesis. When adequacy of data is not achieved, this may suggest that more primary research needs to be done in relation to the issue discussed in the review finding and that the review should be updated once that research is available. Inadequate data may indicate that the review question was too narrow and that future syntheses should consider a broader scope or include primary studies that examine phenomena that are similar, but not identical, to that under consideration in the synthesis.

This, in turn, might have implications for assessment of relevance. As noted earlier, our confidence in the evidence is an assessment of the extent to which the review finding is a reasonable representation of the phenomenon of interest (S1 Table). This assessment is based on the judgements made for each of the four CERQual components. To indicate our assessment of confidence, we propose four levels: high, moderate, low, or very low. This is a similar approach to that used in the GRADE tool for assessing confidence in the evidence on the effectiveness of interventions [ 35 ].

Confidence should be assessed for each review finding individually and not for the review as a whole. Future papers will describe in more detail for each CERQual component the circumstances under which confidence in a review finding should be rated down.
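To make this structure concrete, the sketch below models a single review finding with a judgement for each of the four CERQual components and an overall confidence level. It is purely illustrative: CERQual defines no software interface, and every name here (ReviewFinding, ComponentAssessment, the three-step concern scale, and so on) is a hypothetical choice rather than part of the approach itself.

```python
from dataclasses import dataclass
from enum import Enum


class Confidence(Enum):
    """The four overall confidence levels proposed by CERQual."""
    HIGH = "high"
    MODERATE = "moderate"
    LOW = "low"
    VERY_LOW = "very low"


class Concern(Enum):
    """Illustrative scale for recording concerns about a single component."""
    MINOR = "no or very minor concerns"
    MODERATE = "moderate concerns"
    SERIOUS = "serious concerns"


@dataclass
class ComponentAssessment:
    """Judgement for one component plus the explanation that makes it transparent."""
    concern: Concern
    explanation: str


@dataclass
class ReviewFinding:
    """One synthesis finding assessed against the four CERQual components."""
    summary: str
    methodological_limitations: ComponentAssessment
    coherence: ComponentAssessment
    relevance: ComponentAssessment
    adequacy: ComponentAssessment
    confidence: Confidence
    confidence_explanation: str

    def components(self) -> dict:
        """Return the four component judgements keyed by component name."""
        return {
            "methodological limitations": self.methodological_limitations,
            "coherence": self.coherence,
            "relevance": self.relevance,
            "adequacy": self.adequacy,
        }
```

Because the assessment is made per finding, a synthesis would hold a collection of such records, each carrying its own explanations, rather than a single review-level grade.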

The assessment of confidence for a review finding is a judgement, and it is therefore particularly important to include an explanation of how this judgement was made. This is discussed further below. Those assessing confidence in review findings should specify as far as possible how future studies could address the concerns identified.

A summary of qualitative findings table can be used to summarise the key findings from a qualitative evidence synthesis and the confidence in the evidence for each of these findings, as assessed using the CERQual approach. The table should also provide an explanation of the CERQual assessments. An example of a summary of qualitative findings table is provided in Table 4. There are several advantages to providing a succinct summary of each review finding and an explanation of the CERQual assessment for that finding.

Firstly, this may encourage review authors to consider carefully what constitutes a finding in the context of their review and to express these findings clearly (Box 1). Secondly, these tables may facilitate the uptake of qualitative evidence synthesis findings into decision-making processes, for example, through evidence-to-decision frameworks [ 13 ].

Thirdly, these tables help to ensure that the judgements underlying CERQual assessments are as transparent as possible. The first version of the CERQual approach has been applied in five reviews [ 9 , 16 — 19 ], three of which were used by WHO as the basis for the development of a global guideline [ 14 ].
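Building on the illustrative ReviewFinding sketch above, and before turning to experience with the current version, the function below shows one hypothetical way such records could be flattened into a plain-text summary of qualitative findings listing. The layout is an assumption made for illustration only and does not reproduce the format of Table 4.

```python
def summary_of_qualitative_findings(findings: list) -> str:
    """Render illustrative ReviewFinding records as a plain-text summary.

    Each entry shows the finding, the overall CERQual confidence level, the
    explanation of that assessment, and the judgement for each component.
    """
    lines = []
    for number, finding in enumerate(findings, start=1):
        lines.append(f"Finding {number}: {finding.summary}")
        lines.append(f"  Confidence in the evidence: {finding.confidence.value}")
        lines.append(f"  Explanation of assessment: {finding.confidence_explanation}")
        for name, judgement in finding.components().items():
            lines.append(f"    {name}: {judgement.concern.value} ({judgement.explanation})")
        lines.append("")
    return "\n".join(lines)
```

Keeping the explanation next to each judgement is what makes such a summary transparent to end users, as discussed below.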

The current version of CERQual has been used in one published review [ 36 ] and is currently being used in a further ten reviews, at least half of which are being produced to support WHO guidance. This experience has highlighted a number of factors that review authors should consider when applying CERQual to review findings, and we discuss these factors below. To date, the application of CERQual to each review finding has been through discussions among at least two review authors.

This seems preferable to use by a single reviewer as it offers an opportunity to discuss judgements and may assist review authors in clearly describing the rationale behind each assessment. In addition, multiple reviewers from different disciplinary backgrounds may offer alternative interpretations of confidence—an approach that has also been suggested to enhance data synthesis itself [ 28 ].

The approach is intended to be applied by review authors with experience in both primary qualitative research and qualitative evidence synthesis. Assessments of each CERQual component are based on judgements by the review authors, and these judgements need to be described clearly and in detail. Providing a justification for each assessment, preferably in a summary of qualitative findings table, is important for the end user, as this shows how the final assessment was reached and increases the transparency of the process.

Further, when end users are seeking evidence for a question that differs slightly from the original review question, they are able to see clearly how the assessment of confidence has been made and to adjust their own confidence in the review finding accordingly. When making judgements using the CERQual approach, review authors need to be aware of the interactions between the four components. At this stage, CERQual gives equal weight to each component, as we view the components as equally important.

Further research is needed on whether equal weighting is appropriate and on areas in which there may be overlap between components.

Our experience applying the CERQual approach so far has indicated that it is easiest to begin with an assessment of methodological limitations. Thereafter, it does not seem to be important in which order the other three components are assessed, as the process is iterative. It is probably most appropriate for review authors to apply the CERQual approach to their own review, given that prior familiarity with the evidence is needed in order to make reasonable judgements concerning methodological limitations, coherence, relevance, and adequacy of data.

However, in principle the approach could be applied to review findings from well-conducted reviews by people other than the review authors. Guidance for this will be developed in the future. Qualitative research encompasses a wide range of study designs, and there are multiple tools and approaches for assessing the strengths and weaknesses of qualitative studies [ 26 , 27 , 37 — 40 ].

It is currently not possible to recommend a widely agreed upon, simple, and easy-to-use set of criteria for assessing methodological limitations across the many types of qualitative studies. This may not even be desirable, given continued debates regarding different approaches and our desire for the CERQual approach to be used by the full range of qualitative researchers involved in evidence synthesis.

In the application of CERQual to date, relevance has been assessed by review authors and not by users, such as decision makers and those who support them or consumer groups. There may be instances in which such users would like to use review findings from a relevant synthesis, but their context differs to some extent from that specified in the review question. Transparent reporting of the assessment of relevance by the review authors provides these users with a starting point from which to understand the reasons behind the assessment.

However, it may be difficult for users who are not familiar with the primary studies to assess the relevance to their own context.

It is not the intention of CERQual, however, to reduce variation within review findings. Identifying both similarities and differences in the primary data, including accounting for disconfirming cases, is an important part of developing review findings.

Review authors should not attempt to create findings that appear more coherent through ignoring or minimising important disconfirming cases. Moreover, users of qualitative evidence syntheses are often specifically interested in where a review finding is not relevant or applicable, so as to avoid implementing interventions or guidelines that may be inappropriate or not feasible in their specific context.

While numbers can be important and useful in qualitative research, qualitative analysis generally focuses on text-based data [ 42 ]. In addition, fewer, more conceptually rich studies contributing to a finding may be more powerful than a larger number of thin, descriptive studies.

CERQual provides users of evidence with a systematic and transparent assessment of how much confidence can be placed in individual review findings from syntheses of qualitative evidence. In addition, the use of CERQual could help review authors to consider, analyse, and report review findings in a more useful and usable way.

The CERQual approach offers review authors a further opportunity for a more structured approach to analysing data. It guides them through a process of examining and appraising the methodological limitations, relevance, coherence, and adequacy of the data contributing to a review finding.

The development of CERQual has identified a number of important research questions, and these are summarised in Box 4. CERQual is a work in progress, and further steps are planned to develop the approach.

We take the standpoint, however, that ways of appraising both primary and secondary qualitative research are needed.

Such approaches need to be appropriate to, and take into account the diversity of, qualitative methods [ 27 , 37 , 39 ]. As noted above, users of both primary qualitative research findings and qualitative evidence synthesis findings routinely make these judgements when reading and using these types of research.

However, the judgements made by these users are implicit, which makes it difficult for others to understand and critique them—an important limitation when findings from such research are then used to inform decisions about health and social policies. CERQual attempts to make assessments of confidence in the evidence more systematic and transparent while accepting that these assessments are judgements that are likely to vary across assessors.

An intended consequence of the CERQual approach is to improve methodological quality and reporting standards in primary qualitative research. For an adequate CERQual assessment to be made, the authors of primary studies need to provide sufficient information about the methods they have used. Wide use of CERQual may thus encourage more thorough reporting of qualitative research methods.

The CERQual approach is being developed by an informal collaboration of people with an interest in how to assess confidence in evidence from qualitative evidence syntheses; this group is a subgroup of the GRADE Working Group. We would encourage those with an interest in this area to join the group and contribute to the development of the CERQual approach.

Summary Points

Qualitative evidence syntheses are increasingly used, but methods to assess how much confidence to place in synthesis findings are poorly developed.

The Confidence in the Evidence from Reviews of Qualitative research (CERQual) approach helps assess how much confidence to place in findings from a qualitative evidence synthesis.

CERQual provides a transparent method for assessing confidence in qualitative evidence synthesis findings.

Introduction

The systematic use of research evidence to inform health and social policies is becoming more common among governments, international organisations, and other health institutions, and systematic reviews of intervention effectiveness are now used frequently to inform policy decisions.

Box 1. What Is a Review Finding?

Using Software in Qualitative Research will help you to choose the most appropriate package for your needs and get the most out of the software once you are using it.

This book considers a wide range of tasks and processes in data management and analysis, and shows how software can help you at each stage.

In the new edition, the authors present three case studies with different forms of data (text, video, and mixed data) and show how each step in the analysis process for each project could be supported by software. The new edition is accompanied by an extensive companion website with step-by-step instructions produced by the software developers themselves. "Reading this book is like having Ann and Christina at your shoulder as you analyse your data!"
