
Rapid reviews methods series: guidance on rapid scoping, mapping and evidence and gap map (‘Big Picture Reviews’)
  1. Fiona Campbell1,
  2. Anthea Sutton2,
  3. Danielle Pollock3,
  4. Chantelle Garritty4,
  5. Andrea C Tricco5,
  6. Lena Schmidt1,
  7. Hanan Khalil6
  1. Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK
  2. ScHARR, The University of Sheffield, Sheffield, UK
  3. University of Adelaide, Adelaide, South Australia, Australia
  4. Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
  5. Dalla Lana School of Public Health and Li Ka Shing Knowledge Institute, University of Toronto and St. Michael's Hospital, Unity Health, Toronto, Ontario, Canada
  6. La Trobe University School of Psychology and Public Health, Melbourne, Victoria, Australia
  Correspondence to Dr Fiona Campbell; Fiona.campbell1@ncl.ac.uk


WHAT IS ALREADY KNOWN ON THIS TOPIC

  • An increasing number of rapid scoping reviews, mapping reviews and evidence and gap maps (‘Big Picture Reviews’ (BPRs)) are being undertaken to address broad research questions and provide an overview of a topic. While there is guidance on rapid review methods, it has not been tailored to the methods used in scoping, mapping and evidence and gap map reviews (BPRs).

WHAT THIS STUDY ADDS

  • This paper considers how rapid methods might be applied to BPRs and the implications of these for the rigour and value of the research findings.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

  • This is the first paper to provide guidance for the methods of applying rapid approaches to BPRs. It will inform both researchers and users of the potential and limitations of rapid methods in these types of reviews. It highlights gaps in knowledge, including the implications of rapid methods for the trustworthiness of BPR findings and the need for evaluation of technologies that herald opportunities for greater efficiencies in the production of trustworthy evidence syntheses.

Introduction

This paper is part of a series from the Cochrane Rapid Review Methods Group providing methodological guidance for rapid reviews (RRs). The purpose of this paper is to consider how RR approaches might be applied when the question being addressed requires a broader description of existing knowledge or a big-picture view of the evidence.

‘Big Picture Reviews’ (BPRs) refer to a family of evidence synthesis approaches that seek to describe and map the existing evidence. These approaches can be contrasted with systematic reviews (SRs) of effectiveness, which aim to synthesise homogeneous studies to evaluate the effects of a specific intervention.1–5 BPRs differ in purpose, seeking to describe, categorise, catalogue and code the evidence rather than to synthesise (statistically or qualitatively) the findings of included studies.6 They aim to gain and communicate a big-picture view of the evidence available, providing an overview of the topic. Despite their differences in purpose, they share the requirements of rigour, objectivity, comprehensiveness and transparency in conduct and reporting that are common to all evidence synthesis methods. The growth in the use of these approaches demonstrates their value in guiding future research priorities and policy, where questions often extend beyond single interventions and outcomes. They are particularly valuable as a foundational step in the architecture of evidence, describing the existing evidence and identifying where priorities for both secondary (SR) and primary research lie.7

The term ‘Big Picture Reviews’ (BPRs) is an umbrella term covering scoping reviews, mapping reviews and evidence and gap maps (EGMs) (Figure 1 shows the definition and overall features of each). The terms ‘scoping review’ and ‘mapping review’ are not used consistently within the published literature. At times, the terms appear to be used interchangeably, while in other instances, they refer to quite different approaches. These differences can be explained in part by the different academic traditions from which they have arisen.6 We suggest that there is value in regarding these as distinct approaches within the same ‘family’. While scoping reviews, mapping reviews and EGMs share the same aims, they also differ, notably in the depth of data that is extracted and coded and in how findings are displayed and reported. EGMs, for example, include an interactive, visual presentation of the evidence, grouping studies into predefined categories.1–4 The creation of a ‘map’ onto which the evidence is plotted makes them a particularly valuable tool for identifying knowledge gaps.8 9 There is no current guidance on the application of rapid approaches to BPRs, and this paper addresses this gap. It draws on both published evaluations of rapid approaches and recommendations from methodological experts, summarised in Table 1.

Figure 1

Summary of similarities and differences between scoping, mapping and evidence and gap map reviews (‘Big Picture Reviews’).6 *High-level data is data that is readily retrieved and requires no interpretation (eg, country of study, types of outcomes measured, population), in contrast to more complex data requiring more in-depth reading of the included study, such as the quality of the study, the extent to which equity is considered and the methods of analysis used.

Table 1

Summary of rapid approaches in Big Picture Reviews

A rapid review (RR) is a ‘type of evidence synthesis that brings together and summarises information from different research studies to produce evidence for people such as the public, healthcare providers, researchers, policymakers and funders in a systematic, resource-efficient manner’.10 This is done by speeding up the ways we plan, do and/or share the results of conventionally structured (systematic) reviews, by simplifying or omitting a variety of methods that should be clearly defined by the authors.10 11

A limitation of the term ‘rapid review’ is that it fails to indicate which ‘conventional’ review type is referred to. The term does not indicate whether the rapid approaches have been applied to an SR addressing intervention effectiveness or to an alternative evidence synthesis approach. Methodological ‘short-cuts’ used in rapid contexts should be undertaken with reference to the standard guidance. For example, PRISMA-ScR12 guides the reporting standards for scoping reviews and should also serve as a guide to the essential reporting items in a rapid scoping review. Rapid methods are not a ‘one-size-fits-all’ set of approaches, but rather a suite of options that can be tailored to specific review requirements, including the time and resources available. The implications of these approaches vary across types of evidence synthesis; it is therefore preferable that authors refer to a ‘rapid scoping review’ or a ‘rapid qualitative synthesis’ to give a clear indication of the reference methods against which the rapid approaches should be compared.

When to conduct a rapid Big Picture Review?

As with SRs of interventions, the length of time needed to undertake a conventional BPR will be influenced by factors that include the resources available, the size of the relevant literature, the nature of the data being extracted and the expertise within the team. Given these caveats, evidence synthesis is likely to take between 6 months and 2 years.13 14 BPRs are as time costly as SRs of interventions and may indeed take longer.15 To be useful, evidence must be trustworthy,16 which is ensured by adherence to conventional or ‘gold standard’ approaches found in methodological guidance.1 3 4 However, on occasion, producing evidence within shorter timeframes is necessary,16 and rapid approaches may enable a review to be completed within 1–4 months.13 14 17 An increase in rapid scoping reviews was seen during the COVID-19 pandemic, precisely in response to pressing clinical need, such as exploring methods of population screening.8 These experiences provided valuable learning about the methods used in RR contexts and highlighted the need for tailoring of methods and clear, transparent reporting.18

We recommend that rapid approaches be considered when evidence is needed to support a decision that must be made within a timeframe, or with resources, that preclude the use of recommended methods.19 RRs may produce different results from an SR and may also be limited in the wider applicability of their findings.20 Rapid approaches may, therefore, contribute to research waste, and a clear rationale should be given for their use.21

Limitations of conducting a rapid BPR

The risks of using rapid approaches in BPRs include reduced reliability and generalisability of findings, increased potential for error and the introduction of reviewer bias. The selection of rapid approaches is usually a trade-off between time saved and risks introduced to the usefulness and trustworthiness of the review. Few rapid approaches successfully reduce the time needed to conduct the review without introducing some limitation on the review findings.22 A characteristic of RRs is that they are tailored to adequately answer the specific requirements of the decision-maker commissioning the review (ie, the commissioners).23 The context of the urgent or emergent decision needs should inform the methods of the review, and delivery times should be agreed in advance. The trade-offs between time saved and the impact on the review findings need to be discussed and agreed with commissioners. The Selecting Approaches for Rapid Reviews (STARR) tool can support discussions about rapid approaches that could be adopted.24

The key differences between rapid BPRs and conventional BPRs are summarised in Table 2.

Table 2

‘Big Picture Review’ (BPR) vs rapid ‘BPR’

Preparing the review team and working with knowledge users

Effective teamwork is critical when a review is undertaken within a short timeframe. However, effective teamwork practices are often overlooked in preparing an RR. Familiarisation with good team management will help ensure timely outputs and healthier work environments.25 Administration, project management and planning have been shown to be the most time costly components of the review process.15 26 Facilitators of effective team working in the context of a review include daily project meetings to discuss upcoming questions, team members’ physical proximity to allow for ongoing communication, short time lapses between tasks and familiarity with the software tools that will be used.27

In preparing the review team, we would recommend including a reviewer with expertise in BPR methods. Familiarity with the time needed for the review processes and the impact of methodological short cuts can greatly support the delivery of an RR and aid the communication of methods both to commissioners and in the final report.

As well as considering the decision needs of those commissioning the RR, others who are likely to use the knowledge generated through research, or to be impacted by it, should, where possible, be involved (ie, knowledge users (KUs), such as patient partners, healthcare providers, other researchers, funders and the public). KU engagement in research helps to ensure that the process itself and the subsequent outputs are ethical, equitable, impactful, useful and relevant, and should also be considered in RR contexts. Guidance exists to inform the planning of user engagement in scoping reviews,28 which can act as a resource. Meaningful engagement of KUs when working within short timelines does, however, present challenges.

While examples of rapid BPRs that engage KUs exist, the extent to which engagement occurs and how it impacts the review findings are often not reported.29 Preparatory activities that can support KU engagement even in rapid contexts might include establishing relationships with advocacy and patient representative groups and preparation of educational materials and resources that support engagement. For example, EGMs require KU engagement in developing the framework for the map, and these can be developed in advance of the commissioned RR.4

Setting the review question and topic refinement

BPRs have broad, exploratory and open research questions.30 Rapid contexts do not allow the time needed to explore uncertain boundaries in the review question, and it is the breadth of BPR questions that presents one of the greatest challenges in a rapid context.30

In rapid contexts, managing the challenges of broad and potentially ‘fuzzy’ boundaries of the review question will require commissioners, KUs, reviewers and information specialists to consider carefully how the question might be limited so that the resulting search yield and included studies are manageable in the timeframes available. This will be informed by an understanding of the scale of the evidence, gleaned from preliminary searches, and by the decision needs of the commissioners and KUs. Getting a sense of the scale of the evidence base from preliminary searches can be aided by searching databases of existing reviews (Epistemonikos,31 Cochrane Library32 and Campbell Collaboration33). The PredicTER tool15 can assist in estimating the time that the review may take and inform decisions regarding necessary rapid approaches and team planning.

Frameworks such as PICO (population, intervention, comparator, outcome) structure the review research questions, form the foundation for the search strategy and define the inclusion and exclusion criteria. BPRs may use alternative frameworks, such as population, concept and context (PCC), depending on the review question.34 Alternative frameworks (see Table 3) might also be useful in offering further dimensions that could be used to limit the breadth of the review question.35 36

Table 3

Question formulation frameworks

Rapid BPR approaches might also include using additional limits that are clarified during the process of refining the review questions. These may include limits on date of publication, geography, publication type, study design or setting.37 For example, in a rapid scoping review on medical malpractice, the scope was narrowed to the last 10 years and only included evidence published in English.38 Narrowing the breadth of the question may limit the generalisability of the findings. It should be noted that overly stringent inclusion criteria can result in a failure to consider equity, different socioeconomic groups or disadvantaged populations.39 These potential limitations need to be discussed and agreed with KUs and commissioners.

Consistent with guidance for all RRs,11 the preparation of a protocol is vital and can save time by ensuring agreement on the parameters of the review and the methods to be used among the review team and commissioners. To increase transparency and consistency, the protocol can be made publicly available and assigned a DOI using free repositories such as the Open Science Framework[1] or Harvard Dataverse,[2] which later offer the opportunity to add and share datasets and files from the finished review. A record should be kept of the ways in which the methods evolve, whether in response to larger than expected search results or as the question being addressed is refined during the BPR.

Identifying the evidence

Time frames can be reduced by limiting the number of citations that need screening, and this is particularly relevant for BPRs, where broad questions often lead to large search yields. The involvement of an information specialist in the scoping process and in the preparation of the review protocol will foster informed decision-making about the potential methodological approaches that might be adopted to reduce the number of citations that need screening.40

Identifying evidence may range from a full, exhaustive search (which relies on sufficient efficiencies being made elsewhere in the process) to a much more focused search, perhaps employing a limited number of sources and/or abbreviated search strategies. Supplementary search methods beyond database searching tend to be discretionary rather than a mandatory requirement.41 The number of databases searched will be determined by the time and resources available. For a multidisciplinary topic, the subject coverage of the databases selected is also important, to ensure each discipline is covered; this may mean several databases need to be searched for a rapid BPR of a multidisciplinary nature. The SRA-Polyglot tool can accelerate the process of converting a PubMed or Ovid MEDLINE search into the correct syntax to be run in other databases.42 43

Employing search limits may seem an easy approach to focusing the literature search. However, such limits should be negotiated with the commissioner and wider review team and should be justifiable with clear methodological or clinical rationale, rather than arbitrary decisions. For example, when limiting by date, is there a previous review from which searches could be updated (eg, run from the last date searched in the previous review)? Is there a key policy change whereby literature from before this date would not be applicable to the current population? Was the intervention made available at a particular time, hence reducing the need to screen references published prior to that date?

Geographical limits may be appropriate, and search filters exist for geographical areas, groupings of countries and multiple individual countries.44 However, many search filters are not validated, and caution is therefore required when applying unvalidated search filters outside the context in which they were developed.45 A recent example is a rapid scoping review focused on the impact of interprofessional teams on panel size in primary care, which, due to the extensive scope (>15 000 citations to screen), focused only on high-income countries as per the World Bank criteria.46 47

If applying limitations, a tiered approach to examining the evidence may be appropriate, whereby those studies excluded by the limits are kept, to be examined for any gaps not covered by the initial search results. Typically, in an SR, searching is completed before screening commences. By applying a tiered approach, searching and screening tasks can be run concurrently, which can have a positive impact on time efficiencies in a rapid BPR. Tiered approaches to searching and study selection can be particularly helpful if dealing with a large or diverse evidence base. Liaison with the members of the team responsible for study selection is vital before the searching commences, as the study selection approach may have an impact on the management of search results. This may involve grouping of search results into the relevant tiers. For example, tier one may be any SR evidence. Once tier one has been screened, tier two might be any published reviews since the review searches took place and topics not covered by the review evidence base. Tiered approaches can also work in other contexts, such as initially prioritising populations and settings.

Study selection

Study selection, or screening, is one of the most time-consuming stages of the review process, and the time needed will be largely determined by the size of the search yield.15 Screening the results of the searches is a two-stage process: an initial title and abstract screen, followed by retrieval of the studies deemed potentially eligible, and then full-text screening. Conventionally, screening is undertaken by two reviewers working independently at each stage, with differences in screening decisions resolved by discussion. Large search yields in BPRs mean that the time needed to screen search results is more than double that in a conventional SR of interventions (89 days vs 31 days).15 In addition, the broad scope of BPRs may make screening decisions more difficult, with greater discrepancy between reviewers as a result of less clear question parameters. Screening errors occur least often in reviews with the most narrowly defined research questions.48 The process of screening is therefore often not only time consuming but challenging, as the ‘fuzzy’ boundaries of the review question often require frequent discussion and refinement during the screening process.

We discussed in the previous section methods of reducing the search yield, which will reduce the time needed for screening. The search yield may nonetheless remain high, and so the process of screening itself may need to be undertaken more rapidly. Commonly adopted rapid approaches include single-reviewer screening at title and abstract and/or at full-text stage, saving considerable time and halving the person-days needed.13 Single screening, however, increases the risk that studies will be falsely excluded.49 50 In BPRs, single screening may also introduce bias, as screening against broad review questions frequently requires consultation and exploration between the team, KUs and commissioners. These risks can be mitigated by having two reviewers screen a proportion of records independently (eg, 20%). This allows differences to be discussed and, if agreement is sufficient, single screening to proceed. Another option is to increase the team size, with a larger number of reviewers working simultaneously to conduct the screening and data extraction phases.51 In rapid BPRs where single screening is adopted, we recommend frequent discussion and review of screening decisions by the team, with opportunities to discuss and record decisions and insights.
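To illustrate how such an agreement check might be operationalised, the sketch below computes Cohen's kappa on a dual-screened subset. It is a minimal illustration in Python; the decisions and the kappa threshold of 0.8 are hypothetical assumptions rather than prescribed values.

```python
# Minimal agreement check on a dual-screened subset: Cohen's kappa for two
# reviewers' include/exclude decisions. Decisions and the 0.8 threshold are
# hypothetical and would be agreed by the team in advance.

def cohens_kappa(decisions_a: list[bool], decisions_b: list[bool]) -> float:
    """Cohen's kappa for two reviewers' binary screening decisions."""
    n = len(decisions_a)
    observed = sum(a == b for a, b in zip(decisions_a, decisions_b)) / n
    p_a = sum(decisions_a) / n  # reviewer A's inclusion rate
    p_b = sum(decisions_b) / n  # reviewer B's inclusion rate
    chance = p_a * p_b + (1 - p_a) * (1 - p_b)  # expected agreement by chance
    return (observed - chance) / (1 - chance)

reviewer_a = [True, False, False, True, False, False, True, False]
reviewer_b = [True, False, True, True, False, False, True, False]

kappa = cohens_kappa(reviewer_a, reviewer_b)
if kappa >= 0.8:  # illustrative threshold
    print(f"kappa={kappa:.2f}: agreement sufficient, proceed with single screening")
else:
    print(f"kappa={kappa:.2f}: continue dual screening and discuss discrepancies")
```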

The risk of missing studies in single-reviewer screening can, in SRs of effectiveness, alter the overall estimates of the effect.20 These risks may be considered differently in BPRs, where data is not statistically or qualitatively synthesised. The impact of missing studies may not greatly change the overall landscape view, and the risk of missing studies might be tolerated by commissioners and KUs to achieve a timely output. Caution should be taken, however, when a BPR precedes an SR, and the included studies are drawn from the BPR. In these cases, the risks of missing studies will have considerable implications for the reliability of the SR. In such cases, an updated literature search is also recommended. Once again, the rapid approaches need to be tailored to the review objectives, and the methods and their limitations clearly conveyed.

Technologies aimed at streamlining the screening process have been developed, and the field is evolving rapidly, with new tools emerging that show promising results for supporting screening with greater reliability.52 Active learning is a semi-automated process that can identify potentially eligible studies more rapidly than conventional screening methods for the same investment of workload.53 Active learning enables the re-ordering of references, prioritising likely relevant research. This can be used, for example, to support a dual-screening approach towards the beginning of the project, increasing the chance of two reviewers evaluating likely relevant references, before switching to single screening at an agreed timeframe, reducing the time spent reviewing irrelevant references. Machine-learning-based tools such as Rayyan,54 Abstrackr,55 DistillerSR56 and RobotAnalyst57 provide this service; however, the baseline risk of missing relevant studies remains, and there is no clear guidance on when to switch between dual and single screening. Where tools are used, we recommend that validated tools are selected and that the validation references and metrics are reported.
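The sketch below illustrates the general logic of such an active-learning loop, using a simple TF-IDF representation and logistic regression. It is a generic illustration of the technique, not the algorithm implemented by any of the named tools, and the toy records and seed decisions are hypothetical.

```python
# Generic active-learning loop for screening prioritisation: a classifier is
# retrained on the human decisions made so far and the unscreened records are
# re-ranked so that the most likely relevant ones are screened first.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy title/abstract records (hypothetical text standing in for a search yield).
abstracts = [
    "randomised trial of exercise therapy for chronic low back pain",
    "qualitative study of nurses' experiences during the pandemic",
    "cohort study of exercise and pain outcomes in older adults",
    "economic evaluation of a housing policy intervention",
    "pilot trial of yoga for back pain in primary care",
    "survey of dietary habits in adolescents",
]
labels = {0: True, 1: False, 3: False}  # record index -> human include/exclude decision

vectoriser = TfidfVectorizer(stop_words="english")
X = vectoriser.fit_transform(abstracts)

def rank_unscreened() -> list[int]:
    """Retrain on current decisions and rank unscreened records, most likely relevant first."""
    screened = sorted(labels)
    unscreened = [i for i in range(len(abstracts)) if i not in labels]
    model = LogisticRegression(max_iter=1000).fit(
        X[screened], [labels[i] for i in screened])
    scores = model.predict_proba(X[unscreened])[:, 1]  # P(relevant)
    return [unscreened[i] for i in np.argsort(scores)[::-1]]

# Screen the top-ranked record, add the decision to `labels`, and repeat;
# teams might dual-screen the early batches and switch to single screening later.
print(rank_unscreened())
```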

A further recent development is the use of statistical algorithms to estimate the sensitivity of the screening process, that is, the likelihood that all relevant references have been identified,58 59 and this approach is being integrated into commercially available tools.60 It would enable reviewers to stop screening, confident that all relevant studies had been identified. A reduction in screening burden of up to 40% has been reported within one tool,60 but larger evaluation datasets and fair algorithm comparisons with standardised evaluations are likely to improve the methods and their adoption into screening tools.61 It should be noted that the implications for BPRs are less well understood. The algorithms look for similarity, prioritising the studies most like those already deemed relevant to the review question. BPRs are often used to start building the ‘evidence architecture’, ie, to create the foundations guiding subsequent research, and are therefore often exploratory. For example, in a scoping review seeking to explore how ‘good mental health’62 is operationalised in the literature, a tool that prioritises the studies most similar to those already included may yield a body of studies that does not reflect the spectrum of ways the term is actually operationalised.
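The following is a simplified illustration of the underlying idea: a random audit sample of the unscreened pool is used to estimate how many relevant records remain, and hence the sensitivity (recall) achieved so far. The cited tools use more sophisticated, validated algorithms; all numbers here are hypothetical.

```python
# Simplified recall (sensitivity) estimate after prioritised screening:
# a random audit sample of the unscreened pool estimates the residual
# prevalence of relevant records. All numbers are hypothetical.

found = 120        # relevant records identified so far
screened = 4000    # records screened so far
total = 10000      # total records retrieved by the searches

audit_sample = 300   # random sample drawn from the unscreened pool
audit_relevant = 2   # relevant records found within that audit sample

remaining = total - screened
est_missed = audit_relevant / audit_sample * remaining  # ~40 records
est_recall = found / (found + est_missed)

print(f"Estimated recall so far: {est_recall:.1%}")  # 75.0%
# Screening might stop once the estimate clears a pre-agreed target (eg, 95%),
# a threshold that should be negotiated with commissioners and KUs.
```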

Other tools can also support the review process offering features such as team management and conflict resolution for disagreements (EPPI-Reviewer,63 Covidence64). We recommend that RR teams develop skills in using technologies prior to applying them in RR contexts.

It is important to remember that the process of screening can be very valuable in familiarising the review team with the literature in the field. Team participation in the decision-making processes during screening in BPRs can form an important component of getting a ‘feel’ for the topic, which needs to be balanced against the short timeframes to complete the review.

Data extraction/coding

Data extraction, or coding (the term used to describe the process of data extraction in EGMs), is considered another time-intensive step in the conduct of BPRs, though potentially less time consuming than in a conventional SR. The complexity and amount of data extracted can vary considerably in BPRs, from time-intensive extraction of text-based data (such as how a concept is used) to ‘superficial’ data (such as the country in which the study was undertaken or the year of publication). Most reviews will have a combination of both, but where time is limited and/or the number of included studies is substantial, limiting the amount of in-depth data extraction will speed up the process. Limiting data extraction may require trading the generalisability and usefulness of the review to wider audiences (beyond those commissioning the review) against meeting review deadlines. Again, agreeing on the protocol, piloting a data extraction form with commissioner feedback and including team members with review expertise are particularly important in managing these trade-offs. In rapid contexts, the focus should remain on capturing data relevant to the objectives of the review and the context in which they will be applied, to minimise the data that need to be coded. One pragmatic approach to achieving tight timeframes is to limit extraction to information available in the study abstract only.65 This may not be advisable in all contexts, as data may not always be fully reported in the abstract; these decisions therefore need to be considered carefully.

As in all types of evidence synthesis, we recommend that two reviewers conduct data extraction independently, to reduce the risk of error or bias and to ensure consistency in the interpretation of the coding tool or data extraction form. Data extraction completed well, with limited error or confusion, will make data analysis easier and less time consuming. Single-reviewer extraction with verification of only a proportion of the data extracted or coded is an option where time does not allow full dual workflows. As synthesis of outcomes is not the purpose of these types of reviews, errors arising from single-reviewer data extraction may not have the same impact as in an effectiveness SR. These decisions and their implications again need to be discussed with commissioners and reported in the review.

There are practical measures that can be taken to gain time efficiencies without increasing the risks to the rigour of the review. These include the use of dual monitors during data extraction/coding66 and providing detailed instructions in an explanatory document on how to collect the data, for example, agreeing how a country will be reported (America, USA, United States). Software such as Covidence,64 EPPI-Reviewer63 or Colandr67 allows the development of extraction tables in which pre-filled responses can be created. Anticipating how the data will be presented may also make data extraction quicker; for example, if a categorisation is going to be used (eg, global regions), data can be extracted directly into the aligned category. A data extraction form or coding tool (the digital form created in a program such as EPPI-Reviewer63 or Covidence64) should be piloted in both a standard and an RR context. Unexpected differences between reviewers’ interpretations or presentation of data can occur, and standardisation can save time.
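As a small illustration of such pre-agreed coding, the sketch below maps free-text country entries onto an agreed label and a predefined global region, so that data land directly in the category used for analysis; the lookup values are hypothetical examples of what a team's extraction instructions might specify.

```python
# Sketch of pre-agreed coded extraction: free-text country entries are mapped
# onto the agreed reporting label and a predefined global region. The lookup
# values are hypothetical and would be set out in the extraction instructions.

COUNTRY_ALIASES = {
    "america": "USA", "united states": "USA", "us": "USA", "usa": "USA",
    "uk": "UK", "united kingdom": "UK", "england": "UK",
}
REGIONS = {"USA": "North America", "UK": "Europe"}

def code_country(raw: str) -> tuple[str, str]:
    """Map a reviewer's free-text entry to the agreed label and its region."""
    country = COUNTRY_ALIASES.get(raw.strip().lower(), raw.strip())
    return country, REGIONS.get(country, "Unclassified")

print(code_country("America"))         # ('USA', 'North America')
print(code_country("United Kingdom"))  # ('UK', 'Europe')
```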

Developments in text mining, automation and machine learning are likely to reduce the time taken in screening studies and in data extraction.68 69 Though limited, evidence comparing rapid approaches in mapping reviews (screening titles and abstracts only, semi-automation of data extraction) found that, although the number of identified studies differed, the overall conclusions and identified gaps were concordant. The time saved (65 person-hours) was also substantial.70

Generative artificial intelligence (AI) tools based on large language models (LLMs) may offer approaches to supporting data extraction that can compensate for some of the additional risks that single-reviewer data extraction might introduce. There may be elements of the data extraction process that can be assisted more readily with AI. For example, LLMs have been used to semi-automate the process of data extraction, with sufficient accuracy to potentially act as a second reviewer.71 Where used, the AI system, version, dates and details of how it has been used should be reported. For further reference, an earlier paper in this series72 73 provides additional information about automation methods during data extraction.
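A minimal sketch of this ‘LLM as second reviewer’ pattern is shown below: the model's structured extraction is compared with the human reviewer's entry and disagreements are flagged for adjudication. The call_llm function is a hypothetical stand-in for whichever LLM service a team uses (a canned reply is returned here so the logic runs as written), and the fields and abstract are illustrative.

```python
# Sketch of LLM-assisted extraction as a 'second reviewer': the model's
# structured output is compared with the human entry and disagreements are
# flagged for adjudication. `call_llm` is a hypothetical stand-in for a real
# LLM client; the canned reply lets the comparison logic run as written.
import json

PROMPT = """Extract from the abstract below, replying as JSON with keys
'country', 'population' and 'study_design'; use null where not reported.

Abstract: {abstract}"""

def call_llm(prompt: str) -> str:
    # Hypothetical: replace with a call to the team's chosen LLM service,
    # reporting the system, version and dates used, as recommended above.
    return '{"country": "Canada", "population": "adults", "study_design": "cohort"}'

def llm_second_review(abstract: str, human_entry: dict) -> dict:
    """Return the fields where the model and the human reviewer disagree."""
    model_entry = json.loads(call_llm(PROMPT.format(abstract=abstract)))
    return {
        field: {"human": human_entry.get(field), "model": model_entry.get(field)}
        for field in human_entry
        if human_entry.get(field) != model_entry.get(field)
    }

human = {"country": "Canada", "population": "adults", "study_design": "RCT"}
print(llm_second_review("...abstract text...", human))
# {'study_design': {'human': 'RCT', 'model': 'cohort'}} -> flag for checking
```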

Data analysis and study quality

BPRs differ from other review types in their approach to the analysis of the extracted data. They are descriptive in purpose, providing a map of the available evidence rather than synthesising results into final estimates of effect or synthesised findings of qualitative data. This often involves frequency counts and the description of emerging patterns in the data, and may include organising qualitative data into categories using either a predetermined coding structure or framework or one that emerges from the data.74
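The descriptive analysis typical of a BPR can often be produced with simple tabulation. The sketch below, using hypothetical coded records, shows frequency counts and a cross-tabulation of the kind that underpins an evidence map.

```python
# Descriptive tabulation of coded study characteristics (hypothetical records):
# frequency counts and a design-by-setting cross-tabulation of the kind that
# underpins an evidence map.
import pandas as pd

studies = pd.DataFrame({
    "design":  ["RCT", "cohort", "qualitative", "RCT", "cohort", "cross-sectional"],
    "setting": ["hospital", "community", "community", "hospital", "hospital", "community"],
})

print(studies["design"].value_counts())                    # counts per study design
print(pd.crosstab(studies["design"], studies["setting"]))  # design x setting matrix
```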

BPRs also differ from other evidence synthesis approaches in that they may not conduct quality appraisal or risk-of-bias assessment. Where an assessment of study quality is incorporated, its role is not to explore the effect of quality on synthesised findings but to describe the current body of evidence. Given the large number of included studies in BPRs, we would recommend risk-of-bias or quality assessment be included only where there is a sound rationale for doing so. An approach often adopted in BPRs is to provide an overview of the study designs used to investigate a particular area, thereby highlighting the types of knowledge gaps that exist.75 We would recommend that the classification of study designs be undertaken by two reviewers working independently or, if time does not allow full dual working, by two reviewers until there is good agreement. Where undertaken, we would also recommend adequate training, pilot testing and documentation of decision rules. Classifying study designs may itself be time consuming and should, again, be undertaken only if pertinent to the research objectives and commissioner needs.

Reporting, data visualisation and presentation

The reporting stage of BPRs is often accompanied by visualisations (ie, graphical representations of different pieces of information or data) that support the summary of the extracted data. Clear dialogue with commissioners and KUs should inform how the data is collated, described and visualised. Visualisation is a particularly useful tool for producing a more accessible and readable report. Data might be presented in pie charts, bubble plots, tables, graphs, heat maps and word clouds.76 The review team should include those with expertise in reporting BPRs and data visualisation skills, considering accessibility for colour-blind readers. A growing array of software resources may be useful in both rapid and standard BPR approaches, and the Systematic Review Toolbox77 provides an online, regularly updated catalogue of tools that can support the review process.
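As an illustration of one such visualisation, the sketch below draws a simple bubble plot in which bubble size encodes the number of studies at each intervention-by-outcome intersection; the data are hypothetical, and a colour-blind-safe palette and alternative text would be considered for accessibility.

```python
# Sketch of a bubble-plot evidence map: bubble size encodes the number of
# studies at each intervention x outcome intersection. Data are hypothetical.
import matplotlib.pyplot as plt

interventions = ["Exercise", "Education", "Medication"]
outcomes = ["Pain", "Function", "Quality of life"]
counts = [[12, 5, 0],   # Exercise
          [3, 8, 2],    # Education
          [20, 1, 4]]   # Medication

fig, ax = plt.subplots()
for i, _ in enumerate(interventions):
    for j, _ in enumerate(outcomes):
        if counts[i][j]:
            ax.scatter(j, i, s=counts[i][j] * 60, color="#377eb8", alpha=0.5)
            ax.annotate(str(counts[i][j]), (j, i), ha="center", va="center")
ax.set_xticks(range(len(outcomes)), labels=outcomes)
ax.set_yticks(range(len(interventions)), labels=interventions)
ax.set_xlim(-0.5, len(outcomes) - 0.5)
ax.set_ylim(-0.5, len(interventions) - 0.5)
ax.set_title("Evidence map: number of studies per intervention and outcome")
plt.savefig("evidence_map.png", dpi=150, bbox_inches="tight")
```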

In a rapid context, commissioners may prefer key findings to be presented in very concise and readable summaries that follow existing report formats.78 The methods used to conduct the review should also be transparently reported, including the potential risks to the generalisability and rigour of the review findings. One approach is to place the description of methods at the back of the report, with key findings highlighted in summary tables.79 In preparation for publication and wider dissemination, PRISMA-ScR12 is currently the recommended tool to inform reporting of the review.

Conclusion

BPRs are increasingly being undertaken to inform decision-making and guide future research, often forming the initial stage in generating the evidence architecture that informs policy and practice. While the recommendations for rapid approaches in SRs can inform rapid BPRs, there are features of these types of reviews where the implications of rapid approaches differ. BPRs address broad research questions, resulting in large search yields and a large proportion of the time budget dedicated to screening. Working closely with commissioners and KUs to find acceptable limits to the scope of the research question is therefore particularly important. The context in which the findings of the review will be applied can guide the limitations imposed. Methods to accelerate the process of screening often have greater priority in rapid BPRs, as search yields are generally larger than in reviews with a narrower focus. The methods of analysis differ, with descriptive and narrative accounts of the evidence, often accompanied by visuals to communicate findings. In rapid contexts, to support timely decision-making, the number of research questions will be narrowed, limiting the data extraction and analysis of findings. Reporting should always include a description of the rapid methods used and their implications for the review findings. Table 1 summarises the recommendations for undertaking rapid BPRs.

While rapid approaches have a place, they should be used with caution. The trade-offs between time saved and the risk of error and bias will not be the same in all BPRs. Approaches should be tailored to the review question and topic, KU and commissioner requirements and the resources available. While there is an increasing body of research to guide understanding of how RR processes might influence the risks and benefits to evidence-informed decision-making,22 few approaches have been tested in BPRs. Furthermore, innovations in automation and semi-automation have been developed primarily for SRs of effectiveness. There is a need to evaluate tools to support BPRs, where the objective is to understand the breadth of a topic. Methods of including KUs in rapid approaches also need to be explored and better documented so that shared learning can improve practice.

Where methodological short cuts compromise comprehensiveness, rigour and objectivity, transparency can be maintained through detailed reporting of the reasons for a rapid approach and the nature of the rapid methods adopted. As the volume of evidence increases and the demand for timely responses to decision-making timeframes grows, the requirement to produce rapid BPRs will grow with it. The challenge is to develop a rigorous understanding of how we can gain time efficiencies while still producing reliable and trustworthy outputs. Transparency in methods is a core attribute of evidence synthesis and remains so in both rapid and non-rapid BPR processes.

Data availability statement

Data are available upon reasonable request.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

Acknowledgments

We would like to thank Barbara Nussbaumer-Streit, Gerald Gartlehner and Katie Thomson for their comments on a draft of this paper.

Footnotes

  • X @FionaBell19, @Daniellep89

  • Correction notice This article has been corrected since it was published. The title has been corrected.

  • Contributors FC conceived the idea for the paper and managed the development of the manuscript. FC, AS, DP, LS and HK contributed to the text, and all of the authors commented on the full draft. FC acts as guarantor and is responsible for overall content.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests Andrea Tricco is funded by the Tier 2 Canada Research Chair in Knowledge Synthesis.

  • Patient and public involvement Patients and/or the public were not involved in the design, conduct, reporting or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • [1] https://osf.io/ (last accessed 03/09/2024)

  • [2] https://dataverse.harvard.edu/ (last accessed 03/09/2024)