Review Chapter 5 Searching the Evidence, Chapter 6 Evidence Appraisal: Research, and Chapter 7 Evidence Appraisal: Nonresearch in the Johns Hopkins Evidence-Based Practice for Nurses and Healthcare Professionals guidelines


Review Chapter 5 (Searching the Evidence), Chapter 6 (Evidence Appraisal: Research), and Chapter 7 (Evidence Appraisal: Nonresearch) in the Johns Hopkins Evidence-Based Practice for Nurses and Healthcare Professionals guidelines. Use these resources to appraise the evidence gathered in your literature search and to construct an evidence table, using Appendix D (Hierarchy of Evidence Guide), Appendix E (Research Evidence Appraisal Tool), and Appendix F (Nonresearch Evidence Appraisal Tool). A template for the Summary of Evidence Table is found in Appendix G (Individual Evidence Summary Tool). Use it to guide development of your evidence table with the following columns:

  • Author and date (APA formatted)
  • Title of Article
  • Journal
  • Population, size (n)
  • Setting
  • Type of Evidence (e.g., RCT, mixed methods, quasi-experimental, qualitative, systematic review, practice guideline)
  • Description of Intervention
  • Outcome measures
  • Findings that Help Answer the EBP Question
  • Limitations
  • Evidence Level and Quality
  • Implications for Proposed Project

The evidence table will be included as an Appendix in your final proposal.

Include an APA formatted reference list of articles and resources used in developing the evidence table. The reference list will be integrated into the reference list for revised drafts and the final version of the proposal.

Source: Dang, Deborah, et al. Johns Hopkins Evidence-Based Practice for Nurses and Healthcare Professionals, Fourth Edition, Sigma Theta Tau International, 2021. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/ucf/detail.action?docID=6677828. Copyright © 2021 Sigma Theta Tau International. All rights reserved.

5 Searching for Evidence

Information literacy is a set of abilities requiring individuals to “recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information” (American Library Association, 1989, para. 3). Developing information literacy skills requires knowledge of the nursing literature and an aptitude for locating and retrieving it. “Given the consistent need for current information in health care, frequently updated databases that hold the latest studies reported in journals are the best choices for finding relevant evidence to answer compelling clinical questions” (Fineout-Overholt et al., 2019, p. 60). Studies have shown that positive changes in a nurse’s information literacy skills and increased confidence in using those skills have a direct impact on appreciation and application of research, are vital for effective lifelong learning, and are a prerequisite to evidence-based practice (McCulley & Jones, 2014).

EBP teams can collect evidence from a variety of sources, including the web and proprietary databases. The information explosion has made it difficult for healthcare workers, researchers, educators, administrators, and policymakers to process all the relevant literature available to them every day. Evidence-based clinical resources, however, have made searching for medical information much easier and faster than in years past.
This chapter:
■ Describes key information formats
■ Identifies steps to find evidence to answer EBP questions
■ Suggests information and evidence resources
■ Provides tips for search strategies
■ Suggests methods for screening and evaluating search results

Key Information Formats

Nursing, and healthcare in general, is awash with research data and resources in support of evidence-based nursing, which itself is continually evolving (Johnson, 2015). Evidence-based literature comes from many sources, and healthcare professionals need to keep them all in mind. The literature search is a vital component of the EBP process. If practitioners search only a single resource, database, or journal, they will likely miss important evidence. Likewise, target searching for a specific intervention or outcome may exclude important alternative perspectives. Through a thorough and unbiased search, healthcare professionals expand their experience in locating evidence important to the care they deliver.

Primary evidence is data generally collected from direct patient or subject contact, including hospital data and clinical trials. This evidence exists in peer-reviewed research journals, conference reports, abstracts, and monographs, as well as summaries from data sets such as the Centers for Medicare & Medicaid Services (CMS) Minimum Data Set. Databases that include primary source evidence include PubMed, Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Cochrane Library, library catalogs, other bibliographic literature databases, and institutional repositories. For hospital administrators, the Healthcare Cost and Utilization Project (HCUP) is a source for health statistics and information on hospital inpatient and emergency department use.

Evidence summaries include systematic reviews, integrative reviews, meta-analyses, meta-syntheses, and evidence syntheses. These literature summaries identify, select, and critically appraise relevant research and use specific analyses to summarize the results of the studies. Evidence-based summaries also reside in library catalogs, online book collections, and online resources such as PubMed, CINAHL, the Cochrane Library, and JBI (formerly known as the Joanna Briggs Institute). For hospital administrators and case managers, Health Business Full Text is a source for quality improvement and financial information.

Translation literature refers to evidence-based research findings that, after much investigation and analysis, professional organizations or multidisciplinary panels of experts translate for use in clinical practice settings. Translation literature formats include practice guidelines, protocols, standards, critical pathways, clinical innovations, and evidence-based care centers, and they are available through peer-reviewed journals and bibliographic databases. JBI, CINAHL, and PubMed also provide this type of literature using Best Practice information sheets.

The Answerable Question

After identifying a practice problem and converting it into an answerable EBP question, the search for evidence begins with the following steps:

1. Identify the searchable keywords contained in the EBP question and list them on the Question Development Tool (see Appendix B). Include any synonyms or related terms, and determine preliminary article inclusion criteria.
2.
Identify the types of information needed to best answer the question, and list the sources where such information can be found. What database(s) will provide the best information to answer the question?
3. Develop a search strategy.
4. Evaluate search results for relevance to the EBP question.
5. Revise the search strategy as needed.
6. Record the search strategy specifics (terms used, limits placed, years searched) on the Question Development Tool and save the results.
7. Screen results to systematically include or exclude relevant literature.

EBP Search Examples

The first step in finding evidence is selecting appropriate search terms from the answerable EBP question. It may be useful to have a sense of the inclusion and exclusion criteria, which will guide the study selection. The Question Development Tool (Appendix B) facilitates this process by directing the team to identify the practice problem and, using the PICO components, to develop a searchable question.

For example, consider the following background question: What are best practices to reduce the rates of medical device-related pressure injuries in hospitalized adult patients? Some search terms to use may be hospital, pressure injuries, and devices. Table 5.1 illustrates how to use each part of an answerable PICO question to create an overall search strategy. Because the question addresses best practices that affect the outcomes, outcome search terms are not included in the search to avoid bias.
Table 5.1 PICO Example: Best Practices to Prevent Medical Device-Related Pressure Injuries

P: Adult hospitalized patients
   Initial search terms: Inpatient* OR hospital*
   Related search terms: Ward OR unit OR floor OR nurs*
I: Best practices to prevent medical-device-related pressure injuries
   Initial search terms: Pressure injur* OR HAPI OR pressure ulcer* OR decubitis OR bedsore OR Braden scale
   Related search terms: Device OR devices OR tube OR tubes OR tubing OR catheter OR catheters OR nasal cannula* OR restraint* OR tape
C: n/a
O: Rates of medical device-related pressure injuries (outcome terms are not searched, to avoid bias)

Table 5.2 displays another PICO question with each element mapped to potential search terms to help answer the EBP foreground question: In patients with chest tubes, does petroleum impregnated gauze reduce catheter dwell times as compared to a dry gauze dressing? In this case, the searcher may want to consider brand or product names to include in the search terms.

Table 5.2 PICO Example: Chest Tube Dressing Comparison

P: Patients with chest tubes
   Initial search terms: Chest tube* OR chest drain*
   Related search terms: Pleural catheter* OR pleura shunt* OR intercostal drain OR (vendor-specific catheter names)
I: Petroleum impregnated gauze
   Initial search terms: Petroleum (vendor-specific name) gauze
   Related search terms: Occlusive gauze
C: Dry gauze dressing
   Initial search terms: Gauze dressing
   Related search terms: Bandage*
O: Catheter dwell times
   Initial search terms: Dwell time
   Related search terms: Hour* OR day*

Teams need to consider the full context surrounding the problem when thinking of search terms. As an example, intervention is a term used frequently in nursing; it encompasses the full range of activities a nurse undertakes in the care of patients. Searching the literature for nursing interventions, however, is far too general and requires a focus on specific interventions. Additionally, directional terms related to outcomes may bias the search. For example, including the term “reduce” will exclude potentially valuable information that may show alternative findings for the intervention in question.

Selecting Information and Evidence Resources

After selecting search terms, EBP teams can identify quality databases containing information on the topic. This section briefly reviews some of the unique features of core EBP databases in nursing and medicine.

CINAHL

CINAHL covers nursing, biomedicine, alternative or complementary medicine, and 17 allied health disciplines. CINAHL indexes more than 5,400 journals, contains more than 6 million records dating back to 1976, and has complete coverage of English-language nursing journals and publications from the National League for Nursing and the American Nurses Association (ANA). In addition, CINAHL contains healthcare books, nursing dissertations, selected conference proceedings, standards of practice, and book chapters. Full-text material within CINAHL includes more than 70 journals in addition to legal cases, clinical innovations, critical paths, drug records, research instruments, and clinical trials.
CINAHL also contains a controlled vocabulary, CINAHL Subject Headings, which allows for more precise and accurate retrieval. Selected terms are searched using “MH” for Exact Subject Heading or “MM” for Exact Major Subject Heading. Additionally, CINAHL allows searching using detailed filters to narrow results by publication type, age, gender, and language. The example PICO on pressure injuries could be searched using the CINAHL Headings. It may look like this:

(MH “Pressure Ulcer”) AND (MH “Hospitalization” OR MH “Inpatients”)

MEDLINE and PubMed

MEDLINE and PubMed are often used interchangeably; however, teams need to keep in mind that they are not the same. PubMed is a free platform available through the National Library of Medicine’s interface that searches not only MEDLINE but also articles not yet indexed in MEDLINE and articles that are included in PubMed Central.

MEDLINE, a database that contains over 30 million references (as of May 2020), includes journal articles in the life sciences with a concentration on biomedical research. One of MEDLINE’s most notable features is an extensive controlled vocabulary: Medical Subject Headings (MeSH). An indexer, who is a specialist in a biomedical field, reviews each record in MEDLINE. The indexer assigns 5–15 appropriate MeSH terms to every record, which allows for precise searching and discovery. MeSH terms can help the searcher account for ambiguity and variations in spelling and language.
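MeSH-tagged strategies like these can also be submitted programmatically through NCBI's public E-utilities interface. A minimal sketch (Python standard library only) that builds a request URL for the `esearch` endpoint; it only constructs the URL and does not perform the network call, and the helper name is illustrative:

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (public interface for searching PubMed).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(query: str, max_results: int = 20) -> str:
    """Build an esearch URL for a PubMed Boolean query string."""
    params = {
        "db": "pubmed",         # search the PubMed database
        "term": query,          # Boolean query; MeSH field tags allowed
        "retmax": max_results,  # number of record IDs to return
        "retmode": "json",      # JSON rather than XML response
    }
    return f"{ESEARCH}?{urlencode(params)}"

# The chapter's example strategy for device-related pressure injuries:
query = '("Pressure Ulcer"[MeSH]) AND ("Hospitalization"[MeSH] OR "Inpatients"[MeSH])'
url = pubmed_search_url(query)
```

Fetching the resulting URL would return the matching PubMed IDs, which can then be retrieved in full with the companion `efetch` endpoint.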
A common saying in the library world is: “Garbage in, garbage out.” MeSH terms can eliminate “garbage,” or irrelevant articles. Searchers should be aware that not every record in PubMed receives MeSH indexing, so best practice involves including both MeSH terms and additional keywords for optimal evidence discovery.

To search in MEDLINE through PubMed using the PICO example for medical device-caused pressure injuries in hospitalized patients, one would use the MeSH term “Pressure Ulcer”[MeSH] as the basis for the first concept. For the second concept, the searcher could select the MeSH terms “Hospitalization”[MeSH] and “Inpatients”[MeSH] to describe the concept of hospitalized patients, or alternatively, the MeSH term “Equipment and Supplies”[MeSH]. There is no MeSH term for medical device, but “equipment and supplies” should capture some of that literature. A PubMed search strategy utilizing MeSH terms would look like this:

(“Pressure Ulcer”[MeSH]) AND (“Hospitalization”[MeSH] OR “Inpatients”[MeSH])

PubMed also contains Clinical Queries, a feature that has prebuilt evidence-based filters. Clinical Queries uses these filters to find relevant information on topics relating to one of five clinical study categories: therapy, diagnosis, etiology, prognosis, and clinical prediction guides. Clinical Queries also includes a search filter for systematic reviews. This filter adds predetermined limits to narrow the results to systematic reviews. To use the Clinical Queries feature, access it through the PubMed homepage and enter the search as normal. Clinical Queries are beneficial only if a search has already been built.

The Cochrane Library

The Cochrane Library is a collection of databases that includes the Cochrane Database of Systematic Reviews. Internationally recognized as the gold standard in evidence-based healthcare, Cochrane Reviews investigate the effects of interventions for prevention, treatment, and rehabilitation. Over 7,500 Cochrane Reviews are currently available. Published reviews include an abstract, plain-language summary, summaries of findings, and a detailed account of the review process, analysis, and conclusions. Abstracts of reviews are available free of charge from the Cochrane website; full reviews require a subscription. A medical librarian can identify an organization’s access to this resource.

JBI (formerly known as the Joanna Briggs Institute)

JBI is an international, not-for-profit, membership-based research and development organization. Part of the Faculty of Health Sciences at the University of Adelaide, South Australia, JBI collaborates internationally with over 70 entities. The Institute and its collaborating entities promote and support the synthesis, transfer, and utilization of evidence by identifying feasible, appropriate, meaningful, and effective practices to improve healthcare outcomes globally. JBI includes evidence summaries and Best Practice information sheets produced specially for health professionals using evidence reported in systematic reviews. JBI resources and tools are available only by subscription through Ovid, and a medical librarian can identify an organization’s access to this resource.

Selecting Resources Outside of the Nursing Literature

At times, it may become necessary to expand searches beyond the core nursing literature. Databases of a more general and multidisciplinary nature are presented in this section.
PsycINFO

PsycINFO is a database supported by the American Psychological Association that focuses on research in psychology and the behavioral and social sciences. PsycINFO contains more than 4.8 million records, including journal articles, book chapters, book reviews, editorials, clinical case reports, empirical studies, and literature reviews. The controlled vocabulary for PsycINFO is available through the Thesaurus feature. Users can search Thesaurus terms as major headings and explode terms to search for terms that are related and more specific. Users can limit searches in PsycINFO by record type, methodology, language, and age to allow for a more targeted search.

Health and Psychosocial Instruments (HaPI)

The HaPI database contains over 200,000 records for scholarly journals, books, and technical reports. HaPI, produced by Behavioral Measurement Database Services, provides behavioral measurement instruments for use in the nursing, public health, psychology, social work, communication, sociology, and organizational behavior fields.

Physiotherapy Evidence Database (PEDro)

PEDro is a free resource that contains over 45,000 citations for randomized trials, systematic reviews, and clinical practice guidelines in physiotherapy. The Centre for Evidence-Based Physiotherapy at The George Institute for Global Health produces this database and attempts to provide links to full text, when possible, for each citation in the database.
Creating Search Strategies and Utilizing Free Resources

The following section covers the components used to create a robust search strategy. Databases are unique, so the components to select when creating a search strategy will vary; not every search strategy will utilize every component. The end of this section includes a list of free, reliable resources with descriptions explaining the content available in each resource. Remember to check with a local medical library to see what additional resources may be available.

Key Components to Creating Search Strategies

After identifying appropriate resources to answer the question, the EBP team can begin to create a search strategy. Keep in mind that this strategy may need adjustment for each database. Begin by breaking the question into concepts, selecting keywords and phrases that describe the concepts, and identifying appropriate controlled vocabulary, if available. Use Boolean operators (AND, OR, and NOT) to combine or exclude concepts. Remember to include spelling variations and limits where necessary. Note that the search strategies for PubMed and CINAHL in the previous sections were created by combining the AND and OR Boolean operators.
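The assembly described above (synonyms joined with OR inside parentheses, concepts joined with AND) can be sketched in a few lines of Python; the helper name and term lists are illustrative, not part of the chapter:

```python
def build_query(concepts: list[list[str]]) -> str:
    """Join synonyms with OR inside parentheses, then join concepts with AND."""
    groups = []
    for synonyms in concepts:
        # Quote multi-word phrases so databases treat them as single units.
        terms = [f'"{t}"' if " " in t else t for t in synonyms]
        groups.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(groups)

# Two concepts from the pressure-injury example: problem and population.
query = build_query([
    ["pressure injur*", "pressure ulcer*", "decubitis", "bedsore"],
    ["hospital*", "inpatient*"],
])
# query == '("pressure injur*" OR "pressure ulcer*" OR decubitis OR bedsore) AND (hospital* OR inpatient*)'
```

The same grouping logic applies whether the query is typed into a database interface or generated for an API call; only the field tags and truncation symbols change between databases.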
Use OR to combine keywords and controlled vocabulary related to the same concept (example searches formatted for PubMed):

(“Pressure Ulcer”[MeSH] OR “pressure ulcer*” OR “pressure injur*” OR “decubitis”)

Use AND to combine two separate concepts:

(“Pressure Ulcer”[MeSH] OR “pressure ulcer*” OR “pressure injur*” OR “decubitis”) AND (“Hospitalization”[MeSH] OR “Inpatients”[MeSH] OR “hospital*” OR “inpatient*”)

Review the following steps to build a thorough search strategy:

1. Use a controlled vocabulary when possible. Controlled vocabularies are specific subject headings used to index concepts within a database. They are essential tools because they ensure consistency and reduce ambiguity where the same concept may have different names. Additionally, they often improve the accuracy of keyword searching by reducing irrelevant items in the retrieval list. Some well-known vocabularies are MeSH in MEDLINE (PubMed) and CINAHL Subject Headings in CINAHL.

2. Choose appropriate keywords for the search’s concepts. Look at key articles to see how they use the terms that define the topics. Think of possible alternative spellings, acronyms, and synonyms. Remember that even within the English language, spelling variations in American and British literature exist. In British literature, Ss often replace Zs, and OUs often replace Os. Two examples of this are organisation versus organization and behaviour versus behavior.

3. Use Boolean operators. Boolean operators are AND, OR, and NOT.
Use OR to combine keywords and phrases with controlled vocabulary. Use AND to combine each of the concepts within the search. Use NOT to exclude keywords and phrases; use this operator with discretion to avoid excluding terms that are relevant to the topic. (See Table 5.3.)

4. Use filters where appropriate. Most databases have extensive filters. PubMed and CINAHL allow filtering by age, gender, species, date of publication, and language. The filter for publication types assists in selecting the highest levels of evidence: systematic reviews, meta-analyses, practice guidelines, randomized controlled trials, and controlled clinical trials. Apply filters carefully and with justification, because it is easy to exclude something important when too many filters are applied.

5. Revise the search. As the team moves through Steps 1–4, they are likely to find new terms, alternate spellings of keywords, and related concepts. Revise the search to incorporate these changes. The search is only complete when the team can answer the question. If given a previous search to update, the team may need to make revisions because terms may have changed over time, and new related areas of research may have developed.

Table 5.3 Using Boolean Operators

AND: Use AND to link ideas and concepts where you want to see both ideas or concepts in your search results. AND narrows the search.
   Example: “pressure ulcer” AND “hospitalization”
OR: Use OR between similar keywords, like synonyms, acronyms, and variations in spelling within the same idea or concept. OR broadens the search.
   Example: “pressure ulcer” OR “pressure injury”
NOT: Use NOT to exclude specific keywords from the search; use it with caution because you may end up missing something important. NOT makes broad exclusions.
   Example: “pressure injury” NOT “crush injury”

(In the original, each operator is also illustrated with a Venn diagram.)

Free Resources

Most databases require a paid subscription, but some are freely searchable online. Table 5.4 lists quality web resources available at no cost. Check the local medical or public library to see what is accessible. Many databases are available through multiple search platforms. For example, teams can search the MEDLINE database through PubMed, but it is also available through other vendors and their platforms, such as EBSCOhost, ProQuest, and Ovid.

Medical librarians, knowledgeable about available resources and how to use their functions, can assist in the search for evidence and provide invaluable personalized instruction. Never be afraid to ask for help! The only foolish question is the one unasked.
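The narrowing and broadening behavior summarized in Table 5.3 maps directly onto set operations; a toy illustration in Python (the article IDs are invented):

```python
# Invented result sets: IDs returned by two hypothetical single-term searches.
pressure_ulcer = {101, 102, 103, 104}
hospitalization = {103, 104, 105}

and_results = pressure_ulcer & hospitalization  # AND narrows: in both sets
or_results = pressure_ulcer | hospitalization   # OR broadens: in either set
not_results = pressure_ulcer - hospitalization  # NOT excludes the second set

# and_results == {103, 104}
# or_results == {101, 102, 103, 104, 105}
# not_results == {101, 102}
```

Note how NOT removes records 103 and 104 even though they also match "pressure ulcer"; this is exactly why the chapter advises using NOT with caution.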
Table 5.4 Free Online Resources

PubMed (Biomedical Research): https://pubmed.ncbi.nlm.nih.gov
PubMed Central (Full-Text Biomedical Resources): https://www.ncbi.nlm.nih.gov/pmc
Sigma Repository (Research, Dissertations, and Conference Abstracts): https://www.sigmarepository.org
The Cochrane Collaboration (Systematic Reviews and Controlled Trials): https://www.cochranelibrary.com
TRIP Medical Database (Turning Research Into Practice; Clinical Practice): https://www.tripdatabase.com
US Preventive Services Task Force (USPSTF; Clinician and Consumer Information): http://www.uspreventiveservicestaskforce.org
ClinicalTrials.gov (Clinical Trials): https://clinicaltrials.gov
NIH RePORTER: Research Portfolio Online Reporting Tool (Federally Funded Research Projects): http://report.nih.gov
Google Scholar (Multidisciplinary Resources): http://scholar.google.com

The Sigma Repository (formerly the Virginia Henderson Global Nursing e-Repository) is a resource offered through the Honor Society of Nursing, Sigma Theta Tau International. It gives nurses online access to easily utilized and reliable information. Primary investigator contact information is also available for requests of full-text versions of studies. The Sigma Repository also provides a list of tools, instruments, and measurements useful to nurses.

The TRIP Medical Database is a clinical search engine designed to allow health professionals to rapidly identify the highest-quality evidence for clinical practice.
It searches hundreds of evidence-based medicine and nursing websites that contain synopses, clinical answers, textbook information, clinical calculators, systematic reviews, and guidelines.

The US Preventive Services Task Force (USPSTF), created in 1984, is an independent, volunteer panel of national experts in prevention and evidence-based medicine. It works to improve health by making evidence-based recommendations on clinical preventive services such as screenings, counseling services, and preventive medications, drawing from preventive medicine and primary care including family medicine, pediatrics, behavioral health, obstetrics and gynecology, and nursing.

ClinicalTrials.gov is a resource providing patients, healthcare professionals, researchers, and the public with access to publicly and privately supported clinical studies for a variety of health conditions and diseases. The National Library of Medicine (NLM) and the National Institutes of Health (NIH) maintain this resource. The principal investigator (PI) of each clinical study updates and provides information on the studies included. Currently, ClinicalTrials.gov contains data from studies conducted in all 50 states and over 216 countries. Searchers can look for studies that are currently recruiting as well as completed clinical studies.
NIH Research Portfolio Online Reporting Tool (RePORTER) is a federal government database that lists biomedical research projects funded by the NIH as well as the Centers for Disease Control and Prevention (CDC), Agency for Healthcare Research and Quality (AHRQ), Health Resources and Services Administration (HRSA), Substance Abuse and Mental Health Services Administration (SAMHSA), and US Department of Veterans Affairs (VA). RePORTER allows extensive field searching, hit lists that can be sorted and downloaded in Microsoft Excel, NIH funding for each project (expenditures), and publications and patents that have acknowledged support from each project (results).

Google Scholar is not associated with a hospital or academic library. It is a search aggregator that returns open and subscription results, including grey literature, such as conference proceedings, white papers, unpublished trial data, government publications and reports, and dissertations and theses. Google Scholar allows a broad search across many disciplines, searching academic publishers, professional societies, online repositories, universities, and other websites for articles, theses, books, abstracts, and court opinions. Searches can include non-medical terms, and due to its multidisciplinary nature, content can be accessed related to non-nursing subject matter. Google Scholar ranks documents by weighting the full text, publisher, and author(s), as well as how recently and frequently they are cited in other scholarly literature.

Though Google Scholar can be simple to use because of its familiarity and the wide use of Google as a search engine, the EBP team must be cautious. Search algorithms change daily, but journals are not indexed, making it impossible to
replicate a search. As with a database, the EBP team must realize that searching using only Google Scholar will result in insufficient evidence (Gusenbauer & Haddaway, 2020). With these caveats in mind, the EBP team can reap some additional benefits when using Google Scholar:

■ A helpful feature is the user's ability to set up a Google Scholar profile, from which routine alerts for relevant search terms can be set up and sent via email notification. For example, if the EBP team is searching the literature using the search terms fall risk and acute care, a recommendation rule can be set up so that any time Google adds a new document, an email is sent directly to the EBP team, alerting them to this information.

■ Google Scholar can both generate citations and export them to citation managers to help keep track of references. Clicking the closed quotation icon (") under a citation record opens a pop-up with a citation for the relevant document in a variety of output styles. Teams can directly copy or export these citations into a citation manager.

■ Another benefit of Google Scholar is the "cited by" feature. By clicking this option, Google Scholar displays publications that have cited the selected piece of literature. This can be helpful when trying to identify recent literature or other articles that may have used similar methods.

Additional Search Techniques

It is important to note that not all literature is found through database searching, either due to indexing or because the results were presented as a conference paper or poster and therefore are not found in most databases.
Because of this, the team can gain valuable information by:

■ Hand searching the table of contents of subject-related, peer-reviewed journals and conference proceedings
■ Evaluating the reference lists of books and articles cited in the eligible articles
■ Searching for references citing relevant articles and evaluating the reference lists

Screening, Evaluating, Revising, and Storing Search Results

Whether EBP team members conduct a search independently or with the help of a medical librarian, it is the searchers' responsibility to evaluate the results for relevance based on the practice question as well as inclusion and exclusion criteria. Keep in mind that to answer the question thoroughly, a team's search strategies may need several revisions, and they should allow adequate time for these alterations. When revising a search, consider these questions:

■ When was the last search conducted? If the search is several years old, you need to consider changes that may have happened in the field that were missed by the previous search.
■ Have new terms been developed related to your search question? Terms often change. Even controlled vocabulary such as MeSH is updated annually. Make sure to search for new controlled vocabulary and new keywords.
■ Did the search include databases beyond the nursing literature? Are there databases outside of nursing that are relevant to your question? Does your question branch into psychology or physical therapy? Were those databases searched previously?
■ Are the limits used in the first search still appropriate? If an age range limit was used in the last search, is it still relevant? Were there restrictions on publication type or methodology that are no longer useful?

After creating a successful search strategy, teams or individuals should keep a record of the work. Often individuals research the same topic throughout their career, so saving search strategies assists in updating work without duplication of effort. Most databases have a feature that allows saving a search within the database; however, it is always a good idea to keep multiple copies of searches. Microsoft Word documents or emails are a great way to keep a record of work.

PubMed is an example of a database that allows users to save multiple searches. My NCBI, a companion piece to PubMed, permits users to create an account, save searches, set alerts, and customize preferences. Users can save search results by exporting them into a citation management software program such as EndNote, RefWorks, Zotero, Mendeley, or Papers. Though some citation management programs are free, others need to be purchased; some may be provided at no cost by an organization. The functions and capabilities of the various programs are similar.

Once the team downloads and stores search results, the next step is screening those results.
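Most teams will manage stored results in a citation manager or spreadsheet, as described above. For teams comfortable with a little scripting, exported results can also be merged and deduplicated programmatically before screening begins. The sketch below is purely illustrative, not part of the Johns Hopkins guidance: the records and the doi/title/year column names are invented, and real database exports carry many more fields. It merges two hypothetical CSV exports and drops duplicate records by DOI.

```python
import csv
import io

# Hypothetical exports from two databases, shown inline as CSV text.
# Real exports (e.g., from PubMed or CINAHL) would be read from files
# and contain many more columns; only doi/title/year are sketched here.
export_a = """doi,title,year
10.1000/xyz1,Fall risk screening in acute care,2019
10.1000/xyz2,Nurse-led education and readmission,2020
"""
export_b = """doi,title,year
10.1000/xyz2,Nurse-led education and readmission,2020
10.1000/xyz3,Remote video monitoring for falls,2021
"""

def dedupe(*csv_texts):
    """Merge several CSV exports and drop duplicate records,
    matching on DOI (falling back to a normalized title when
    the DOI field is empty)."""
    seen, unique = set(), []
    for text in csv_texts:
        for row in csv.DictReader(io.StringIO(text)):
            key = row["doi"].strip().lower() or row["title"].strip().lower()
            if key not in seen:
                seen.add(key)
                unique.append(row)
    return unique

records = dedupe(export_a, export_b)
print(len(records))  # 3 unique records remain after merging
```

Recording how many duplicates were removed at this step also feeds directly into the screening documentation discussed next.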
Systematically and rigorously screening literature search results lends credence to the eventual recommendations from the project by ensuring a comprehensive and unbiased picture of the state of the evidence on a given topic, as well as saving time and effort among team members (Lefebvre et al., 2019; Whittemore & Knafl, 2005).

In addition to the question of rigor, comprehensive literature reviews can be a large undertaking with hundreds, if not thousands, of results. Screening is a necessary step to narrow the results of a search to only the pertinent results. It is important to note that a very large number of articles may be a sign that the search strategy or the practice question needs to be revised. Employing a team-based literature screening protocol and tracking mechanism helps to establish clear documentation of the process and makes screening more efficient and reliable. Following a step-wise approach of reviewing titles and abstracts, then full-text reading, conservatively allows a team to screen 500–1,000 articles over an eight-hour period (Lefebvre et al., 2019). Quickly culling superfluous results helps the team to home in on information truly relevant to the identified problem. With many competing priorities, EBP teams should avoid spending time considering articles that do not answer the EBP question through thoughtful approaches to the process.

The following steps outline how teams can create a well-documented literature screening process.
These steps have been adapted from the Cochrane Handbook for Systematic Reviews of Interventions and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, which were created to improve reporting of systematic reviews (Lefebvre et al., 2019; Moher et al., 2009). While systematic reviews and EBP integrative reviews are different, they share the common goal of synthesizing the best evidence and are subject to similar risks.

The EBP team meets to:

1. Establish inclusion and exclusion criteria for the literature screening. Commonly used exclusion criteria include population, language, date, intervention, outcomes, or setting. The team engages in critical thinking as they evaluate each article's potential contribution to answering the EBP question.

2. Establish a system (or way) for the EBP team to track the process for literature screening. If the team plans to publish their results, they may consider using tools such as Microsoft Excel to format search results and to function as a shared document for coding inclusion/exclusion. Including information related to the inclusion/exclusion criteria, duplicates removed, number of articles excluded at each stage, and any additional investigations outside of the systematic database search (e.g., hand searching) can strengthen the reporting of an EBP project. For systematic literature reviews, tools such as Covidence, Rayyan, and Abstrackr are available.

3. Decide how to complete the screening process. The literature screening process can be divided among all members of the EBP team. Best practice is to have at least two people review and agree whether an article should be included or excluded (Lefebvre et al., 2019; Polanin et al., 2019).
4. Begin by screening only the titles and abstracts produced from the search. Performing a title screening and then returning to the remaining articles for an abstract screening can save time and distraction (Mateen et al., 2013). A key point to remember in this step is that an article is included until proven otherwise. Exclude only those articles that concretely meet one of the exclusion criteria or do not answer the EBP question.

5. While screening, take notes. As the team delves deeper into the literature, they will gain a greater understanding of the available knowledge on a topic and the relevant vocabulary related to the practice question. Communicate as needed with the team members to make further group decisions on what to include and exclude in the search. There is no right or wrong answer; it is important that all group members have a common understanding of the scope of the project and that any changes or updates are well documented (de Souza et al., 2010).

6. After identifying citations, obtain full text. If full text is not available, submit the citation information through an interlibrary loan request with a local library. Prices associated with interlibrary loan vary with each library and each request. Contact a librarian for pricing inquiries. Some sites, such as EndNote Click and Unpaywall, can provide free, legal access to full text.

7. Complete full-text screening for all articles that remain after screening the titles and abstracts. Assigned reviewers complete a full-text reading of the articles to continue to determine whether they answer the EBP question and meet all inclusion criteria. This is an objective assessment and does not take into account the quality of the evidence.
It is normal to continue to exclude articles throughout this stage.

Summary

This chapter illustrates how to use the PICO framework as a guide for literature searches. An essential component of evidence-based practice, the literature search is important to any research-and-publication activity because it enables the team to acquire a better understanding of the topic and an awareness of relevant literature. Information specialists, such as medical librarians, can help with complex search strategies and information retrieval.

Ideally, an iterative search process is used: examining literature databases, using appropriate search terms, studying the resulting articles, and, finally, refining the searches for optimal retrieval. The use of keywords, controlled vocabulary, Boolean operators, and filters plays an important role in finding the most relevant material for the practice problem. Database alerting services are effective in helping researchers keep up to date with a research topic. Exploring and selecting from the wide array of published information can be a time-consuming task, so plan carefully in order to carry out this work effectively.

References

American Library Association. (1989). Presidential committee on information literacy: Final report. http://www.ala.org/acrl/publications/whitepapers/presidential

de Souza, M. T., da Silva, M. D., & de Carvalho, R. (2010). Integrative review: What is it? How to do it? Einstein (São Paulo), 8(1), 102–106.
https://doi.org/10.1590/S1679-45082010RW1134

Fineout-Overholt, E., Berryman, D. R., Hofstetter, S., & Sollenberger, J. (2019). Finding relevant evidence to answer clinical questions. In B. Mazurek Melnyk & E. Fineout-Overholt (Eds.), Evidence-based practice in nursing & healthcare: A guide to best practice (pp. 36–63). Wolters Kluwer Health.

Gusenbauer, M., & Haddaway, N. R. (2020). Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Research Synthesis Methods, 11(2), 181–217. https://doi.org/10.1002/jrsm.1378

Johnson, J. H. (2015). Evidence-based practice. In M. J. Smith, R. Carpenter, & J. J. Fitzpatrick (Eds.), Encyclopedia of nursing education (1st ed., pp. 144–146). Springer Publishing Company, LLC.

Lefebvre, C., Glanville, J., Briscoe, S., Littlewood, A., Marshall, C., Metzendorf, M. I., Noel-Storr, A., Rader, T., Shokraneh, F., Thomas, J., & Wieland, L. S. (2019). Searching for and selecting studies. In Cochrane handbook for systematic reviews of interventions (pp. 67–107). https://training.cochrane.org/handbook/current/chapter-04#section-4-6-5

Mateen, F., Oh, J., Tergas, A., Bhayani, N., & Kamdar, B. (2013). Titles versus titles and abstracts for initial screening of articles for systematic reviews. Clinical Epidemiology, 5(1), 89–95. https://doi.org/10.2147/CLEP.S43118

McCulley, C., & Jones, M. (2014). Fostering RN-to-BSN students' confidence in searching online for scholarly information on evidence-based practice.
The Journal of Continuing Education in Nursing, 45(1), 22–27. http://dx.doi.org/10.3928/00220124-20131223-01

Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & the PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLOS Medicine, 6(7), e1000097. https://doi.org/10.1371/journal.pmed.1000097

Polanin, J. R., Pigott, T. D., Espelage, D. L., & Grotpeter, J. K. (2019). Best practice guidelines for abstract screening large-evidence systematic reviews and meta-analyses. Research Synthesis Methods, 10(3), 330–342. https://doi.org/10.1002/jrsm.1354

Whittemore, R., & Knafl, K. (2005). The integrative review: Updated methodology. Journal of Advanced Nursing, 52(5), 546–553. https://doi.org/10.1111/j.1365-2648.2005.03621.x
6 Evidence Appraisal: Research

Evidence-rating schemes consider scientific evidence (also referred to as research) to be the strongest form of evidence. The underlying assumption is that recommendations from higher levels of high-quality evidence will be more likely to represent best practices. While comparatively stronger than nonresearch evidence, the strength of research (scientific) evidence can vary between studies depending upon the methods used and the quality of reporting by the researchers. The EBP team begins its evidence search in the hope of finding the highest level of scientific evidence available on the topic of interest. This chapter provides:

■ An overview of the various types of research approaches, designs, and methods
■ Guidance on how to appraise the level and quality of research evidence to determine its overall strength
■ Tips and tools for reading and evaluating research evidence

Types of Scientific Research

The goal of research is to produce new knowledge that can be generalized to a wider population by systematically following the scientific method. Research approaches are the general frameworks researchers use to structure a study and collect and analyze data (Polit & Beck, 2017). These fall into three broad categories: quantitative, qualitative, and mixed methods. Researchers use these approaches across the spectrum of research designs (e.g., experimental, quasi-experimental, descriptive), which primarily dictate the research methods used to gather, analyze, interpret, and validate data during the study.
The chosen technique will depend on the research question as well as the investigators' background, worldviews (paradigms), and goals (Polit & Beck, 2017).

Quantitative Research

Most scientific disciplines predominantly use a quantitative research approach to examine relationships among variables. This approach aims to establish laws of behavior and phenomena that are generalizable across different settings and contexts. It uses objective and precise collection methods such as observation, surveys, interviews, documents, audiovisuals, or polls to measure data quantity or amount. Through numerical comparisons and statistical inferences, data analysis allows researchers to describe, predict, test hypotheses, classify features, and construct models and figures to explain what they observe.

Qualitative Research

Qualitative research approaches, rooted in sociology and anthropology, seek to explore the meaning individuals, groups, and cultures attribute to a social or human problem (Creswell & Creswell, 2018). Thus, the researcher studies people and groups in their natural setting and obtains data from the informants' perspective. Using a systematic subjective approach to describe life experiences, qualitative researchers are the primary data collection instrument. By analyzing data, they attempt to make sense of or interpret phenomena in terms of the meanings people bring to them. In contrast to quantitative research, qualitative studies do not seek to provide representative data but rather information saturation.
Mixed-Methods Research

A mixed-methods research approach intentionally incorporates or "mixes" both quantitative and qualitative designs and data in a single study (Creswell & Creswell, 2018). Researchers use mixed methods to understand contradictions between quantitative and qualitative findings; assess complex interventions; address complex social issues; explore diverse perspectives; uncover relationships; and, in multidisciplinary research, focus on a substantive field, such as childhood depression.

Qualitative and quantitative research designs are complementary. However, while a quantitative study can include qualitative data, such as asking an open-ended question in a survey, it is not automatically considered to be mixed-methods, because the design sought to address the research questions from a quantitative perspective (how many, how much, etc.). Likewise, a qualitative study may gather quantitative data, such as demographics, but only to provide further insights into the qualitative analysis. The research problem and question drive the decision to use a true mixed-methods approach and leverage the strengths of both quantitative and qualitative designs to provide a more in-depth understanding than either would if used independently. If using only quantitative or qualitative designs would provide sufficient data, then mixed methods are unnecessary.

Types of Research Designs

Research problems and questions guide the selection of a research approach (qualitative, quantitative, or mixed methods), and, within each approach, there are different types of inquiries, referred to as research designs (Creswell & Creswell, 2018). A research design provides specific direction for the methods used in the conduct of the actual study.
Additionally, studies can take the form of single research studies that create new data (primary research), analyses of existing data for an intervention or outcome of interest (secondary research), or summaries of multiple studies.

Single Research Studies

The evidence-based practice (EBP) team will typically review evidence from single research studies, or primary research. Primary research comprises data collected to answer one or more research questions or hypotheses. Reviewers may also find secondary analyses that use data from primary studies to ask different questions. Single research studies fall into three broad categories: true experimental, quasi-experimental, and nonexperimental (observational).

Table 6.1 outlines the quantitative research designs, aims, distinctive features, and types of study methods frequently used in social sciences.
Table 6.1 Research Design, Aim, Distinctive Features, and Types of Study Methods

True Experimental
  Aim: Establish existence of a cause-and-effect relationship between an intervention and an outcome
  Features:
  ■ Manipulation of a variable in the form of an intervention
  ■ Control group
  ■ Random assignment to the intervention or control group
  Types of Study Methods:
  ■ Randomized controlled trial
  ■ Posttest-only with randomization
  ■ Pre- and posttest with randomization
  ■ Solomon 4 group

Quasi-experimental
  Aim: Estimate the causal relationship between an intervention and an outcome without randomization
  Features:
  ■ An intervention
  ■ Nonrandom assignment to an intervention group; may also lack a control group
  Types of Study Methods:
  ■ Nonequivalent groups (not randomized): control (comparison) group posttest only; pretest–posttest
  ■ One group (not randomized): posttest only; pretest–posttest
  ■ Interrupted time series
Nonexperimental
  Aim: Measures one or more variables as they naturally occur without manipulation
  Features:
  ■ May or may not have an intervention
  ■ No random assignment to a group
  ■ No control group
  Types of Study Methods:
  ■ Descriptive
  ■ Correlational
  ■ Qualitative

Univariate
  Aim: Answers a research question about one variable or describes one characteristic or attribute that varies from observation to observation
  Features:
  ■ No attempt to relate variables to each other
  ■ Variables are observed as they naturally occur
  Types of Study Methods:
  ■ Exploratory
  ■ Survey
  ■ Interview

Source: Creswell & Creswell, 2018

True Experimental Designs (Level I Evidence)

True experimental studies use the traditional scientific method: independent and dependent variables, pretest and posttest, and experimental and control groups. One group (experimental) is exposed to an intervention; the other is not exposed to the intervention (the control group). This study design allows for the highest level of confidence in establishing causal relationships between two or more variables because the variables are observed under controlled conditions (Polit & Beck, 2017). True experiments are defined by the use of randomization. The most commonly recognized true experimental method is the randomized controlled trial, which aims to reduce certain sources of bias when testing the effectiveness of new treatments and drugs. However, other true experimental methods that require randomization are listed in Table 6.1. The
Solomon 4 group is frequently used in psychology and sometimes used in social sciences and medicine. It is used specifically to assess whether taking a pretest influences scores on a posttest.

A true experimental study has three distinctive criteria: randomization, manipulation, and control.

Randomization occurs when the researcher assigns subjects to a control or experimental group arbitrarily, similar to the roll of dice. This process ensures that each potential subject who meets inclusion criteria has the same probability of selection for the experiment. The goal is that people in the experimental group and the control group generally will be similar, except for the experimental intervention or treatment. This is important because subjects who take part in an experiment serve as representatives of the population, and as little bias as possible should influence who does and does not receive the intervention.

Manipulation occurs when the researcher implements an intervention with at least some of the subjects. In experimental research, some subjects (the experimental group) receive an intervention and other subjects do not (the control group). The experimental intervention is the independent variable, or the action the researcher will take (e.g., applying low-level heat therapy) to try to change the dependent variable (e.g., the experience of low back pain).

Control usually refers to the introduction of a control or comparison group, such as a group of subjects to which the experimental intervention is not applied. The goal is to compare the effect of no intervention on the dependent variable in the control group against the experimental intervention's effect on the dependent variable in the experimental group.
Control groups can be achieved through various approaches, including the use of placebos, varying doses of the intervention between groups, or providing alternative interventions (Polit & Beck, 2017).

Quasi-Experimental Designs (Level II Evidence)

Quasi-experimental studies are similar to experimental studies in that they try to show that an intervention causes a particular outcome. Quasi-experimental studies always include manipulation of the independent variable (intervention). They differ from true experimental studies because it is not always possible to randomize subjects, and they may or may not have a control group. For example, an investigator can assign the intervention (manipulation) to one of two groups (e.g., two medical units): one unit volunteers to pilot a remote video fall reminder system (intervention group) and is compared to the other unit, which continues delivering the standard of care (control group). Although the preexisting units were not randomly assigned, they can be used to study the effectiveness of the remote video reminder system.

In cases where a particular intervention is effective, withholding that intervention would be unethical. In the same vein, it may not be feasible to randomize patients or geographic locations, or it would not be practical to perform a study that requires more human, financial, or material resources than are available.
Examples of important and frequently used quasi-experimental designs that an EBP team may see during the course of their search include nonequivalent control (comparison) group, one-group posttest-only, one-group pretest–posttest, and interrupted time series designs. EBP team members should refer to a research text when they encounter any unfamiliar study designs.

Example: Experimental Randomized Controlled Trial
Rahmani et al. (2020) conducted a randomized controlled trial to investigate the impact of Johnson's Behavioral System Model on the health of heart failure patients. They randomized 150 people to a control group and an intervention group. The intervention group received care based on findings from a behavioral subsystem assessment tool, and the control group received care based on their worst subsystem scores over a two-week period. The researchers found that the intervention group showed significant improvement in six of the eight subsystems over the control group.

Nonexperimental Designs (Level III Evidence)

When reviewing evidence related to healthcare questions, EBP teams will often find studies of naturally occurring phenomena (groups, treatments, and individuals), situations, or descriptions of the relationship between two or more variables. These studies are nonexperimental because there is no interference by the researcher. This means that there is no manipulation of the independent variable, no random assignment of participants to a control or treatment group, or both.
Additionally, the focus of attention is the validity of measurements (e.g., physiologic values, survey tools) rather than the validity of effects (e.g., lung cancer is caused by smoking). Nonexperimental studies fall into three broad categories—descriptive, correlational, and univariate (Polit & Beck, 2017)—and can simultaneously be characterized from a time perspective. In retrospective studies, the outcome of the study has already occurred (or has not occurred, in the case of controls) in each subject or group before they are asked to enroll in the study. The investigator then collects data either from charts and records or by obtaining recall information from the subjects or groups. In contrast, in prospective studies the outcome has not occurred at the time the study begins, and the investigator follows up with subjects or groups over a specified period to determine the occurrence of outcomes. In cross-sectional studies, researchers collect data from many different individuals at a single point in time and observe the variables of interest without influencing them. Longitudinal studies look at changes in the same subjects over a long period.

Examples: Quasi-Experimental Studies
Awoke et al. (2019) conducted a quasi-experimental study to evaluate the impact of nurse-led heart failure patient education on knowledge, self-care behaviors, and all-cause 30-day hospital readmission. The study used a pretest and posttest design with a convenience sample in two cardiac units. An evidence-based education program was developed based on guidelines from the American College of Cardiology and the American Heart Association. Participants were invited to complete two validated scales assessing heart failure knowledge and self-care. The researchers found a statistically significant difference in knowledge and self-care behaviors. A significant improvement in 30-day readmission was not found.
Descriptive Studies

Descriptive studies accurately and systematically describe a population, situation, or phenomenon as it naturally occurs. They answer what, where, when, and how questions but do not answer questions about statistical relationships between variables. There is no manipulation of variables and no attempt to determine that a particular intervention or characteristic causes a specific occurrence. Answers to descriptive research questions are objectively measured using statistics, and analysis is generally limited to measures of frequency (counts, percentages, ratios, proportions), central tendency (mean, median, mode), dispersion or variation (range, variance, standard deviation), and position (percentile rank, quartile rank). A descriptive research question primarily quantifies a single variable but can also cover multiple variables within a single question. Common types of descriptive designs include descriptive comparative, descriptive correlational, predictive correlational, and epidemiologic descriptive studies (prevalence and incidence). Table 6.2 outlines the purpose and uses of quantitative descriptive design study types.

Table 6.2 Descriptive Study Type, Purpose, and Use

Comparative
Purpose: Determine similarities and differences, or compare and contrast variables, without manipulation.
Use: Account for differences and similarities across cases; judge whether a certain method, intervention, or approach is superior to another.
Descriptive correlational
Purpose: Describe two variables and the relationship (strength and magnitude) that occurs naturally between them.
Use: Find out if and how a change in one variable is related to a change in the other variable(s).

Incidence (epidemiologic descriptive)
Purpose: Determine the occurrence of new cases of a specific disease or condition in a population over a specified period of time.
Use: Understand the frequency of new cases of disease development.

Predictive correlational
Purpose: Predict the variance of one or more variables based on the variance of another variable(s).
Use: Examine the relationship between a predictor (independent variable) and an outcome/criterion variable.

Prevalence (epidemiologic descriptive)
Purpose: Determine the proportion of a population that has a particular condition at a specific point in time.
Use: Compare prevalence of disease in different populations; examine trends in disease severity over time.

Correlational Studies

Correlational studies measure a relationship between two variables without the researcher controlling either of them. These studies aim to find out whether there is:

■ Positive correlation: One variable changes in the same direction as the other variable.
■ Negative correlation: Variables change in opposite directions; one increases, and the other decreases.
■ Zero correlation: There is no relationship between the variables.

Table 6.3 outlines common types of correlational studies, such as case-control, cohort, and natural experiments.
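The three correlation directions above can be illustrated numerically with Pearson's r, which runs from +1 (perfect positive) through 0 (no relationship) to -1 (perfect negative). This is a hedged sketch only — the exposure and outcome values are fabricated for demonstration.

```python
# Pearson correlation coefficient computed from first principles.
import math

def pearson_r(x, y):
    """Return Pearson's r: +1 perfect positive, -1 perfect negative, near 0 none."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

exposure = [1, 2, 3, 4, 5]                    # fabricated predictor values
print(pearson_r(exposure, [2, 4, 6, 8, 10]))  # positive: close to +1
print(pearson_r(exposure, [10, 8, 6, 4, 2]))  # negative: close to -1
```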
Table 6.3 Correlational Study Type, Purpose, and Use

Case-control
Purpose: Examine possible relationships between exposure and disease occurrence by comparing the frequency of exposure in the group with the outcome (cases) to a group without it (controls). Can be either retrospective or prospective.
Use: Identifies factors that may contribute to a medical condition. Often used when the outcome is rare.

Cohort
Purpose: Examine whether the risk of disease differs between exposed and nonexposed patients. Can be either retrospective or prospective.
Use: Investigates the causes of disease to establish links between risk factors and health outcomes.

Natural experiments
Purpose: Study a naturally occurring situation and its effect on groups with different levels of exposure to a supposed causal factor.
Use: Beneficial when there has been a clearly defined exposure involving a well-defined group and an absence of exposure in a similar group.

Univariate Studies

Univariate studies, also referred to as single-variable research, use exploratory or survey methods and aim to describe the frequency of a behavior or an occurrence. Univariate descriptive studies summarize or describe one variable rather than examine a relationship between variables (Polit & Beck, 2017). Exploratory and survey designs are common in nursing and healthcare. When little knowledge about the phenomenon of interest exists, these designs offer the greatest degree of flexibility.
As new information is learned, the direction of the exploration may change. With exploratory designs, the investigator does not know enough about a phenomenon to identify variables of interest completely. Researchers observe variables as they happen; there is no researcher control. When investigators know enough about a particular phenomenon and can identify specific variables of interest, a descriptive survey design more fully describes the phenomenon. Questionnaire (survey) or interview techniques assess the variables of interest.

Qualitative Research Designs

Qualitative research designs seek to discover the whys and hows of a phenomenon of interest, captured in narrative rather than numerical form. Types of qualitative studies (sometimes referred to as traditions) include ethnography, grounded theory, phenomenology, narrative inquiry, case study, and basic qualitative descriptive. With the exception of basic qualitative descriptive, each study type adheres to a specific method for collecting and analyzing data; each methodology is based on the researcher's worldview, which consists of beliefs that guide decisions or behaviors. Table 6.4 details qualitative study types.

Table 6.4 Qualitative Study Type, Purpose, and Use

Ethnography
Purpose: Study of people in their own environment to understand cultural rules.
Use: Gain insights into how people interact with things in their natural environment.
Grounded theory
Purpose: Examine the basic social and psychological problems/concerns that characterize real-world phenomena.
Use: Used where very little is known about the topic, to generate data to develop an explanation of why a course of action evolved the way it did.

Phenomenology
Purpose: Explore experience as people live it rather than as they conceptualize it.
Use: Understand the lived experience of a person and its meaning.

Narrative inquiry
Purpose: Reveal the meanings of individuals' experiences, combined with the researcher's perspective, in a collaborative and narrative chronology.
Use: Understand the way people create meaning in their lives.

Case study
Purpose: Describe the characteristics of a specific subject (such as a person, group, event, or organization) and gather detailed data to identify the characteristics of a narrowly defined subject.
Use: Gain concrete, contextual, in-depth knowledge about an unusual or interesting case that challenges assumptions, adds complexity, or reveals something new about a specific real-world subject.

Basic qualitative (also referred to as generic or interpretive descriptive)
Purpose: Create knowledge through subjective analysis of participants in a naturalistic setting by incorporating the strengths of different qualitative designs without adhering to the philosophical assumptions inherent in those designs.
Use: The problem or phenomenon of interest is unsuitable for, or cannot be adapted to, the traditional qualitative designs.
Systematic Reviews: Summaries of Multiple Research Studies

Summaries of multiple research studies are one type of evidence synthesis that generates an exhaustive summary of current evidence relevant to a research question. Often referred to as systematic reviews, they use explicit methods to search the scientific evidence, summarize critically appraised and relevant primary research, and extract and analyze data from the studies included in the review. To minimize bias, a group of experts, rather than individuals, applies these standardized methods to the review process. A key requirement of systematic reviews is the transparency of methods, which ensures that rationale, assumptions, and processes are open to scrutiny and can be replicated or updated. A systematic review does not create new knowledge; rather, it provides a concise and relatively unbiased synthesis of the research evidence on a topic of interest (Aromataris & Munn, 2020). There are at least 14 types of systematic review study designs (Aromataris & Munn, 2020), with critical appraisal checklists specific to each study design type (Grant & Booth, 2009). Healthcare summaries of multiple studies most often use meta-analysis with quantitative data and meta-synthesis with qualitative data.

Systematic Review With Meta-Analysis

Meta-analyses are systematic reviews of quantitative research studies that statistically combine the results of multiple studies sharing a common intervention (independent variable) and outcomes (dependent variables) to create new summary statistics.
Meta-analysis offers the advantage of objectivity because the study reviewers' decisions are explicit and integrate data from all included studies. By combining results across several smaller studies, the researcher can increase the power, or the probability of detecting a true relationship between the intervention and its outcomes (Polit & Beck, 2017). For each of the primary studies, the researcher develops a common metric called effect size (ES), a measure of the strength of the relationship between two variables. This summary statistic combines and averages effect sizes across the included studies. Cohen's (1988) methodology for determining effect sizes defines the strength of correlation ratings as trivial (ES = 0.01–0.09), low to moderate (0.10–0.29), moderate to substantial (0.30–0.49), substantial to very strong (0.50–0.69), very strong (0.70–0.89), and almost perfect (0.90–0.99). Researchers display the results of a meta-analysis of the included individual studies in a forest plot graph. A forest plot shows the variation between the studies and an estimate of the overall result of all the studies together. This is usually accompanied by a table listing references (author and date) of the studies included in the meta-analysis and the statistical results (Centre for Evidence-Based Intervention, n.d.).

Systematic Reviews Versus Narrative Reviews
Systematic reviews differ from traditional narrative literature reviews. Narrative reviews often contain references to research studies but do not critically appraise, evaluate, and summarize the relative merits of the included studies. True systematic reviews address both the strengths and the limitations of each study included in the review. Readers should not differentiate between a systematic review and a narrative literature review based solely on the article's title. At times, the title will state that the article presents a literature review when it is in fact a systematic review, or state that the article is a systematic review when it is a literature review. EBP teams generally consider themselves lucky when they uncover well-executed systematic reviews that include summative research techniques that apply to the practice question of interest.

Example: Meta-Analysis
Meserve and colleagues (2021) conducted a meta-analysis of randomized controlled trials, cohort studies, and case series to evaluate the risks and outcomes of adverse events in patients with preexisting inflammatory bowel diseases treated with immune checkpoint inhibitors. They identified 12 studies reporting the impact of immune checkpoint inhibitors in 193 patients with inflammatory bowel disease and calculated pooled rates (with 95% confidence intervals [CI]) and examined risk factors associated with adverse outcomes through qualitative synthesis of individual studies. Approximately 40% of patients with preexisting inflammatory bowel diseases experienced relapse with immune checkpoint inhibitors, with most relapsing patients requiring corticosteroids and one-third requiring biologics.

Systematic Review With Meta-Synthesis
Meta-synthesis is the qualitative counterpart to meta-analysis. It involves interpreting data from multiple sources to produce a high-level narrative rather than aggregating data or producing a summary statistic.
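Before meta-synthesis is taken up in detail, the quantitative pooling used in meta-analysis can be made concrete. The sketch below shows one common approach — a fixed-effect, inverse-variance weighted average of study effect sizes — together with the Cohen (1988) strength bands quoted earlier. The effect sizes and standard errors are invented for illustration, and real meta-analyses involve additional steps (heterogeneity testing, random-effects models) not shown here.

```python
# Hedged sketch: inverse-variance (fixed-effect) pooling of effect sizes.
def pooled_effect(effects, ses):
    """Weight each study's effect size by 1/SE^2 and return the weighted mean."""
    weights = [1 / se ** 2 for se in ses]  # more precise studies get more weight
    return sum(w * es for w, es in zip(weights, effects)) / sum(weights)

def cohen_strength(es):
    """Classify an effect size using Cohen's (1988) correlation strength bands."""
    es = abs(es)
    bands = [(0.10, "trivial"), (0.30, "low to moderate"),
             (0.50, "moderate to substantial"),
             (0.70, "substantial to very strong"),
             (0.90, "very strong")]
    for upper, label in bands:
        if es < upper:
            return label
    return "almost perfect"

# Three invented studies: effect sizes with their standard errors.
es = pooled_effect([0.42, 0.35, 0.50], [0.10, 0.15, 0.20])
print(round(es, 2), cohen_strength(es))  # 0.41 moderate to substantial
```

Note how the smallest standard error (the most precise study) dominates the pooled estimate — the same logic that gives larger studies bigger squares in a forest plot.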
Meta-synthesis supports developing a broader interpretation than can be gained from a single primary qualitative study by combining the results from several qualitative studies to arrive at a deeper understanding of the phenomenon under review (Polit & Beck, 2017).

Example: Meta-Synthesis
Danielis et al. (2020) conducted a meta-synthesis and meta-summary to understand the physical and emotional experiences of adult intensive care unit (ICU) patients who receive mechanical ventilation. They searched four electronic databases and used the Critical Appraisal Skills Programme checklist to evaluate the methodological quality of articles. Nine studies met the criteria. The researchers identified twenty-four codes across eleven categories, indicating a need for improvements in clinical care, education, and policy to address this population's feelings of fear, inability to communicate, and "feeling supervised."

Sources of Systematic Reviews

The Institute of Medicine (2011) appointed an expert committee to establish methodological standards for developing and reporting all types of systematic reviews. The Agency for Healthcare Research and Quality (AHRQ), the lead federal agency charged with improving the safety and quality of America's healthcare system, awards five-year contracts to North American institutions to serve as Evidence-based Practice Centers (EPCs). EPCs review scientific literature on clinical, behavioral, organizational, and financial topics to produce evidence reports and technology assessments (AHRQ, 2016).
Additionally, EPCs conduct research on systematic review methodology. Research designs for conducting summaries of multiple studies include systematic reviews, meta-analysis (quantitative data), meta-synthesis (qualitative data), and mixed methods (both quantitative and qualitative data). See Table 6.5 for national and international organizations that generate summaries of multiple studies.

Table 6.5 Organizations That Generate Summaries of Multiple Studies

Agency for Healthcare Research and Quality (AHRQ)
The AHRQ Effective Health Care Program, including the AHRQ Methods Guide for Effectiveness and Comparative Effectiveness Reviews, has several tools and resources for consumers, clinicians, policymakers, and others to make informed healthcare decisions.

The Campbell Collaboration
The Campbell Collaboration is an international research network that produces systematic reviews of the effects of social interventions.

Centre for Reviews and Dissemination (CRD)
The Centre for Reviews and Dissemination provides research-based information about the effects of health and social care interventions and provides guidance on the undertaking of systematic reviews.

Cochrane Collaboration
The Cochrane Collaboration is an international organization that helps prepare and maintain the results of systematic reviews of healthcare interventions. Systematic reviews are disseminated through the online Cochrane Library, which is accessible through this guide and under databases on the NIH Library website.
JBI
JBI (formerly the Joanna Briggs Institute) is an international not-for-profit research and development centre within the Faculty of Health and Medical Sciences at the University of Adelaide, South Australia, that produces systematic reviews. JBI also provides comprehensive systematic review training.

Mixed-Methods Studies

As with quantitative and qualitative approaches, there are different designs within mixed methods (see Table 6.6). The most common mixed-methods designs are convergent parallel, explanatory sequential, exploratory sequential, and multiphasic (Creswell & Plano Clark, 2018).

Table 6.6 Mixed-Methods Design, Procedure, and Use

Convergent parallel
Procedure: Concurrently conducts the quantitative and qualitative elements in the same phase of the research process, weighs the methods equally, analyzes the two components independently, and interprets the results together.
Use: Validate quantitative scales and form a more complete understanding of a research topic.

Explanatory sequential
Procedure: Sequential design with quantitative data collected in the initial phase, followed by qualitative data.
Use: Used when quantitative findings are explained and interpreted with the assistance of qualitative data.
Exploratory sequential
Procedure: Sequential design with qualitative data collected in the initial phase, followed by quantitative data.
Use: Used when qualitative results need to be tested or generalized, or for theory development or instrument development.

Multiphasic
Procedure: Combines the concurrent or sequential collection of quantitative and qualitative data sets over multiple phases of a study.
Use: Useful in comprehensive program evaluations, addressing a set of incremental research questions focused on a central objective.

Determining the Level of Research Evidence

The JHEBP model encompasses quantitative and qualitative studies, primary studies, and summaries of multiple studies within three levels of evidence. The level of research evidence (true experimental, quasi-experimental, nonexperimental) is an objective determination based on whether the study design meets the scientific evidence design requirements—manipulation of a variable in the form of an intervention, a control group, and random assignment to the intervention or control group. Table 6.7 identifies the types of research studies at each of the three levels of scientific evidence. The Research Evidence Appraisal Tool (Appendix E) provides specific criteria and decision points for determining the level of research evidence.

Table 6.7 Rating the Level of Research Evidence

Level I: A true experimental study, randomized controlled trial (RCT), or systematic review of RCTs, with or without meta-analysis.
Level II: A quasi-experimental study, or a systematic review of a combination of RCTs and quasi-experimental studies, or of quasi-experimental studies only, with or without meta-analysis.

Level III: A quantitative nonexperimental study; a systematic review of a combination of RCTs, quasi-experimental, and nonexperimental studies, or of nonexperimental studies only; or a qualitative study or systematic review of qualitative studies, with or without meta-synthesis.

Appraising the Quality of Research Evidence

After the EBP team has determined the level of research evidence, the team evaluates the quality of the evidence against the corresponding expectations of the chosen study design. The individual elements to be evaluated for each piece of evidence depend on the type of evidence but can include the quality (validity and reliability) of the researchers' measurements, statistical findings, and quality of reporting.

Quality of Measurement

Findings of research studies are only as good as the tools used to gather the data. Understanding and evaluating the psychometric properties of a given instrument, such as validity and reliability, allows for an in-depth understanding of the quality of the measurement.

Validity

Validity refers to the credibility of the research—the extent to which the research measures what it claims to measure. The validity of research is important because if the study does not measure what it intended, the results will not effectively answer the aim of the research. There are several ways to ensure validity, including expert review, Delphi studies, comparison with established tools, factor analysis, item response theory, and correlation tests (expressed as a correlation coefficient). There are two aspects of validity to measure: internal and external.
Internal validity is the degree to which observed changes in the dependent variable are due to the experimental treatment or intervention rather than other possible causes. An EBP team should question whether there are competing explanations for the observed results. Measures of internal validity include content validity (the extent to which a multi-item tool reflects the full extent of the construct being measured), construct validity (how well an instrument truly measures the concept of interest), and cross-cultural validity (how well a translated or culturally adapted tool performs relative to the original instrument) (Polit & Beck, 2017).

External validity refers to the likelihood that conclusions about research findings are generalizable to other settings or samples. Errors of measurement that affect validity can be systematic or constant. External validity is a significant concern in EBP when translating research into the real world or from one population/setting to another. An EBP team should question the extent to which study conclusions may reasonably hold true for their particular patient population and setting. Do the investigators state the participation rates of subjects and settings? Do they explain the intended target audience for the intervention or treatment? How representative is the sample of the population of interest? Ensuring the representativeness of study participants and replicating the study in multiple sites that differ in dimensions such as size, setting, and staff skill set improve external validity.
Experimental studies are high in internal validity because they are structured and control for extraneous variables. However, because of this, the generalizability of the results (external validity) may be limited. In contrast, nonexperimental and observational studies may be high in generalizability because they are conducted in real-world settings, but they are low in internal validity because of the inability to control variables that may affect the results.

Bias plays a large role in the potential validity of research findings. In the context of research, bias can present as preferences for, or prejudices against, particular groups or concepts. Bias occurs in all research, at any stage, and is difficult to eliminate. Table 6.8 outlines the types of bias.

Table 6.8 Types of Research Bias, Descriptions, and Mitigation Techniques

Investigator bias
Description: The researcher unknowingly influences study participants' responses. Participants may pick up on subtle details in survey questions or in their interaction with a study team member and conclude that they should respond a certain way.
Mitigation: Standardize all interactions with participants through interview scripts, and blind the collection or analysis of data.

Hawthorne effect
Description: Changes in participants' behavior because they are aware that others are observing them.
Mitigation: Evaluate the value of direct observation over other data collection methods.
Attrition bias
Description: Loss of participants during a study and the effect on representativeness within the sample. This can affect results, as the participants who remain in a study may collectively possess different characteristics than those who drop out.
Mitigation: Limit burden on participants while maximizing opportunities for engagement, communicating effectively and efficiently.

Selection bias
Description: Nonrandom selection of samples. This can include allowing participants to self-select treatment options or assigning participants based upon specific demographics.
Mitigation: When possible, use a random sample. If not possible, apply rigorous inclusion and exclusion criteria to ensure recruitment occurs within the appropriate population while avoiding confounding factors. Use a large sample size.

Reliability

Reliability refers to the consistency of a set of measurements or an instrument used to measure a construct. For example, a patient scale is off by 5 pounds. When weighing the patient three times, the scale reads 137 every time (reliability). However, the weight is not the patient's true weight because the scale is not recording correctly (validity). Reliability refers, in essence, to the repeatability of a measurement. Errors of measurement that affect reliability are random. For example, variation in measurements may exist when nurses use patient care equipment such as blood pressure cuffs or glucometers. Table 6.9 displays three methods used to measure reliability: internal consistency reliability, test-retest reliability, and interrater reliability.
Evaluating Statistical Findings

Most research evidence will include reports of descriptive and analytic statistics of the study findings. The EBP team must understand the general concepts of common data analysis techniques to evaluate the meaning of study findings.

Measures of Central Tendency

Measures of central tendency (mean, median, and mode) are summary statistics that describe a set of data by identifying the central position within that set of data. The most well-known measure of central tendency is the mean (or average), which is used with both discrete data (based on counts) and continuous data (an infinite number of values divided along a specified continuum) (Polit & Beck, 2017). Although a good measure of central tendency in normal distributions, the mean is misleading in skewed (asymmetric) distributions and with extreme scores. The median, the number that lies at the midpoint of a distribution of values, is less sensitive to extreme scores and is therefore of greater use in skewed distributions. The mode is the most frequently occurring value and is the only measure of central tendency used with categorical data (data divided into groups).

Standard deviation is the measure of the scattering of a set of data from its mean. The more spread out a data distribution is, the greater its standard deviation. Standard deviation cannot be negative. A standard deviation close to 0 indicates that the data points tend to be close to the mean.
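For readers who want to see these measures in action, they can all be computed with Python's standard library. The sketch below uses invented length-of-stay data to show why the median is preferred over the mean for skewed distributions:

```python
from statistics import mean, median, mode, stdev

# Hypothetical data: lengths of stay (days) for a small sample of patients.
stays = [2, 3, 3, 4, 5]

print(mean(stays))    # 3.4 — the arithmetic average
print(median(stays))  # 3   — the midpoint value
print(mode(stays))    # 3   — the most frequently occurring value
print(stdev(stays))   # sample standard deviation, about 1.14 (spread around the mean)

# One extreme score skews the distribution: the mean shifts markedly,
# while the median barely moves.
skewed = stays + [30]
print(mean(skewed))   # about 7.83 — pulled toward the outlier
print(median(skewed)) # 3.5 — still near the center of the bulk of the data
```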
Table 6.9 Reliability Definitions and Statistical Techniques

Internal consistency
Definition: Whether a set of items in an instrument or subscale that propose to measure the same construct produce similar scores.
Statistical techniques: Cronbach's alpha (α) is a coefficient of reliability (or consistency). Cronbach alpha values of 0.7 or higher indicate acceptable internal consistency.

Test-retest reliability
Definition: Degree to which scores are consistent over time. Indicates score variation that occurs from test session to test session as a result of errors of measurement.
Statistical techniques: Pearson's r correlation coefficient (expresses the strength of a relationship between variables, ranging from –1.00, a perfect negative correlation, to +1.00, a perfect positive relationship). Scatter plot data.

Interrater reliability
Definition: Extent to which two or more raters, observers, coders, or examiners agree.
Statistical techniques: Techniques depend on what is actually being measured:
■ Correlational coefficients are used to measure consistency between raters
■ Percent agreement measures agreement between raters
And the type of data:
■ Nominal data: also called categorical data; variables with no numeric order (e.g., gender, employment status)
■ Ordinal data: categorical data where order is important (e.g., Likert scale measuring level of happiness)
■ Interval data: numeric scales with a specified order and exact differences between the values (e.g., blood pressure reading)
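Two of the statistics in Table 6.9 are simple enough to compute from first principles. The sketch below (plain Python, with made-up Likert ratings for illustration) derives Cronbach's alpha for internal consistency and Pearson's r for test-retest reliability; the 0.7 acceptability threshold is the one cited in the table:

```python
from math import sqrt
from statistics import mean, variance

def cronbach_alpha(items):
    """Cronbach's alpha: internal consistency of k items measuring one construct.
    `items` is a list of k columns, each holding the same n respondents' scores."""
    k = len(items)
    n = len(items[0])
    # Each respondent's total score across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(variance(col) for col in items) / variance(totals))

def pearson_r(x, y):
    """Pearson's r: strength of the linear relationship between two score sets,
    ranging from -1.00 to +1.00."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# Hypothetical data: five respondents answer three Likert items of one subscale...
item1 = [4, 5, 3, 4, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 5, 3, 4, 1]
print(round(cronbach_alpha([item1, item2, item3]), 2))  # 0.93 — above 0.7, acceptable

# ...and the same instrument administered twice (test-retest).
test1 = [4, 5, 3, 4, 2]
test2 = [4, 5, 3, 5, 2]
print(round(pearson_r(test1, test2), 2))  # 0.94 — scores are stable over time
```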
Measures of Statistical Significance

Statistical significance indicates whether findings reflect an actual association or difference between the variables/groups or are due to chance alone. The classic measure of statistical significance, the p-value, is a probability ranging from 0 to 1. The smaller the p-value (the closer it is to 0), the more likely the result is statistically significant. Factors that affect the p-value include sample size and the magnitude of the difference between groups (effect size) (Thiese et al., 2016). For example, if the sample size is large enough and there is a true difference between the groups, the results are likely to show a significant p-value even if the effect size is small or clinically insignificant. In healthcare literature, the threshold for statistical significance is generally set at p < 0.05.

Though p-values indicate statistical significance (i.e., that the results are not due to chance), healthcare research results increasingly report effect sizes and confidence intervals to more fully interpret results and guide decisions for translation. Effect sizes quantify the amount of difference between two groups. A positive effect size indicates a positive relationship: as one variable increases, the second variable increases. A negative effect size signifies a negative relationship, where as one variable increases or decreases, the second variable moves in the opposite direction. Confidence intervals (CI) are a measure of precision and are expressed as a range of values (upper limit and lower limit) within which a given measure actually lies, based on a predetermined probability. The standard 95% CI means an investigator can be 95% confident that the actual value in a given population falls within the upper and lower limits of the range of values.
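To make these ideas concrete, the sketch below (standard-library Python, with invented outcome scores) computes Cohen's d, a common effect-size measure, and an approximate 95% CI for the difference between two group means. The 1.96 normal critical value is a simplification; a t critical value would give a slightly wider interval for samples this small:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical outcome scores for an intervention group and a control group.
treat = [72, 75, 78, 80, 74, 77, 79, 76]
ctrl  = [68, 70, 73, 71, 69, 72, 74, 70]

m1, m2 = mean(treat), mean(ctrl)
s1, s2 = stdev(treat), stdev(ctrl)
n1, n2 = len(treat), len(ctrl)

# Effect size (Cohen's d): difference between group means in units of the
# pooled standard deviation. Positive d means the treatment group scored higher.
pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (m1 - m2) / pooled_sd

# Approximate 95% confidence interval for the mean difference, using the
# normal critical value 1.96.
diff = m1 - m2
se = sqrt(s1**2 / n1 + s2**2 / n2)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

# If the 95% CI for the difference excludes 0, the difference is
# statistically significant at roughly p < 0.05.
print(round(d, 2), round(ci_low, 2), round(ci_high, 2))  # 2.32 3.18 7.82
```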
Quality of Reporting

Regardless of the quality of the conduct of a research investigation, the implications of that study cannot be adequately determined if the researchers do not provide a complete and detailed report. The Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network (https://www.equator-network.org) is a repository of reporting guidelines organized by study type. These guidelines provide a road map for the required steps to conduct and report out a robust study. While ideally researchers use standard reporting guidelines, the degree to which journals demand adherence to these standards varies. Regardless of the type of study, classic elements of published research include title, abstract, introduction, methods, results, discussion, and conclusion (Lunsford & Lunsford, 1996).

Title

Ideally, the title should be informative and help the reader understand the type of study being reported. A well-chosen title states what the author did, to whom it was done, and how it was done. Consider the title "Improving transitions in care for children with complex and medically fragile needs: a mixed-methods study" (Curran et al., 2020). The reader is immediately apprised of what was done (improve transitions in care), to whom it was done (children with complex and medically fragile needs), and how it was done (a mixed-methods study).

Abstract

The abstract is often located after the title and author section and graphically set apart by the use of a box, shading, or italics.
The abstract is a brief description of the problem, methods, and findings of the study (Polit & Beck, 2017).

Introduction

The introduction contains the background and a problem statement that tells why the investigators have chosen to conduct the study. The best way to present the background is by reporting on current literature, and the author should identify the knowledge gap between what is known and what the study seeks to find out (or answer). A clear, direct statement of purpose and a statement of expected results or hypotheses should be included.

Methods

This section describes how a study is conducted (study procedures) in sufficient detail so that readers can replicate the study, including the study design, population with inclusion and exclusion criteria, recruitment, consent, a description of the intervention, and how data were collected and analyzed. If instrumentation was used, the methods section should include the validity and reliability of the tools. Authors should also include an acknowledgment of ethical review for research studies involving human subjects. The methods should read like a manual of the study design.

Results

Study results list the findings of the data analysis and should not contain commentary. Give particular attention to figures and tables, which are the heart of most papers. Look to see whether results report statistical versus clinical significance, and look up unfamiliar terminology, symbols, or logic.

Discussion

The discussion should align with the introduction and results and state the implications of the findings.
This section explores the research findings and the meaning given to the results, including how they compare to similar studies. Authors should also identify the study's main weaknesses or limitations and the actions taken to minimize them.

Conclusion

The conclusion should contain a brief restatement of the experimental results and the implications of the study (Hall, 2012). If the conclusion does not have a separate header, it usually falls at the end of the discussion section.

The Overall Report

The parts of the research article should be highly interconnected (but not overlap). The researcher needs to ensure that any hypotheses flow directly from the review of literature and that results support the arguments or interpretations presented in the discussion and conclusion sections.

Determining the Quality of Evidence

Rating the quality of research evidence includes consideration of factors such as sample size (the power of the study to detect true differences), the extent to which the findings can be generalized from a sample to an entire population, and validity (whether the findings truly represent the phenomenon being measured). In contrast to the objective approach to determining the level, grading the quality of evidence is subjective and requires critical thinking by the EBP team to make a determination (see Table 6.10).
Table 6.10 Quality Rating for Research Evidence

A: High — Consistent, generalizable results; sufficient sample size for the study design; adequate control; definitive conclusions; recommendations consistent with the study's findings and include thorough reference to scientific evidence.

B: Good — Reasonably consistent results; sufficient sample size for the study design; some control; fairly definitive conclusions; recommendations reasonably consistent with the study's findings and include some reference to scientific evidence.

C: Low — Little evidence with inconsistent results; insufficient sample size for the study design; conclusions cannot be drawn.

Experimental Studies (Level I Evidence)

True experiments have a high degree of internal validity because manipulation and random assignment enable researchers to rule out most alternative explanations of results (Polit & Beck, 2017). This internal validity, however, decreases the generalizability of the results (external validity). To uncover potential threats to external validity, the EBP team may pose questions such as, "How confident are we that the study findings can transfer from the sample to the entire population? Are the study conditions as close as possible to real-world situations? Did subjects have inherent differences even before manipulation of the independent variable (selection bias)? Are participants responding in a certain way because they know the researcher is observing them (the Hawthorne effect)?
Are there researcher behaviors or characteristics that may influence the subjects' responses (investigator bias)? In multi-institutional studies, are there variations in how study coordinators at various sites managed the trial?"

Subject mortality and different dropout rates between experimental and control groups may affect the adequacy of the sample size. Additional items the EBP team may want to assess related to subjects' reasons for dropping out include whether the experimental treatment was painful or time-consuming and whether participants remaining in the study differ from those who dropped out. It is important to assess the nature of possible biases that may affect randomization. Assess for selection biases by comparing groups on pretest data (Polit & Beck, 2017). If there are no pretest measures, compare groups on demographic and disease variables such as age, health status, and ethnicity. If there are multiple data collection points, it is important to assess attrition biases by comparing those who did or did not complete the intervention. EBP teams should carefully analyze how the researchers address possible sources of bias.

Quasi-Experimental Studies (Level II Evidence)

As with true experimental studies, threats to internal validity for quasi-experimental studies include maturation, testing, and instrumentation, with the additional threats of history and selection (Polit & Beck, 2017). The occurrence of external events during the study (the threat of history) can affect a subject's response to the investigational intervention or treatment. Additionally, with nonrandomized groups, preexisting differences between the groups can affect the outcome. Questions the EBP team may pose to uncover potential threats to internal validity include, "Did some event occur during the study that may have influenced the results of the study?
Are there processes occurring within subjects over the course of the study because of the passage of time (maturation) rather than from the experimental intervention? Could the pretest have influenced the subjects' performance on the posttest? Were the measurement instruments and procedures the same for both points of data collection?"

In terms of external validity, threats associated with sampling design, such as patient selection and the characteristics of nonrandomized patients, affect the general findings. External validity improves if the researcher uses random selection of subjects, even if random assignment to groups is not possible.

Nonexperimental and Qualitative Studies (Level III Evidence)

The evidence gained from well-designed nonexperimental and qualitative studies is the lowest level in the research hierarchy (Level III). When looking for potential threats to external validity in quantitative nonexperimental studies, the EBP team can pose the questions described under experimental and quasi-experimental studies. In addition, the team may ask further questions such as, "Did the researcher attempt to control for extraneous variables with the use of careful subject-selection criteria? Did the researcher attempt to minimize the potential for socially acceptable responses by the subject? Did the study rely on documentation as the source of data?
In methodological studies (developing, testing, and evaluating research instruments and methods), were the test subjects selected from the population for which the test will be used? Was the survey response rate high enough to generalize findings to the target population? For historical research studies, are the data authentic and genuine?"

Qualitative studies offer many challenges with respect to the question of validity. There are several suggested ways to determine validity, or rigor, in qualitative research. Four common approaches to establish rigor (Saumure & Given, 2012) are:

■ Transparency: How clearly the research process has been explained
■ Credibility: The extent to which data are representative
■ Dependability: Whether other researchers would draw the same or similar conclusions when looking at the data
■ Reflexivity: How the researcher has reported their own involvement in the research and how they may have influenced the study results

Issues of rigor in qualitative research are complex, so the EBP team should appraise how well the researchers discuss how they determined validity for the particular study.

Systematic Reviews (Level I or II Evidence)

Teams should evaluate systematic reviews for the rigor and transparency they display in their search strategies, appraisal methods, and results. Systematic reviews should follow well-established reporting guidelines (Moher et al., 2009), in most cases the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).
This includes a reproducible search strategy, a flow diagram of the literature screening, clear data extraction methodology and reporting, and the methods used to evaluate the strength of the literature. Authors should ensure all conclusions are based on a critical evaluation of the results.

Systematic Review With Meta-Analysis

The strength (level and quality) of the evidence on which recommendations are made within a meta-analytic study depends on the design and quality of the studies included in the meta-analysis as well as the design of the meta-analysis itself. Factors to consider include the sampling criteria of the primary studies included in the analysis, the quality of the primary studies, and variation in outcomes between studies.

To determine the level, the EBP team looks at the types of research designs included in the meta-analysis. Meta-analyses containing only randomized controlled trials are Level I evidence. Some meta-analyses include data from quasi-experimental or nonexperimental studies; the level of evidence is then commensurate with the lowest level of research design included (e.g., if the meta-analysis included experimental and quasi-experimental studies, the meta-analysis would be Level II).

To determine the quality of the article, the team should first look at the strength of the individual studies included. For an EBP team to evaluate evidence obtained from a meta-analysis, the meta-analysis report must be detailed enough for the reader to understand the studies included.
Second, the team should assess the quality of the meta-analysis itself. The discussion section should include an overall summary of the findings, the magnitude of the effect, the number of studies, and the combined sample size. The discussion should present the overall quality of the evidence and the consistency of findings (Polit & Beck, 2017). The discussion should also include a recommendation for future research to improve the evidence base.

Systematic Review With Meta-Synthesis (Level III Evidence)

Evaluating and synthesizing qualitative research presents many challenges. It is not surprising that EBP teams may feel at a loss in assessing the quality of a meta-synthesis. Approaching these reviews from a broad perspective enables the team to look for quality indicators that both quantitative and qualitative summative research techniques have in common.

The following should be noted in meta-synthesis reports: explicit search strategies, inclusion and exclusion criteria, methods used to determine study quality, methodological details for all included studies, and the conduct of the meta-synthesis itself. Similar to other summative approaches, a meta-synthesis "should be undertaken by a team of experts since the application of multiple perspectives to the processes of study appraisal, coding, charting, mapping, and interpretation may result in additional insights, and thus in a more complete interpretation of the subject of the review" (Jones, 2004, p. 277).

EBP teams need to keep in mind that judgments related to study strengths and weaknesses as well as to the suitability of recommendations for the target population are both context-specific and dependent on the question asked. Some conditions or circumstances, such as clinical setting or time of day, are relevant to determining a particular recommended intervention's applicability.
A Practical Tool for Appraising Research Evidence

The Research Evidence Appraisal Tool (see Appendix E) gauges the level and quality of research evidence. The tool contains questions to guide the team in determining the level and the quality of evidence of the primary studies included in the review. Strength (level and quality) is higher with evidence from at least one well-designed (quality) randomized controlled trial (RCT) (Level I) than from at least one well-designed quasi-experimental (Level II) or nonexperimental and qualitative (Level III) study. After determining the level, the tool contains additional questions specific to the study methods and execution to determine the quality of the research.

Recommendations for Interprofessional Leaders

Professional standards have long held that clinicians need to integrate the best available evidence, including research findings, into practice and practice decisions. This is the primary way to use new knowledge gained from research. Research articles can be intimidating to novice and expert nurses alike. Leaders can best support EBP by providing clinicians with the resources to appraise research evidence. It is highly recommended that they make available research texts, mentors, or experts to assist teams to become competent consumers of research. Only through continuous learning can clinicians gain the confidence needed to incorporate evidence gleaned from research into individual patients' day-to-day care.
Summary

This chapter arms EBP teams with practical information to guide the appraisal of research evidence, a task that is often difficult for nonresearchers. It presents an overview of the various types of research evidence, including attention to individual research studies and summaries of multiple research studies. Strategies and tips for reading research reports guide team members on how to appraise the strength (level and quality) of research evidence.

References

Agency for Healthcare Research and Quality. (2016). EPC evidence-based reports. Content last reviewed March 2021. http://www.ahrq.gov/research/findings/evidence-based-reports/index.html

Aromataris, E., & Munn, Z. (2020). JBI systematic reviews. In E. Aromataris & Z. Munn (Eds.), JBI manual for evidence synthesis (Chapter 1). JBI. https://synthesismanual.jbi.global

Awoke, M. S., Baptiste, D. L., Davidson, P., Roberts, A., & Dennison-Himmelfarb, C. (2019). A quasi-experimental study examining a nurse-led education program to improve knowledge, self-care, and reduce readmission for individuals with heart failure. Contemporary Nurse, 55(1), 15–26. https://doi.org/10.1080/10376178.2019.1568198

Centre for Evidence-Based Intervention. (n.d.). https://www.spi.ox.ac.uk/what-is-good-evidence#/

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Academic Press.

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE Publishing.

Creswell, J.
W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE Publishing.

Curran, J. A., Breneol, S., & Vine, J. (2020). Improving transitions in care for children with complex and medically fragile needs: A mixed methods study. BMC Pediatrics, 20(1), 1–14. https://doi.org/10.1186/s12887-020-02117-6

Danielis, M., Povoli, A., Mattiussi, E., & Palese, A. (2020). Understanding patients' experiences of being mechanically ventilated in the intensive care unit: Findings from a meta-synthesis and meta-summary. Journal of Clinical Nursing, 29(13–14), 2107–2124. https://doi.org/10.1111/jocn.15259

Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

Hall, G. M. (Ed.). (2012). How to write a paper. John Wiley & Sons.

Institute of Medicine. (2011). Finding what works in health care: Standards for systematic reviews. National Academies Press.

Jones, M. L. (2004). Application of systematic review methods to qualitative research: Practical issues. Journal of Advanced Nursing, 48(3), 271–278. https://doi.org/10.1111/j.1365-2648.2004.03196.x

Lunsford, T. R., & Lunsford, B. R. (1996). Research forum: How to critically read a journal research article. Journal of Prosthetics and Orthotics, 8(1), 24–31.

Meserve, J., Facciorusso, A., Holmer, A. K., Annese, V., Sandborn, W., & Singh, S. (2021). Systematic review with meta-analysis: Safety and tolerability of immune checkpoint inhibitors in patients with pre-existing inflammatory bowel diseases. Alimentary Pharmacology & Therapeutics, 53(3), 374–382.

Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G., for the PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ, 339, b2535. https://doi.org/10.1136/bmj.b2535
Polit, D. F., & Beck, C. T. (2017). Nursing research: Generating and assessing evidence for nursing practice (10th ed.). Lippincott Williams & Wilkins.

Rahmani, B., Aghebati, N., Esmaily, H., & Florczak, K. L. (2020). Nurse-led care program with patients with heart failure using Johnson's Behavioral System Model: A randomized controlled trial. Nursing Science Quarterly, 33(3), 204–214. https://doi.org/10.1177/0894318420932102

Saumure, K., & Given, L. M. (2012). Rigor in qualitative research. In L. M. Given (Ed.), The SAGE encyclopedia of qualitative research methods (pp. 795–796). SAGE Publishing.

Thiese, M. S., Ronna, B., & Ott, U. (2016). P value interpretations and considerations. Journal of Thoracic Disease, (9), E928–E931. https://doi.org/10.21037/jtd.2016.08.16

7 Evidence Appraisal: Nonresearch

A distinguishing feature and strength of EBP is the inclusion of multiple evidence sources.
In addition to research evidence, clinicians can draw from a range of nonresearch evidence to inform their practice. Such evidence includes personal, aesthetic, and ethical ways of knowing (Carper, 1978)—for example, the expertise, experience, and values of individual practitioners, patients, and patients' families. In this chapter, nonresearch evidence is divided into summaries of evidence (clinical practice guidelines, consensus or position statements, literature reviews); organizational experience (quality improvement and financial data); expert opinion (commentary or opinion, case reports); community standards; clinician experience; and consumer preferences. This chapter:

■ Describes types of nonresearch evidence
■ Explains strategies for evaluating such evidence
■ Recommends approaches for building clinicians' capacity to appraise nonresearch evidence to inform their practice

Summaries of Research Evidence

Summaries of research evidence such as clinical practice guidelines, consensus or position statements, integrative reviews, and literature reviews are excellent sources of information relevant to practice questions. These forms of evidence review and summarize all research, not just experimental studies. They are not themselves classified as research evidence because they are often not comprehensive and may not include an appraisal of study quality.
Clinical Practice Guidelines and Consensus/Position Statements (Level IV Evidence)

Clinical practice guidelines (CPGs), as defined by the Institute of Medicine (IOM) in 2011, are statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care (IOM, 2011). CPGs are tools designed to provide structured guidance about evidence-based care, which can decrease variability in healthcare delivery, improving patient outcomes (Abrahamson et al., 2012).

A key aspect of developing a valuable and trusted guideline is creation by a guideline development group representing stakeholders with a wide range of expertise, such as clinician generalists and specialists, content experts, methodologists, public health specialists, economists, patients, and advocates (Sniderman & Furberg, 2009; Tunkel & Jones, 2015). The expert panelists should provide full disclosure and accounting of how they addressed intellectual and financial conflicts that could influence the guidelines (Joshi et al., 2019). The guideline development group should use a rigorous process for assembling, evaluating, and summarizing the published evidence to develop the CPG recommendations (Ransohoff et al., 2013). The strength of the recommendations should be graded to provide transparency about the certainty of the data and values applied in the process (Dahm et al., 2009). The Grading of Recommendations, Assessment, Development and Evaluations (GRADE) strategy is a commonly used system to assess the strength of CPG recommendations (weak or strong), as well as the quality of evidence (high, moderate, low/very low) they are based on (Neumann et al., 2016). For example, a 1A recommendation is one that is strong and is based
on high-quality evidence. This system has been widely adopted by organizations such as the World Health Organization (WHO), as well as groups such as the American College of Chest Physicians (CHEST). Use of consistent grading systems can align evidence synthesis methods and result in more explicit and easier-to-understand recommendations for the end user (Diekemper et al., 2018).

Consensus or position statements (CSs) may be developed instead of a CPG when the available evidence is insufficient due to lack of high-quality evidence or conflicting evidence, or in scenarios where assessing the benefits and risks of an intervention is challenging (Joshi et al., 2019). Consensus statements are broad statements of best practice based on the consensus opinion of the convened expert panel and possibly small bodies of evidence; are most often meant to guide members of a professional organization in decision-making; and may not provide specific algorithms for practice (Lopez-Olivo et al., 2008).

Hundreds of different groups have developed several thousand different CPGs and CSs (Ransohoff et al., 2013). It has been noted that the methodological quality of CPGs varies considerably by developing organization, creating clinician concerns over the use of guidelines and potential impact on patients (Dahm et al., 2009). Formal methods have been developed to assess the quality of CPGs.
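As an aside, the GRADE-style labels mentioned earlier (such as 1A) simply pair a strength digit with an evidence-quality letter. The following is a minimal sketch of that labeling convention only; the dictionary and function names are illustrative and are not part of any official GRADE tool.

```python
# Hedged sketch of the GRADE-style labeling convention: a strength digit
# (1 = strong, 2 = weak/conditional) combined with an evidence-quality
# letter (A = high, B = moderate, C = low/very low). All names here are
# illustrative only.

STRENGTH_DIGIT = {"strong": "1", "weak": "2"}
QUALITY_LETTER = {"high": "A", "moderate": "B", "low": "C"}

def grade_label(strength: str, quality: str) -> str:
    """Combine strength and quality into a label such as '1A'."""
    return STRENGTH_DIGIT[strength] + QUALITY_LETTER[quality]

# A '1A' recommendation: strong, based on high-quality evidence.
print(grade_label("strong", "high"))  # -> 1A
```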
A group of researchers from 13 countries, the Appraisal of Guidelines Research and Evaluation (AGREE) Collaboration, developed a guideline appraisal instrument with documented reliability and validity. It has been shown that high-quality guidelines were more often produced by government-supported organizations or a structured, coordinated program (Fervers et al., 2005). The AGREE instrument, revised in 2013, now has 23 items and is organized into six domains (The AGREE Research Trust, 2013; Brouwers et al., 2010):

■ Scope and purpose
■ Stakeholder involvement
■ Rigor of development
■ Clarity of presentation
■ Applicability
■ Editorial independence

Despite the availability of the AGREE tool and others like it, the quality of guidelines still varies greatly in terms of how they are developed and how the results are reported (Kuehn, 2011). A recent evaluation of more than 600 CPGs found that while the quality of the CPGs has increased over time, the quality scores assessed by the tool have remained moderate to low (Alonso-Coello, 2010). In response to concern about CPG quality, an IOM committee was commissioned to study the CPG development process. The committee (IOM, 2011) developed a comprehensive set of criteria outlining "standards for trustworthiness" for clinical practice guideline development (see Table 7.1).

Table 7.1 Clinical Practice Guideline (CPG) Standards and Description

Establish transparency: Funding and development process should be publicly available.
Disclose conflict(s) of interest (COI): Individuals who create guidelines and panel chairs should be free from conflicts of interest. Funders are excluded from CPG development. All COIs of each guideline development group member should be disclosed.
Balance membership of guideline development group: Guideline developers should include multiple disciplines, patients, patient advocates, or patient consumer organizations.
Use systematic reviews: CPG developers should use systematic reviews that meet the IOM's Standards for Systematic Reviews of Comparative Effectiveness Research.
Rate strength of evidence and recommendations: Rating has specified criteria rating the level of evidence and strength of recommendations.
Articulate recommendations: Recommendations should follow a standard format and be worded so that compliance can be evaluated.
External review: External reviews should represent all relevant stakeholders. A draft of the CPG should be available to the public at the external review stage or directly afterward.
Update guidelines: CPGs should be updated when new evidence suggests the need, and the CPG publication date, date of systematic evidence review, and proposed date for future review should be documented.

For more than 20 years, the National Guideline Clearinghouse (NGC), an initiative of the Agency for Healthcare Research and Quality (AHRQ), US Department of Health and Human Services, was a source of high-quality guidelines and rigorous standards. This initiative was ended due to lack of funding in 2018.
At that time, the Emergency Care Research Institute (ECRI), a non-profit, independent organization serving the healthcare industry, committed to continuing the legacy of the NGC by creating the ECRI Guidelines Trust. The trust houses hundreds of guidelines on its website, which is free to access (https://guidelines.ecri.org). ECRI summarizes guidelines in snapshots and briefs and appraises them against the Institute of Medicine (IOM) Standards for Trustworthy Guidelines by using TRUST (Transparency and Rigor Using Standards of Trustworthiness) scorecards (ECRI, 2020). This source can be useful to EBP teams needing guidelines for evidence appraisals.

Literature Reviews (Level V Evidence)

Literature review is a broad term that refers to a summary or synthesis of published literature without systematic appraisal of evidence quality or strength. The terminology of literature reviews has evolved into many different types, with different search processes and degrees of rigor (Peters et al., 2015; Snyder, 2019; Toronto et al., 2018). Traditional literature reviews are not confined to scientific literature; a review may include nonscientific literature such as theoretical papers, reports of organizational experience, and opinions of experts. Such reviews possess some of the desired attributes of a systematic review, but not the same standardized approach, appraisal, and critical review of the studies. Literature review types also vary in completeness and often lack the intent of including all available evidence on a topic (Grant & Booth, 2009).
Qualities of different types of literature reviews and their products are outlined in Table 7.2.

Specific challenges may arise in conducting or reading a literature review. One challenge is that not all the articles returned in the search answer the specific questions being posed. When conducting a literature search, attention to the details of the search parameters—such as the Boolean operators or using the correct Medical Subject Headings (MeSH)—may provide a more comprehensive search, as described in Chapter 5. Shortcomings of available literature should be described in the limitations section. If only some of the articles answer the questions posed while reading a literature review, the reader must interpret the findings more carefully and may need to identify additional literature reviews that answer the remaining questions. Another common problem in literature reviews is double counting of study results, which may influence the results of the literature review. Double counting can take many forms, including simple double counting of the same study in two included meta-analyses, double counting of control arms between two interventions, imputing data missing from included studies, incomplete reporting of data in the included studies, and others (Senn, 2009). Recommendations to reduce the incidence of double counting include vigilance about double counting, making results verifiable, describing the analysis in detail, judging the process not the author in review, and creating a culture of correction (Senn, 2009).

Integrative Reviews (Level V Evidence)

An integrative review is more rigorous than a literature review but lacks the methodical rigor of a systematic review with or without meta-analysis. It summarizes evidence that is a combination of research and theoretical literature and draws from manuscripts using varied methodologies (e.g., experimental, non-experimental, qualitative).
The purpose of an integrative review varies widely compared to a systematic review; these purposes include summarizing evidence, reviewing theories, defining concepts, and other purposes. Well-defined and clearly presented search and selection strategies are critical. Because diverse methodologies may be combined in an integrative review, quality evaluation or further analysis of data is complex. Unlike the literature review, however, an integrative review analyzes, compares themes, and notes gaps in the selected literature (Whittemore & Knafl, 2005).

Table 7.2 Types of Literature Review

Literature review: An examination of current literature on a topic of interest. The purpose is to create context for an inquiry topic. Lacks a standardized approach for critical appraisal and review. Often includes diverse types of evidence. Result: summation or identification of gaps.
Critical review: Extensive literature research from diverse sources, but lacks systematicity. Involves analysis and synthesis of results, but focuses on the conceptual contribution of the papers, not their quality. Result: a hypothesis or model.
Rapid review: Time-sensitive assessment of current knowledge of a practice or policy issue, using systematic review methods. Saves time by focusing on a narrow scope, using a less comprehensive search, extracting only key variables, or performing less rigorous quality appraisal. Increased risk of bias due to limited time frame of literature or quality analysis.
Result: timely review of a current event or policy.
Qualitative systematic review: Method to compare or integrate findings from qualitative studies, to identify themes or constructs in or across qualitative studies. Useful when knowledge of preferences and attitudes is needed. Standards for performing this type of review are in early stages, so rigor may vary. Result: new theory, narrative, or wider understanding.
Scoping review: Determines the nature and extent of available research evidence, or maps a body of literature to identify the boundaries of research evidence. Limitations in rigor and duration increase risk of bias. Limited quality assessment. Result: identifies gaps in research, clarifies key concepts, reports on types of evidence.
State-of-the-art review: A type of literature review that addresses more current matters than a traditional literature review. The review may encompass only a recent period, so it could miss important earlier works. Result: new perspectives on an issue or an area in need of research.
Systematized review: Includes some, but not all, elements of a systematic review. Search strategies are typically more systematic than in other literature reviews, but synthesis and quality assessment are often lacking. Result: forms a basis for a further complete systematic review or dissertation.

Interpreting Evidence From Summaries of Research Evidence

Evaluating the quality of research that composes a body of evidence, for the purpose of developing CPGs, CSs, or performing a literature review, can be difficult.
In 1996, editors of leading medical journals and researchers developed an initial set of guidelines for reporting results of randomized controlled clinical trials, which resulted in the CONsolidated Standards of Reporting Trials (CONSORT) Statement (Altman & Simera, 2016). Following revisions to the initial CONSORT flowchart and checklist, the Enhancing the QUality and Transparency Of health Research (EQUATOR) program was started in 2006. Since then, the EQUATOR Network has developed reporting guidelines for many different types of research, including observational studies, systematic reviews, case reports, qualitative studies, quality improvement reports, and clinical practice guidelines (EQUATOR Network, n.d.). These guidelines have made it easier to assess the quality of research reports. However, no similar guidelines exist for assessing the quality of nonsystematic literature reviews or integrative reviews.

The Institute of Medicine (IOM, now the National Academy of Medicine) publication Clinical Practice Guidelines We Can Trust established standards for CPGs to be "informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options" (IOM, 2011, p. 4). This standardization reduced the number of guidelines available in the National Guideline Clearinghouse by nearly 50% (Shekelle, 2018).
However, the number of guidelines then rapidly increased, often covering the same topic but developed by different organizations, which led to redundancy if the guidelines were similar, or uncertainty when guidelines differed between organizations (Shekelle, 2018). The National Guideline Clearinghouse free access was eliminated in 2018 and replaced with the ECRI Trust. Limited evidence or conflicting recommendations require that the healthcare professional utilize critical thinking and clinical judgment when making clinical recommendations to healthcare consumers. Guidelines are intended to apply to the majority of clinical situations, but the unique needs of specific patients may also require the use of clinician judgment.

Clinical practice guideline development, as well as utilization, also needs to consider health inequity: avoidable differences in health that are rooted in lack of fairness or injustice (Welch et al., 2017). Characteristics to consider include the acronym PROGRESS-Plus: place of residence; race/ethnicity/culture/language; occupation; gender; religion; education; socioeconomic status; social capital; plus others including age, disability, sexual orientation, time-dependent situations, and relationships. Barriers to care associated with these characteristics are related to access to care/systems issues or provider/patient behaviors, attitudes, and conscious or unconscious biases (Welch et al., 2017). Some of these characteristics merit their own guideline; for example, a guideline related to asthma care for adults may not be applicable to the care of children with asthma.
Because guidelines are developed from systematic reviews, EBP teams should consider that although groups of experts create these guidelines, which frequently carry a professional society's seal of approval, the opinions of guideline developers that convert data to recommendations require subjective judgments that, in turn, leave room for error and bias (Mims, 2015). Actual and potential conflicts of interest are increasingly common within organizations and among experts who create CPGs. Conflicts of interest may encompass more than industry relationships; for example, a guideline that recommends increased medical testing and visits may also serve the interests of clinicians, if they are the sole or predominant members of the guideline development group (Shekelle, 2018). Conflicts of interest may also be the result of financial or leadership interests, job descriptions, personal research interests, or volunteer work for organizations, among others. The IOM panel recommended that, whenever possible, individuals who create the guidelines should be free from conflicts of interest; if that is not possible, however, those individuals with conflicts of interest should make up a minority of the panel and should not serve as chairs or cochairs (IOM, 2011). In addition, specialists who may benefit from implementation of the guideline should be in the minority.
Professional associations have purposes besides development of clinical practice guidelines, including publishing, providing education, and advocacy for public health as well as for their members through political lobbying (Nissen, 2017). Relationships between the medical industry, professional associations, and experts who develop guidelines must be carefully assessed for actual or potential conflicts of interest, and these conflicts must be transparently disclosed and managed. Relationships between healthcare providers and the medical industry are not inherently bad; these relationships foster innovation and development, allow a partnership of shared expertise, and keep clinicians informed of advances in treatment as well as their safe and effective use (Sullivan, 2018). However, there is potential for undue influence, so these relationships must be leveraged with transparency to prevent abuses (Sullivan, 2018).

Key elements to note when appraising Level IV evidence and rating evidence quality are identified in Table 7.3 and in the JHNEBP Nonresearch Evidence Appraisal Tool (see Appendix F). Not all these elements are required, but the attributes listed are some of the evaluative criteria.

Table 7.3 Desirable Attributes of Documents Used to Answer an EBP Question

Applicability to topic: Does the document address the particular practice question of interest (same intervention, same population, same setting)?
Comprehensiveness of search strategy: Do the authors identify search strategies beyond the typical databases, such as PubMed, PsycInfo, and CINAHL? Are published and unpublished works included?
Methodology: Do the authors clearly specify how inclusion and exclusion criteria were applied? Do the authors specify how data were analyzed?
Consistency of findings: Are the findings organized and synthesized in a clear and cohesive manner? Are tables organized, and do they summarize the findings in a concise manner?
Study quality assessment: Do the authors clearly describe how the review addresses study quality?
Limitations: Are methodological limitations disclosed? Has double counting been assessed?
Conclusions: Do the conclusions appear to be based on the evidence and capture the complexity of the topic?
Collective expertise: Was the review and synthesis done by an expert or group of experts?

Adapted from Conn (2004), Stetler et al. (1998), and Whittemore (2005).

Organizational Experience

Organizational experience often takes the form of quality improvement (QI) and economic or program evaluations. These sources of evidence can occur at any level in the organization and can be internal to an EBP team's organization or published reports from external organizations.

Quality Improvement Reports (Level V Evidence)

The Department of Health and Human Services defines quality improvement (QI) as "consisting of systematic and continuous actions that lead to measurable improvement in health care services and health status of targeted patient groups" (Connelly, 2018, p. 125). The term is used interchangeably with quality management, performance improvement, total quality management, and continuous quality improvement (Yoder-Wise, 2014). These terms refer to the application
of improvement practices using tools and methods to examine workflows, processes, or systems within a specific organization with the aim of securing positive change in a particular service (Portela et al., 2015). QI uses process improvement techniques adapted from industry, such as Lean and Six Sigma frameworks, which employ incremental, cyclically implemented changes with Plan-Do-Study-Act (PDSA) cycles (Baker et al., 2014).

QI projects produce evidence of valuable results in local practice and may be published as quality improvement reports in journals (Carter et al., 2017). EBP teams are reminded that the focus of QI studies is to determine whether an intervention works to improve processes, and not necessarily for scientific advancement, which is the focus of health services research. Thus, lack of generalizability of results is a weakness, as is lack of structured explanations of mechanisms of change and low quality of reports (Portela et al., 2015).

During their review of nonresearch evidence, EBP team members should examine internal QI data relating to the practice question as well as QI initiatives based on similar questions published by peer institutions. As organizations become more mature in their QI efforts, they become more rigorous in the approach, the analysis of results, and the use of established measures as metrics (Newhouse et al., 2006). Organizations that may benefit from QI reports' published findings need to make decisions regarding implementation based on the characteristics of their organization.
As the number of quality improvement reports has grown, so has concern about the quality of reporting. In an effort to reduce uncertainty about what information should be included in scholarly reports of health improvement, the Standards for Quality Improvement Reporting Excellence (SQUIRE) were published in 2008 and revised in 2015 (http://www.squire-statement.org). The SQUIRE guidelines list and explain items that authors should consider including in a report of system-level work to improve healthcare (Ogrinc et al., 2015). Although evidence obtained from QI initiatives is not as strong as that obtained by scientific inquiry, the sharing of successful QI stories has the potential to identify future EBP questions, QI projects, and research studies external to the organization.

An example of a quality improvement project is a report from an emergency department (ED) and medical intensive care unit (MICU) on transfer time delays of critically ill patients from the ED to the MICU (Cohen et al., 2015). Using a clinical microsystems approach, the existing practice patterns were identified, and multiple causes that contributed to delays were determined. The Plan-Do-Study-Act model was applied in each intervention to reduce delays. The intervention reduced transfer time by 48% by improving coordination in multiple stages. This Level V evidence is from one institution that implemented a quality improvement project.
Economic Evaluation (Level V Evidence)

Economic measures in healthcare facilities provide data to assess the costs associated with practice changes. Cost savings assessments can be powerful information as the best practice is examined. In these partial economic evaluations, there is not a comparison of two or more alternatives but rather an explanation of the cost to achieve a particular outcome. Economic evaluations intended to evaluate quality improvement interventions in particular are mainly concerned with determining whether the investment in the intervention is justifiable (Portela et al., 2015).

Full economic evaluations apply analytic techniques to identify, measure, and compare the costs and outcomes of two or more alternative programs or interventions (Centers for Disease Control and Prevention [CDC], 2007). Costs in an economic analysis framework are the value of resources, either theoretical or monetary, associated with each treatment or intervention; the consequences are the health effects of the intervention (Gray & Wilkinson, 2016). A common economic evaluation for healthcare decision-making is a cost-effectiveness analysis, which compares the costs of alternative interventions that produce a common health outcome in terms of clinical units (e.g., years of life). Although the results of such an analysis can provide justification for a program, empirical evidence can provide support for an increase in program funding or a switch from one program to another (CDC, 2007). Another type of full economic evaluation is cost-benefit analysis. In this type of economic evaluation, both costs and benefits (or health effects) are expressed in monetary units (Gomersall et al., 2015). An EBP team
Created from ucf on 2022-09-10 23:04:58. Copyright © 2021. Sigma Theta Tau International. All rights reserved. 7 Evidence Appraisal: Nonresearch 176 can find reports of cost effectiveness and economic evaluations (Leve l V) in published data or internal organizational reports. One example is “Th e Value of Reducing Hospital-Acquired Pressure Ulcer Prevalence” (Spetz et al., 2013). This study assessed the cost savings associated with implementing nursing app roaches to prevent hospital-acquired pressure ulcers (HAPU). Financial data can be evaluated as listed on the JHNEBP Nonresearch Evid ence Appraisal Tool (Appendix F). When reviewing reports including economic analyses, examine the aim, method, measures, results, and discussion for clarity. Carande-Kulis et al. (2000) recommend that standard inclusion criteria for eco - nomic studies have an analytic method and provide sufficient detail re garding the method and results. It is necessary to assess the methodological quality of studies addressing questions about cost-savings and cost-effectiveness (Gomersa ll et al., 2015). The Community Guide “Economic Evaluation Abstraction Form” (2010), which can be used to assess the quality of economic evaluations, suggest s consid - ering the following questions: ■ Was the study population well described? ■ Was the question being analyzed well defined? ■ Did the study define the time frame? ■ Were data sources for all costs reported? ■ Were data sources and costs appropriate with respect to the program and population being tested? ■ Was the primary outcome measure clearly specified? ■ Did outcomes include the effects or unintended outcomes of the program? ■ Was the analytic model reported in an explicit manner? ■ Were sensitivity analyses performed? When evaluating an article with a cost analysis, it is important to reco gnize that not all articles that include an economic analysis are strictly financ ial evaluations. 
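As a rough numerical illustration of the cost-effectiveness comparison described above, an incremental cost-effectiveness ratio (ICER) divides the extra cost of one intervention over an alternative by the extra health effect gained. The sketch below uses hypothetical numbers, not data from any study cited in this chapter.

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of health effect (e.g., per quality-adjusted life year)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical programs: costs in dollars, effects in QALYs gained per patient
ratio = icer(cost_new=12000, effect_new=1.5, cost_old=9000, effect_old=1.2)
print(f"ICER: ${ratio:,.0f} per QALY gained")  # about $10,000 per QALY
```

A decision-maker would then judge whether that incremental cost per unit of health gain is justifiable; in a cost-benefit analysis, the health effects themselves would instead be converted to monetary units before comparison.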
Some may use rigorous research designs and should be appraised using the Research Evidence Appraisal Tool (see Appendix E). For example, a report by Yang, Hung, and Chen (2015) evaluated the impact of different nursing staffing models on patient safety, quality of care, and nursing costs. Three mixed models of nursing staffing, in which the proportion of nurses compared with nurse aides was 76% (n = 213), 100% (n = 209), and 92% (n = 245), were applied during three different periods between 2006 and 2010. Results indicated that units with a 76% proportion of RNs made fewer medication errors and had a lower rate of ventilation weaning, and units with a 92% RN proportion had a lower rate of bloodstream infections. The 76% and 92% RN groups showed increased urinary tract infections and nursing costs (Yang et al., 2015). After a review of this study, the EBP team would discover that this was actually a descriptive, retrospective cohort design study, which is why use of research appraisal criteria to judge the strength and quality of evidence would be more appropriate.

Program Evaluation (Level V Evidence)

Program evaluation is the systematic assessment of all components of a program through the application of evaluation approaches, techniques, and knowledge to improve the planning, implementation, and effectiveness of programs (Chen, 2006).
To understand program evaluation, we must first define “program.” A program has been described as the mechanism and structure used to deliver an intervention or a set of synergistically related interventions (Issel, 2016). Programs can take a variety of forms but have in common the need for thorough planning to identify appropriate interventions and the development of organizational structure in order to effectively implement the program interventions. Monitoring and evaluation should follow so that findings can be used not only for continued assessment and refinement of the program and its interventions but also to measure the position, validity, and outcomes of the program (Issel, 2016; Whitehead, 2003).

Although program evaluations are commonly conducted within a framework of scientific inquiry and designed as research studies, most internal program evaluations are less rigorous (Level V). Frequently, they comprise pre- or post-implementation data at the organizational level accompanied by qualitative reports of personal satisfaction with the program. For example, in a program evaluation of a patient navigation program, program value and effectiveness were assessed in terms of timeliness of access to cancer care, resolution of barriers, and satisfaction in 55 patients over a six-month period (Koh et al., 2010).
While these measures may be helpful in assessing this particular program, they are not standard, accepted measures that serve as benchmarks; thus, close consideration of the methods and findings is crucial for the EBP team when considering this type of evidence (AHRQ, 2018).

Expert Opinion (Level V Evidence)

Expert opinions are another potential source of valuable information. Consisting of views or judgments by subject matter experts and based on their combined knowledge and experience in a particular topic area, these can include case reports, commentary articles, podcasts, written or oral correspondence, and letters to the editor or “op-ed” pieces. Assessing the quality of this evidence requires that the EBP team do their due diligence in vetting the author’s expert status. Characteristics to consider include education and training, existing body of work, professional and academic affiliations, and previous publications and communications in the area of interest. For instance, is the author recognized by state, regional, national, or international groups for their expertise? To what degree have their publications been cited by others? Do they have a history of being invited to give lectures or speak at conferences about the issue?

One exemplar is an article by Davidson and Rahman (2019), who share their expert opinion on the evolving role of the Clinical Nurse Specialist (CNS) in critical care. This article could be rated as high quality because the authors are both experienced CNSs, are doctorally prepared, belong to several well-established professional organizations, and are leaders in their respective specialties.

Case Report (Level V Evidence)

Case reports are among the oldest forms of medical scholarship and, although appreciation for them has waxed and waned over time (Nissen & Wynn, 2014), they are recently seeing a revival (Bradley, 2018). Providing clinical description of a particular patient, visit, or encounter, case reports can help support development of clinicians’ judgment, critical thinking, and decision-making (Oermann & Hays, 2016). They frequently involve unique, interesting, or rare presentations and illustrate successful or unsuccessful care delivery (Porcino, 2016). Multiple case reports can be presented as a case series, comparing and contrasting various aspects of the clinical issue of interest (Porcino, 2016). Case reports and case series are limited in their generalizability, lack controls, and entail selection bias (Sayre et al., 2017), and, being based on experiential and nonresearch evidence, are categorized as Level V.

Case studies, in comparison, are more robust, intensive investigations of a case. They make use of quantitative or qualitative data, can employ statistical analyses, investigate the case of interest over time, or evaluate trends. Thus, case studies are considered quantitative, qualitative, or mixed-methods research (Sandelowski, 2011) and should be evaluated as such. (See Chapter 6 for more information about the different types of research.)

Community Standard (Level V Evidence)

For some EBP topics, it is important to gather information on community practice standards. To do so, the team identifies clinicians, agencies, or organizations to contact for relevant insights, determines their questions, and prepares to collect the data in a systematic manner.
There are myriad ways to access communities: email, social media, national or regional conferences, online databases, or professional organization listserv forums. For instance, the Society of Pediatric Nurses has a web-based discussion forum for members to query other pediatric nurses on various practice issues and current standards (http://www.pedsnurses.org/p/cm/ld/fid=148).

In an example of investigating community standards, Johns Hopkins University School of Nursing students were assisting with an EBP project asking: “Does charge nurse availability during the shift affect staff nurse satisfaction with workflow?” An EBP team member contacted local hospitals to determine whether charge nurses had patient assignments. Students developed a data sheet with questions about the healthcare facility, the unit, the staffing pattern, and staff comments about satisfaction. The students reported the number of units contacted and responses, information source, and number of sources using the Nonresearch Evidence Appraisal Tool (Appendix F). Additionally, this approach provided an opportunity to network with other clinicians about a clinical issue.

Clinician Experience (Level V Evidence)

Clinician experience—gained through first-person involvement with and observation of clinical practice—is another possible EBP information source. In the increasingly complex, dynamic healthcare environment, interprofessional teams must work collaboratively to provide safe, high-quality, and cost-effective care.
This collaboration allows many opportunities for collegial discussion and sharing of past experiences. Newer clinicians tend to rely more heavily on structured guidelines, protocols, and decision aids while increasing their practical skillset. With time and exposure to various practice situations, experiences build clinical expertise, allowing for intuitive and holistic understanding of patients and their care needs. When seeking clinician experience to inform an EBP project, the team should evaluate the information source’s experiential credibility, the clarity of opinion expressed, and the degree to which evidence from various experienced clinicians is consistent.

Patient/Consumer Experience (Level V Evidence)

Patients are consumers of healthcare, and the term consumer also refers to a larger group of individuals using health services in a variety of settings. A patient-centered healthcare model recognizes an individual’s unique health needs and desired outcomes as the driving force of all healthcare decisions and quality measurements (NEJM Catalyst, 2017). Patient-centered healthcare expects that patients and families play an active, collaborative, and shared role in decisions, both at an individual level and a system level. Professional organizations and healthcare institutions are increasingly incorporating patient and family expertise into guidelines and program development. Guidelines that do not recognize the importance of the patient’s lived experience are set up to fail when the guidelines do not meet the needs of patients and families. Unique characteristics related to personal and cultural values shape an individual’s experience of health and their goals for health (DelVecchio Good & Hannah, 2015).

The expert healthcare provider incorporates patient preferences into clinical decision-making by asking the following questions:

■ Are the research findings and nonresearch evidence relevant to this particular patient’s care?
■ Have all care and treatment options based on the best available evidence been presented to the patient?
■ Has the patient been given as much time as necessary to allow for clarification and consideration of options?
■ Have the patient’s expressed preferences and concerns been considered when planning care?

Answering these questions requires ethical practice and respect for a patient’s autonomy. Healthcare providers should also carefully assess the patient/family’s level of understanding and provide additional information or resources if needed. Combining sensitivity to and understanding of individual patient needs with thoughtful application of best evidence leads to optimal patient-centered outcomes. The mission of the Patient-Centered Outcomes Research Institute (PCORI), established by the Affordable Care Act of 2010, is to “help people make informed healthcare decisions, and improve healthcare delivery and outcomes, by producing and promoting high-integrity, evidence-based information that comes from research guided by patients, caregivers, and the broader healthcare community” (PCORI, n.d., para. 2). To achieve this goal, PCORI engages stakeholders (including patients, clinicians, researchers, payers, industry, purchasers, hospitals and healthcare systems, policymakers, and educators) in the components of comparative-effectiveness research through stakeholder input, consultation, collaboration, and shared leadership (PCORI, n.d.).
Engaging consumers of healthcare in EBP goes beyond individual patient encounters. Consumer organizations can play a significant role in supporting implementation and utilization of EBP. Consumer-led activities can take the form of facilitating research to expedite equitable adoption of new and existing best practices, promoting policies for the development and use of advocacy tool kits, and influencing provider adoption of EBP (DelVecchio Good & Hannah, 2015). Many consumer organizations focus on patient safety initiatives, such as Campaign Zero and preventable hospital harms, or the Josie King Foundation and medical errors. In examining the information provided by consumers, the EBP team should consider the credibility of the individual or group. What segment and volume of the consumer group do they represent? Do their comments and opinions provide any insight into your EBP question?

Best Practices Companies

Companies that provide consultative business development services are a relatively recent addition to sources of evidence and best practices. One example, founded in 1979, is The Advisory Board Company; its current mission is to improve healthcare by providing evidence and strategies for implementing best practices (Advisory Board, 2020). The Advisory Board Company has many specific subgroups based on clinical specialty (e.g., cardiovascular, oncology, imaging) and professions (physician executive and nursing executive, among others).
Membership allows access to a wide variety of resources, including best practices research, strategic and leadership consultation, and organizational benchmarking using proprietary databases (Advisory Board, 2020). Benefits of utilizing this type of data in development of EBP are the power of collective, international experiences and the ability to source data from one’s own institution, but EBP teams should also consider the cost of membership as well as the inherent focus on organizational efficiency, which may lead to some degree of bias.

Recommendations for Healthcare Leaders

Time and resource constraints compel leaders to find creative ways to support integration of new knowledge into clinical practice. The amount of time the average staff member can devote to gathering and appraising evidence is limited. Therefore, finding the most efficient way to gain new knowledge should be a goal of EBP initiatives. Healthcare leaders should not only support staff education initiatives that teach how to read and interpret nonresearch evidence but also become familiar themselves with the desired attributes of such information so that they can serve as credible mentors in the change process.

Another challenge for clinicians is to combine the contributions of the two evidence types (research and nonresearch) in making patient care decisions.
According to Melnyk and Fineout-Overholt (2006), no “magic bullet” or standard formula exists with which to determine how much weight should be applied to each of these factors when making patient care decisions. It is not sufficient to apply a standard rating system that grades the strength and quality of evidence without determining whether recommendations made by the best evidence are compatible with the patient’s values and preferences and the clinician’s expertise. Healthcare leaders can best support EBP by providing clinicians with the knowledge and skills necessary to appraise quantitative and qualitative research evidence within the context of nonresearch evidence. Only through continuous learning can clinicians and care teams gain the confidence needed to incorporate the broad range of evidence into the more targeted care of individual patients.

Summary

This chapter describes nonresearch evidence and strategies for evaluating this evidence and recommends approaches for building clinicians’ capacity to appraise nonresearch evidence to inform their practice. Nonresearch evidence includes summaries of evidence (clinical practice guidelines, consensus or position statements, literature reviews); organizational experience (quality improvement and financial data); expert opinion (individual commentary or opinion, case reports); community standards; clinician experience; and consumer experience. This evidence includes important information for practice decisions. For example, consumer preference is an essential element of the EBP process with increased focus on patient-centered care. In summary, although nonresearch evidence does not have the rigor of research evidence, it does provide important information for informed practice decisions.
References

Abrahamson, K. A., Fox, R. L., & Doebbeling, B. N. (2012). Facilitators and barriers to clinical practice guideline use among nurses. American Journal of Nursing, 112(7), 26–35. https://doi.org/10.1097/01.NAJ.0000415957.46932.bf

Advisory Board. (2020). About us. https://www.advisory.com/en/about-us

Agency for Healthcare Research and Quality. (2018). Patient self-management support programs: An evaluation. https://www.ahrq.gov/research/findings/final-reports/ptmgmt/evaluation.html

The AGREE Research Trust. (2013). The AGREE II instrument. http://www.agreetrust.org/wp-content/uploads/2013/10/AGREE-II-Users-Manual-and-23-item-Instrument_2009_UPDATE_2013.pdf

Alonso-Coello, P., Irfan, A., Sola, I., Delgado-Noguera, M., Rigau, D., Tort, S., Bonfill, X., Burgers, J., & Schünemann, H. (2010). The quality of clinical practice guidelines over the past two decades: A systematic review of guideline appraisal studies. BMJ Quality & Safety, 19(6), e58. https://doi.org/10.1136/qshc.2010.042077

Altman, D. G., & Simera, I. (2016). A history of the evolution of guidelines for reporting medical research: The long road to the EQUATOR Network. Journal of the Royal Society of Medicine, 109(2), 67–77. https://doi.org/10.1177/0141076815625599

Baker, K. M., Clark, P. R., Henderson, D., Wolf, L. A., Carman, M. J., Manton, A., & Zavotsky, K. E. (2014). Identifying the differences between quality improvement, evidence-based practice, and original research. Journal of Emergency Nursing, 40(2), 195–197. https://doi.org/10.1016/j.jen.2013.12.016

Bradley, P. J. (2018). Guidelines to authors publishing a case report: The need for quality improvement. AME Case Reports, 2(4). https://doi.org/10.21037/acr.2018.04.02

Brouwers, M. C., Kho, M. E., Browman, G. P., Burgers, J. S., Cluzeau, F., Feder, G., Fervers, B., Graham, I. D., Hanna, S. E., & Makarski, J. (2010). Development of the AGREE II, part 1: Performance, usefulness and areas for improvement. Canadian Medical Association Journal, 182(10), 1045–1062. https://doi.org/10.1503/cmaj.091714

Carande-Kulis, V. G., Maciosek, M. V., Briss, P. A., Teutsch, S. M., Zaza, S., Truman, B. I., Messonnier, M. L., Pappaioanou, M., Harris, J. R., & Fielding, J. (2000). Methods for systematic reviews of economic evaluations for the Guide to Community Preventive Services. American Journal of Preventive Medicine, 18(1S), 75–91. https://doi.org/10.1016/s0749-3797(99)00120-8

Carper, B. A. (1978). Fundamental patterns of knowing in nursing. ANS Advances in Nursing Science, 1(1), 13–24. https://doi.org/10.1097/00012272-197810000-00004

Carter, E. J., Mastro, K., Vose, C., Rivera, R., & Larson, E. L. (2017). Clarifying the conundrum: Evidence-based practice, quality improvement, or research? The clinical scholarship continuum. Journal of Nursing Administration, 47(5), 266–270. https://doi.org/10.1097/NNA.0000000000000477

Centers for Disease Control and Prevention. (2007). Economic evaluation of public health preparedness and response efforts. http://www.cdc.gov/owcd/EET/SeriesIntroduction/TOC.html

Chen, H. T. (2006). A theory-driven evaluation perspective on mixed methods research. Research in the Schools, 13(1), 75–83.

Cohen, R. I., Kennedy, H., Amitrano, B., Dillon, M., Guigui, S., & Kanner, A. (2015). A quality improvement project to decrease emergency department and medical intensive care unit transfer times. Journal of Critical Care, 30(6), 1331–1337. https://doi.org/10.1016/j.jcrc.2015.07.017

Community Guide economic evaluation abstraction form, Version 4.0. (2010). https://www.thecommunityguide.org/sites/default/files/assets/EconAbstraction_v5.pdf

Conn, V. S. (2004). Meta-analysis research. Journal of Vascular Nursing, 22(2), 51–52. https://doi.org/10.1016/j.jvn.2004.03.002

Connelly, L. (2018). Overview of quality improvement. MEDSURG Nursing, 27(2), 125–126.

Dahm, P., Yeung, L. L., Galluci, M., Simone, G., & Schünemann, H. J. (2009). How to use a clinical practice guideline. The Journal of Urology, 181(2), 472–479. https://doi.org/10.1016/j.juro.2008.10.041

Davidson, P. M., & Rahman, A. R. (2019). Time for a renaissance of the clinical nurse specialist role in critical care? AACN Advanced Critical Care, 30(1), 61–64. https://doi.org/10.4037/aacnacc2019779

DelVecchio Good, M.-J., & Hannah, S. D. (2015). “Shattering culture”: Perspectives on cultural competence and evidence-based practice in mental health services. Transcultural Psychiatry, 52(2), 198–221. https://doi.org/10.1177/1363461514557348

Diekemper, R. L., Patel, S., Mette, S. A., Ornelas, J., Ouellette, D. R., & Casey, K. R. (2018). Making the GRADE: CHEST updates its methodology. Chest, 153(3), 756–759. https://doi.org/10.1016/j.chest.2016.04.018

Emergency Care Research Institute (ECRI). (2020). https://guidelines.ecri.org/about-trust-scorecard/

EQUATOR Network. (n.d.). Reporting guidelines for main study types. EQUATOR Network: Enhancing the QUAlity and Transparency of Health Research. https://www.equator-network.org

Fervers, B., Burgers, J. S., Haugh, M. C., Brouwers, M., Browman, G., Cluzeau, F., & Philip, T. (2005). Predictors of high quality clinical practice guidelines: Examples in oncology. International Journal for Quality in Health Care, 17(2), 123–132. https://doi.org/10.1093/intqhc/mzi011

Gomersall, J. S., Jadotte, Y. T., Xue, Y., Lockwood, S., Riddle, D., & Preda, A. (2015). Conducting systematic reviews of economic evaluations. International Journal of Evidence-Based Healthcare, 13(3), 170–178. https://doi.org/10.1097/XEB.0000000000000063

Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

Gray, A. M., & Wilkinson, T. (2016). Economic evaluation of healthcare interventions: Old and new directions. Oxford Review of Economic Policy, 32(1), 102–121.

Institute of Medicine. (2011). Clinical practice guidelines we can trust. The National Academies Press. http://www.nationalacademies.org/hmd/Reports/2011/Clinical-Practice-Guidelines-We-Can-Trust/Standards.aspx

Issel, L. M. (2016). Health program planning and evaluation: What nurse scholars need to know. In J. R. Bloch, M. R. Courtney, & M. L. Clark (Eds.), Practice-based clinical inquiry in nursing for DNP and PhD research: Looking beyond traditional methods (1st ed., pp. 3–16). Springer Publishing Company.

Joshi, G. P., Benzon, H. T., Gan, T. J., & Vetter, T. R. (2019). Consistent definitions of clinical practice guidelines, consensus statements, position statements, and practice alerts. Anesthesia & Analgesia, 129(6), 1767–1769. https://doi.org/10.1213/ANE.0000000000004236

Koh, C., Nelson, J. M., & Cook, P. F. (2010). Evaluation of a patient navigation program. Clinical Journal of Oncology Nursing, 15(1), 41–48. https://doi.org/10.1188/11.CJON.41-48

Kuehn, B. M. (2011). IOM sets out “gold standard” practices for creating guidelines, systematic reviews. JAMA, 305(18), 1846–1848.

Lopez-Olivo, M. A., Kallen, M. A., Ortiz, Z., Skidmore, B., & Suarez-Almazor, M. E. (2008). Quality appraisal of clinical practice guidelines and consensus statements on the use of biologic agents in rheumatoid arthritis: A systematic review. Arthritis & Rheumatism, 59(11), 1625–1638. https://doi.org/10.1002/art.24207

Melnyk, B. M., & Fineout-Overholt, E. (2006). Consumer preferences and values as an integral key to evidence-based practice. Nursing Administration Quarterly, 30(2), 123–127. https://doi.org/10.1097/00006216-200604000-00009

Mims, J. W. (2015). Targeting quality improvement in clinical practice guidelines. Otolaryngology—Head and Neck Surgery, 153(6), 907–908. https://doi.org/10.1177/0194599815611861

NEJM Catalyst. (2017, January 1). What is patient-centered care? NEJM Catalyst. https://catalyst.nejm.org/doi/abs/10.1056/CAT.17.0559

Neumann, I., Santesso, N., Akl, E. A., Rind, D. M., Vandvik, P. O., Alonso-Coello, P., Agoritsas, T., Mustafa, R. A., Alexander, P. E., Schünemann, H., & Guyatt, G. H. (2016). A guide for health professionals to interpret and use recommendations in guidelines developed with the GRADE approach. Journal of Clinical Epidemiology, 72, 45–55. https://doi.org/10.1016/j.jclinepi.2015.11.017

Newhouse, R. P., Pettit, J. C., Poe, S., & Rocco, L. (2006). The slippery slope: Differentiating between quality improvement and research. Journal of Nursing Administration, 36(4), 211–219. https://doi.org/10.1097/00005110-200604000-00011

Nissen, S. E. (2017). Conflicts of interest and professional medical associations: Progress and remaining challenges. JAMA, 317(17), 1737–1738. https://doi.org/10.1001/jama.2017.2516

Nissen, T., & Wynn, R. (2014). The clinical case report: A review of its merits and limitations. BMC Research Notes, 7, 264. https://doi.org/10.1186/1756-0500-7-264

Oermann, M. H., & Hays, J. C. (2016). Writing for publication in nursing (3rd ed.). Springer.

Ogrinc, G., Davies, L., Batalden, P., Davidoff, F., Goodman, D., & Stevens, D. (2015). SQUIRE 2.0. http://www.squire-statement.org

Patient-Centered Outcomes Research Institute. (n.d.). Our vision & mission. https://www.pcori.org/about-us/our-vision-mission

Peters, M. D. J., Godfrey, C. M., Khalil, H., McInerney, P., Parker, D., & Soares, C. B. (2015). Guidance for conducting systematic scoping reviews. International Journal of Evidence-Based Healthcare, 13(3), 141–146. https://doi.org/10.1097/xeb.0000000000000050

Porcino, A. (2016). Not birds of a feather: Case reports, case studies, and single-subject research. International Journal of Therapeutic Massage & Bodywork, 9(3), 1–2. https://doi.org/10.3822/ijtmb.v9i3.334

Portela, M. C., Pronovost, P. J., Woodcock, T., Carter, P., & Dixon-Woods, M. (2015). How to study improvement interventions: A brief overview of possible study types. BMJ Quality and Safety, 24(5), 325–336. https://doi.org/10.1136/bmjqs-2014-003620

Ransohoff, D. F., Pignone, M., & Sox, H. C. (2013). How to decide whether a clinical practice guideline is trustworthy. Journal of the American Medical Association, 309(2), 139–140. https://doi.org/10.1001/jama.2012.156703

Sandelowski, M. (2011). “Casing” the research case study. Research in Nursing & Health, 34(2), 153–159. https://doi.org/10.1002/nur.20421

Sayre, J. W., Toklu, H. Z., Ye, F., Mazza, J., & Yale, S. (2017). Case reports, case series—From clinical practice to evidence-based medicine in graduate medical education. Cureus, 9(8), e1546. https://doi.org/10.7759/cureus.1546

Senn, S. J. (2009). Overstating the evidence—Double counting in meta-analysis and related problems. BMC Medical Research Methodology, 9, 10. https://doi.org/10.1186/1471-2288-9-10

Shekelle, P. G. (2018). Clinical practice guidelines: What’s next? JAMA, 320(8), 757–758. https://doi.org/10.1001/jama.2018.9660

Sniderman, A. D., & Furberg, C. D. (2009). Why guideline-making requires reform. Journal of the American Medical Association, 301(4), 429–431. https://doi.org/10.1001/jama.2009.15

Snyder, H. (2019). Literature review as a research methodology: An overview and guidelines. Journal of Business Research, 104, 333–339. https://doi.org/10.1016/j.jbusres.2019.07.039

Spetz, J., Brown, D. S., Aydin, C., & Donaldson, N. (2013). The value of reducing hospital-acquired pressure ulcer prevalence: An illustrative analysis. Journal of Nursing Administration, 43(4), 235–241. https://doi.org/10.1097/NNA.0b013e3182895a3c

Stetler, C. B., Morsi, D., Rucki, S., Broughton, S., Corrigan, B., Fitzgerald, J., Giuliano, K., Havener, P., & Sheridan, E. A. (1998). Utilization-focused integrative reviews in a nursing service. Applied Nursing Research, 11(4), 195–206. https://doi.org/10.1016/s0897-1897(98)80329-7

Sullivan, T. (2018, May 5). Physicians and industry: Fix the relationships but keep them going. Policy & Medicine. https://www.policymed.com/2011/02/physicians-and-industry-fix-the-relationships-but-keep-them-going.html

Toronto, C. E., Quinn, B. L., & Remington, R. (2018). Characteristics of reviews published in nursing literature. Advances in Nursing Science, 41(1), 30–40. https://doi.org/10.1097/ANS.0000000000000180

Tunkel, D. E., & Jones, S. L. (2015). Who wrote this clinical practice guideline? Otolaryngology—Head and Neck Surgery, 153(6), 909–913. https://doi.org/10.1177/0194599815606716

Welch, V. A., Akl, E. A., Guyatt, G., Pottie, K., Eslava-Schmalbach, J., Ansari, M. T., de Beer, H., Briel, M., Dans, T., Dans, I., Hultcrantz, M., Jull, J., Katikireddi, S. V., Meerpohl, J., Morton, R., Mosdol, A., Petkovic, J., Schünemann, H. J., Sharaf, R. N., … Tugwell, P. (2017). GRADE equity guidelines 1: Considering health equity in GRADE guideline development: Introduction and rationale. Journal of Clinical Epidemiology, 90, 59–67. https://doi.org/10.1016/j.jclinepi.2017.01.014

Whitehead, D. (2003). Evaluating health promotion: A model for nursing practice. Journal of Advanced Nursing, 41(5), 490–498. https://doi.org/10.1046/j.1365-2648.2003.02556.x

Whittemore, R. (2005). Combining evidence in nursing research: Methods and implications. Nursing Research, 54(1), 56–62. https://doi.org/10.1097/00006199-200501000-00008

Whittemore, R., & Knafl, K. (2005). The integrative review: Updated methodology. Journal of Advanced Nursing, 52(5), 546–553. https://doi.org/10.1111/j.1365-2648.2005.03621.x

Yang, P. H., Hung, C. H., & Chen, Y. C. (2015). The impact of three nursing staffing models on nursing outcomes. Journal of Advanced Nursing, 71(8), 1847–1856. https://doi.org/10.1111/jan.12643

Yoder-Wise, P. S. (2014). Leading and managing in nursing (6th ed.). Mosby.
Johns Hopkins Evidence-Based Practice Model for Nursing and Healthcare Professionals

Appendix D: Hierarchy of Evidence Guide

Research Evidence (Appendix E)

Level I
- Experimental study, randomized controlled trial (RCT)
- Explanatory mixed methods design that includes only a Level I quaNtitative study
- Systematic review of RCTs, with or without meta-analysis

Level II
- Quasi-experimental study
- Explanatory mixed methods design that includes only a Level II quaNtitative study
- Systematic review of a combination of RCTs and quasi-experimental studies, or quasi-experimental studies only, with or without meta-analysis

Level III
- Nonexperimental study
- Systematic review of a combination of RCTs, quasi-experimental, and nonexperimental studies, or nonexperimental studies only, with or without meta-analysis
- Exploratory, convergent, or multiphasic mixed methods studies
- Explanatory mixed methods design that includes only a Level III quaNtitative study
- QuaLitative study
- Systematic review of quaLitative studies, with or without meta-synthesis

Nonresearch Evidence (Appendix F)

Level IV
- Opinion of respected authorities and/or nationally recognized expert committees or consensus panels based on scientific evidence. Includes:
  - Clinical practice guidelines
  - Consensus panels/position statements

Level V
- Based on experiential and non-research evidence.
  Includes:
  - Scoping reviews
  - Integrative reviews
  - Literature reviews
  - Quality improvement, program, or financial evaluation
  - Case reports
  - Opinion of nationally recognized expert(s) based on experiential evidence

Note: Refer to the appropriate Evidence Appraisal Tool (Research [Appendix E] or Nonresearch [Appendix F]) to determine quality ratings.

© 2022 Johns Hopkins Health System/Johns Hopkins School of Nursing

Appendix E: Research Evidence Appraisal Tool

Does this evidence answer the EBP question?
☐ Yes → Continue appraisal
☐ No → STOP; do not continue the evidence appraisal

Article Summary Information
- Article Title:
- Author(s):
- Number:
- Population, size, and setting:
- Publication date:

Complete after appraisal:
- Evidence level and quality rating:
- Study findings that help answer the EBP question:

Article Appraisal Workflow

Is this study:

☐ QuaNtitative (collection, analysis, and reporting of numerical data). Numerical data (how many, how much, or how often) are used to formulate facts, uncover patterns, and generalize to a larger population; they provide observed effects of a program, problem, or condition. Common methods are polls, surveys, observations, and reviews of records or documents. Data are analyzed using statistical tests. → Go to Section I for QuaNtitative leveling.

☐ QuaLitative (collection, analysis, and reporting of narrative data). Rich narrative data are used to gain a deep understanding of phenomena, meanings, perceptions, concepts, and experiences from those experiencing them. Sample sizes are relatively small and determined by the point of redundancy, when no new information is gleaned and key themes are reiterated (data saturation). Data are analyzed using thematic analysis.
Often a starting point for studies when little research exists; results may be used to design empirical studies. Common methods are focus groups, individual interviews (unstructured or semi-structured), and participation/observation. → Go to Section II for QuaLitative leveling.

☐ Mixed methods (results reported both numerically and narratively). A study design (a single study or series of studies) that uses rigorous procedures in collecting and analyzing both quaNtitative and quaLitative data. Note: QuaNtitative survey designs with open-ended questions do not meet the criteria for mixed methods research because those questions are not approached using strict quaLitative methods. Mixed methods studies provide a better understanding of research problems than either a quaNtitative or quaLitative approach alone. → Go to Section III for Mixed Methods leveling.

Section I: QuaNtitative Appraisal

Is this a report of a single research study?
☐ Yes → Continue to the decision tree below.
☐ No → Go to Section I: B.

Level
- Level I studies include randomized controlled trials (RCTs) or experimental studies.
- Level II studies have some degree of investigator control and some manipulation of an independent variable but lack random assignment to groups and may not have a control group.
- Level III studies lack manipulation of an independent variable; can be descriptive, comparative, or correlational; and often use secondary data.

Quality
After determining the level of evidence, determine the quality of evidence using the considerations below:
- Does the researcher identify what is known and not known about the problem? ☐ Yes ☐ No
- Does the researcher identify how the study will address any gaps in knowledge? ☐ Yes ☐ No
- Was the purpose of the study clearly presented? ☐ Yes ☐ No
- Was the literature review current (most sources within the past five years or a seminal study)? ☐ Yes ☐ No
- Was the sample size sufficient based on the study design and rationale?
☐ Yes ☐ No
- If there is a control group:
  - Were the characteristics and/or demographics similar in both the control and intervention groups? ☐ Yes ☐ No ☐ N/A
  - If multiple settings were used, were the settings similar? ☐ Yes ☐ No ☐ N/A
  - Were all groups treated equally except for the intervention group(s)? ☐ Yes ☐ No ☐ N/A
- Are data collection methods described clearly? ☐ Yes ☐ No
- Were the instruments reliable (Cronbach's alpha > 0.70)? ☐ Yes ☐ No ☐ N/A
- Was instrument validity discussed? ☐ Yes ☐ No ☐ N/A
- If surveys or questionnaires were used, was the response rate > 25%? ☐ Yes ☐ No ☐ N/A
- Were the results presented clearly? ☐ Yes ☐ No
- If tables were presented, was the narrative consistent with the table content? ☐ Yes ☐ No ☐ N/A
- Were study limitations identified and addressed? ☐ Yes ☐ No
- Were conclusions based on results? ☐ Yes ☐ No

Section I: QuaNtitative Appraisal (continued)

Quality
Circle the appropriate quality rating below:
A. High quality: Consistent, generalizable results; sufficient sample size for the study design; adequate control; definitive conclusions; consistent recommendations based on a comprehensive literature review that includes thorough reference to scientific evidence.
B. Good quality: Reasonably consistent results; sufficient sample size for the study design; some control; fairly definitive conclusions; reasonably consistent recommendations based on a fairly comprehensive literature review that includes some reference to scientific evidence.
C. Low quality: Little evidence with inconsistent results; insufficient sample size for the study design; conclusions cannot be drawn.

Record findings that help answer the EBP question on page 1.

Section I: QuaNtitative Appraisal (continued)

Is this a summary of multiple sources of research evidence?
☐ Yes → Continue to the decision tree below.
☐ No → Use the Nonresearch Evidence Appraisal Tool (Appendix F).

Level

Quality
After determining the level of evidence, determine the quality of evidence using the considerations below:
- Were the variables of interest clearly identified? ☐ Yes ☐ No
- Was the search comprehensive and reproducible?
  - Key terms stated ☐ Yes ☐ No
  - Multiple databases searched and identified ☐ Yes ☐ No
  - Inclusion and exclusion criteria stated ☐ Yes ☐ No
- Was there a flow diagram that included the number of studies eliminated at each level of review? ☐ Yes ☐ No
- Were details of the included studies presented (design, sample, methods, results, outcomes, strengths, and limitations)? ☐ Yes ☐ No
- Were methods for appraising the strength of evidence (level and quality) described? ☐ Yes ☐ No
- Were conclusions based on results?
  - Results were interpreted ☐ Yes ☐ No
  - Conclusions flowed logically from the research question, results, and interpretation ☐ Yes ☐ No
- Did the systematic review include a section addressing limitations and how they were addressed? ☐ Yes ☐ No

Section I: QuaNtitative Appraisal (continued)

Quality
Circle the appropriate quality rating below:
A. High quality: Consistent, generalizable results; sufficient sample size for the study design; adequate control; definitive conclusions; recommendations consistent with the study's findings that include thorough reference to scientific evidence.
B. Good quality: Reasonably consistent results; sufficient sample size for the study design; some control; fairly definitive conclusions; recommendations reasonably consistent with the study's findings, based on a fairly comprehensive evidence appraisal that includes some reference to scientific evidence.
C. Low quality: Little evidence with inconsistent results; insufficient sample size for the study design; conclusions cannot be drawn.
Record findings that help answer the EBP question on page 1.

Section II: QuaLitative Appraisal

Is this a report of a single research study?
☐ Yes → This is Level III evidence.
☐ No → Go to Section II: B.

Quality
After determining the level of evidence, determine the quality of evidence using the considerations below:
- Was there a clearly identifiable and articulated:
  - Purpose? ☐ Yes ☐ No
  - Research question? ☐ Yes ☐ No
  - Justification for the design and/or theoretical framework used? ☐ Yes ☐ No
- Do participants have knowledge of the subject the researchers are trying to explore? ☐ Yes ☐ No
- Were the characteristics of study participants described? ☐ Yes ☐ No
- Was a verification process used in every step of data analysis (e.g., triangulation, response validation, independent double check, member checking)? (Credibility) ☐ Yes ☐ No
- Does the researcher provide sufficient documentation of their thinking, decisions, and methods related to the study, allowing the reader to follow their decision-making (e.g., how themes and categories were formulated)? (Confirmability) ☐ Yes ☐ No
- Does the researcher provide an accurate and rich description of findings by providing the information necessary to evaluate the analysis of data? (Fittingness) ☐ Yes ☐ No
- Does the researcher acknowledge and/or address their own role and potential influence during data collection? ☐ Yes ☐ No
- Was sampling adequate, as evidenced by achieving data saturation? ☐ Yes ☐ No
- Does the researcher provide illustrations from the data? ☐ Yes ☐ No
  - If yes, do the provided illustrations support the conclusions? ☐ Yes ☐ No
- Is there congruency between the findings and the data? ☐ Yes ☐ No
- Is there congruency between the research methodology and:
  - The research question(s)? ☐ Yes ☐ No
  - The methods used to collect data? ☐ Yes ☐ No
  - The interpretation of results? ☐ Yes ☐ No
- Are the discussion and conclusions congruent with the purpose and objectives, and supported by literature?
☐ Yes ☐ No
- Are conclusions drawn based on the data collected (e.g., the product of the observations or interviews)? ☐ Yes ☐ No

Section II: QuaLitative Appraisal (continued)

Quality
Circle the appropriate quality rating below:
A/B. High/Good quality: The report discusses efforts to enhance or evaluate the quality of the data and the overall inquiry in sufficient detail, and it describes the specific techniques used to enhance the quality of the inquiry. Evidence of at least half of the following is found in the report:
- Transparency: Describes how information was documented to justify decisions, how data were reviewed by others, and how themes and categories were formulated.
- Diligence: Reads and rereads data to check interpretations; seeks opportunities to find multiple sources to corroborate evidence.
- Verification: The process of checking, confirming, and ensuring methodologic coherence.
- Self-reflection and self-scrutiny: Being continuously aware of how a researcher's experiences, background, or prejudices might shape and bias analysis and interpretations.
- Participant-driven inquiry: Participants shape the scope and breadth of questions; analysis and interpretation give voice to those who participated.
- Insightful interpretation: Data and knowledge are linked in meaningful ways to relevant literature.
C. Low quality: Lack of clarity and coherence of reporting; lack of transparency in reporting methods; poor interpretation of data that offers little insight into the phenomena of interest; few, if any, of the features listed for high/good quality.

Record findings that help answer the EBP question on page 1.

Section II: QuaLitative Appraisal (continued)

Is this a summary of multiple sources of qualitative research evidence with a comprehensive search strategy and rigorous appraisal method (meta-synthesis)?
☐ Yes → This is Level III evidence.
☐ No → Use the Nonresearch Evidence Appraisal Tool (Appendix F).

Quality
After determining the level of evidence, determine the quality of evidence using the considerations below:
- Were the search strategy and criteria for selecting primary studies clearly defined? ☐ Yes ☐ No
- Was there a description of a systematic and thorough process for how data were analyzed? ☐ Yes ☐ No
- Were methods described for comparing findings from each study? ☐ Yes ☐ No
- Were methods described for interpreting data? ☐ Yes ☐ No
- Were sufficient data presented to support the interpretations? ☐ Yes ☐ No
- Did the synthesis reflect:
  - New insights? ☐ Yes ☐ No
  - Discovery of essential features of the phenomena? ☐ Yes ☐ No
  - A fuller understanding of the phenomena? ☐ Yes ☐ No
- Are findings clearly linked to, and do they match, the data? ☐ Yes ☐ No
- Are findings connected to the purpose, data collection, and analysis? ☐ Yes ☐ No
- Are the discussion and conclusions connected to the purpose and objectives and, where possible, supported by literature? ☐ Yes ☐ No
- Did the authors clearly describe how they arrived at the interpretation of the findings? ☐ Yes ☐ No

Circle the appropriate quality rating below:
A/B. High/Good quality: The report discusses efforts to enhance or evaluate the quality of the data and the overall inquiry in sufficient detail, and it describes the specific techniques used to enhance the quality of the inquiry. Evidence of some or all of the following is found in the report:
- Transparency: Describes how information was documented to justify decisions, how data were reviewed by others, and how themes and categories were formulated.
- Diligence: Reads and rereads data to check interpretations; seeks opportunities to find multiple sources to corroborate evidence.
- Verification: The process of checking, confirming, and ensuring methodologic coherence.
- Self-reflection and self-scrutiny: Being continuously aware of how a researcher's experiences, background, or prejudices might shape and bias analysis and interpretations.
- Participant-driven inquiry: Participants shape the scope and breadth of questions; analysis and interpretation give voice to those who participated.
- Insightful interpretation: Data and knowledge are linked in meaningful ways to relevant literature.
C. Low quality: Lack of clarity and coherence of reporting; lack of transparency in reporting methods; poor interpretation of data that offers little insight into the phenomena of interest; few, if any, of the features listed for high/good quality.

Record findings that help answer the EBP question on page 1.

Section III: Mixed Methods Appraisal

You will need to appraise both parts of the study independently before appraising the study as a whole. Evaluate the quaNtitative part of the study using Section I and the quaLitative part using Section II, then return here to complete the appraisal.

Level
The level of mixed methods evidence is based on the sequence of data collection. A design in which quaNtitative data collection is followed by quaLitative (explanatory design) takes the level of the quaNtitative portion. All other designs (exploratory, convergent, or multiphasic) are Level III evidence.
- Explanatory sequential designs collect quaNtitative data first, followed by quaLitative.
- Exploratory sequential designs collect quaLitative data first, followed by quaNtitative.
- Convergent parallel designs collect quaNtitative and quaLitative data at the same time.
- Multiphasic designs collect quaLitative and quaNtitative data over more than one phase.

Quality
After determining the level of evidence, determine the quality of evidence using the considerations below:
- Was the mixed methods research design relevant to address both the quaNtitative and quaLitative research questions (or objectives)?
☐ Yes ☐ No
- Was the research design relevant to address the quaNtitative and quaLitative aspects of the mixed methods question (or objective)? ☐ Yes ☐ No

Circle the appropriate quality rating below:
A. High quality: Contains high-quality quaNtitative and quaLitative study components; highly relevant study design; relevant integration of data or results; and careful consideration of the limitations of the chosen approach.
B. Good quality: Contains good-quality quaNtitative and quaLitative study components; relevant study design; moderately relevant integration of data or results; and some discussion of the limitations of integration.
C. Low quality: Contains low-quality quaNtitative and quaLitative study components; study design not relevant to the research questions or objectives; poorly integrated data or results; and no consideration of the limits of integration.

Record findings that help answer the EBP question on page 1.
Appendix G: Individual Evidence Summary Tool

EBP Question:

Table columns:
- Reviewer name(s)
- Article number
- Author, date, and title
- Type of evidence
- Population, size, and setting
- Intervention
- Findings that help answer the EBP question
- Measures used
- Limitations
- Evidence level and quality
- Notes to team

Directions for use of the Individual Evidence Summary Tool

Purpose: Use this form to document and collate the results of the review and appraisal of each piece of evidence in preparation for evidence synthesis. The table headers indicate important elements of each article that will contribute to the synthesis process. The data in each cell should be complete enough that the other team members can gather all relevant information related to the evidence without having to go to each source article. See Chapter 11, Lessons from Practice, for examples of completed tools.

Reviewer name(s): Record the member(s) of the team who are providing the information for each article. This provides tracking if there are follow-up items or additional questions on an individual piece of evidence.

Article number: Assign a number to each piece of evidence included in the table. This organizes the individual evidence summary and provides an easy way to reference articles.

Author, date, and title: Record the last name of the first author of the article, the publication/communication date, and the title. This helps track articles throughout the literature search, screening, and review process. It is also helpful when someone has authored more than one publication included in the review.

Type of evidence: Indicate the type of evidence for each source. This should be descriptive of the study or project design (e.g., randomized controlled trial, meta-analysis, mixed methods, qualitative, systematic review, case study, literature review) and not simply the level on the evidence hierarchy.
Population, size, and setting: For research evidence, provide a quick view of the population, number of participants, and study location. For non-research evidence, population refers to the target audience, patient population, or profession. Non-research evidence may or may not have a sample size and/or location as found with research evidence.

Intervention: Record the intervention(s) implemented or discussed in the article. This should relate to the intervention or comparison elements of your PICO question.

Findings that help answer the EBP question: List findings from the article that directly answer the EBP question. These should be succinct statements that provide enough information that the reader does not need to return to the original article. Avoid directly copying and pasting from the article.

Measures used: These are the measures and/or instruments (e.g., counts, rates, satisfaction surveys, validated tools, subscales) the authors used to determine the answer to the research question or the effectiveness of their intervention. Consider collecting the measures identified in the evidence during implementation of the EBP team's project.

Limitations: Provide the limitations of the evidence, both as listed by the authors and from your own assessment of any flaws or drawbacks. Consider the methodology, quality of reporting, and generalizability to the population of interest. Limitations should be apparent from the team's appraisals using the Research and Nonresearch Evidence Appraisal Tools (Appendices E and F). It can be helpful to consider the reasons an article did not receive a "high" quality rating, because these reasons are limitations identified by the team.

Evidence level and quality: Using the Research and Nonresearch Evidence Appraisal Tools (Appendices E and F), record the level (I-V) and quality (A, B, or C) of the evidence. When possible, at least two reviewers should determine the level and quality.
Notes to team: The team uses this section to keep track of items important to the EBP process that are not captured elsewhere on this tool. Consider items that will be helpful for easy reference when conducting the evidence synthesis.

© 2021 Johns Hopkins Health System/Johns Hopkins School of Nursing
