Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data
Names: Rubin, Allen, author. | Bellamy, Jennifer, author.
Title: Practitioner’s guide to using research for evidence-informed practice / Allen Rubin, Jennifer Bellamy.
Other titles: Practitioner’s guide to using research for evidence-based practice
Description: Third edition. | Hoboken, NJ : Wiley, 2022. | Preceded by Practitioner’s guide to using research for evidence-based practice / Allen Rubin and Jennifer Bellamy. 2nd ed. c2012. | Includes bibliographical references and index.
Identifiers: LCCN 2021041536 (print) | LCCN 2021041537 (ebook) | ISBN 9781119858560 (paperback) | ISBN 9781119858577 (adobe pdf) | ISBN 9781119858584 (epub)
Subjects: MESH: Evidence-Based Practice | Social Work | Evaluation Studies as Topic | Outcome and Process Assessment, Health Care
Classification: LCC RC337 (print) | LCC RC337 (ebook) | NLM WB 102.5 | DDC 616.89/140072—dc23
LC record available at https://lccn.loc.gov/2021041536
LC ebook record available at https://lccn.loc.gov/2021041537
Cover Design: Wiley
Cover Image: © Dilen/Shutterstock
Approximately a decade has elapsed since the second edition of this book was published. During that time there have been some important developments pertaining to evidence-informed practice (EIP), and those developments spurred us to write a new, third edition of our book. One such development was a growing preference for replacing the term evidence-based practice (EBP) with the term EIP. We changed our title accordingly, and in Chapter 1 we explain why the latter term is preferred. The development of effective vaccines to fight the COVID-19 pandemic of 2020–2021 also provided an example, cited at the beginning of this book, that we hope will help readers shed any ambivalence they may have about the relevance of research to helping people.
Another significant change is the growing commitment among social work and other human service practitioners to address social justice issues. Racial injustice, in particular, has become a key focus in our missions, especially in the aftermath of the recent police murders of innocent Black people. Consequently, we added a chapter that focuses exclusively on social justice and how to take an EIP approach to pursuing it. In fact, we have added attention to that issue in our first chapter, which now includes a section on Black Lives Matter and how President Barack Obama took an EIP approach when formulating his policy position regarding how to effectively reduce incidents of police misconduct and violence.
Yet another recent development has been the recognition of how rarely practitioners are able to evaluate their practice with designs that meet all of the criteria for causal inferences. Consequently, we have added much more attention to the degree of certainty needed for making practice and policy decisions when the evidence sufficiently supports the plausibility of causality even though some, but not all, of the criteria for inferring causality are met. In that connection, we have added content on within-group effect-size benchmarks, which practitioners or agencies can use to evaluate how adequately they are implementing evidence-supported interventions.
Organization and Special Features
Part I contains three chapters that provide an overview of evidence-informed practice (EIP) and a backdrop for the rest of the book.
Chapter 1 introduces readers to the meaning of EIP, its history, types of EIP questions, and the development of an EIP outlook. New material includes a section on research ethics and a section on EIP regarding social justice and Black Lives Matter.
Chapter 2 covers the steps in the EIP process, including new material on strategies for overcoming feasibility obstacles to engaging in that process.
Chapter 3 delves into research hierarchies and philosophical objections to the traditional scientific method, including a critical look at how some recent politicians have preferred their own “alternative facts” to scientific facts that they did not like.
Part II contains five chapters on critically appraising studies that evaluate the effectiveness of interventions.
Chapter 4 covers criteria for making causal inferences, including material on internal validity, measurement issues, statistical chance, and external validity. Major new additions to this chapter include sections on inferring the plausibility of causality and on the degree of certainty needed in making EIP decisions when ideal experimental outcome studies are not available or not feasible. To illustrate that content, we have added two more study synopses to the chapter. Another significant change was the removal of several pages on statistical significance, which we moved to a new, penultimate chapter on data analysis. We felt that those pages delved too far into the weeds of statistical significance for this early in the book and thus might overwhelm readers.
Chapter 5 helps readers learn how to critically appraise experiments. We were happy with this chapter and made only some minor tweaks to it.
Chapter 6, on critically appraising quasi-experiments, also had few changes, the main one being more attention to the potential value of pilot studies for assessing the plausibility of causality and the degree of certainty needed in making practice decisions.
Chapter 7, on critically appraising time-series designs and single-case designs, has been tweaked in various ways that we think will enhance its value to readers. For example, we added several examples of time-series studies evaluating the impact of police reform policies aimed at reducing incidents of police violence.
Chapter 8 examines how to critically appraise systematic reviews and meta-analyses. The main changes in this chapter include increased coverage of odds ratios and risk ratios.
Part III contains two chapters on critically appraising studies for alternative EIP questions.
Chapter 9 does so regarding nonexperimental quantitative studies, including surveys, longitudinal studies, and case-control studies. A new addition to this chapter discusses how some survey results can have value even when based on nonprobability samples.
Chapter 10 describes qualitative research and frameworks for critically appraising qualitative studies. Additional details on qualitative methods, as well as alternative frameworks to grounded theory, have been added to this chapter.
Part IV contains two chapters on assessment and monitoring in EIP.
Chapter 11 covers critically appraising, selecting, and constructing assessment instruments. In our previous edition, this chapter looked only at appraising and selecting instruments. New in this edition is a section on constructing instruments.
Chapter 12 covers monitoring client progress. New in this edition is more attention to factors that impair the ability of practitioners in service-oriented settings to implement evidence-supported interventions with adequate fidelity, as well as a new section on the use of within-group effect-size benchmarks to evaluate that adequacy.
Part V contains two new chapters on additional aspects of EIP not fully covered in the previous sections.
Chapter 13 explains how to appraise and conduct data analysis in the EIP process. Some of the material in this chapter was moved from the previous edition's Chapter 4, and other material appeared in an appendix on statistics in the previous edition. A major new section, which did not appear in our previous edition, shows how to calculate within-group effect sizes and compare them to benchmarks derived from meta-analyses of randomized clinical trials (RCTs), so that practitioners and agencies can see whether their treatment recipients appear to be benefiting from treatment approximately as much as the recipients in those RCTs did.
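As a rough preview of that section (this sketch uses one common formulation of a pre-post within-group effect size and simplified notation, not necessarily the exact procedure or benchmarks presented in Chapter 13), such an effect size can be computed from the mean pretreatment and posttreatment scores of a group of treatment recipients and the standard deviation of their pretreatment scores:

\[
d_{\text{within}} = \frac{\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}}{SD_{\text{pre}}}
\]

The resulting value can then be compared with a benchmark, such as the average within-group effect size among treatment recipients in relevant RCT meta-analyses, to gauge whether an agency's recipients appear to be benefiting approximately as much as the RCT recipients did.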