Allen Rubin - Practitioner's Guide to Using Research for Evidence-Informed Practice


The latest edition of an essential text to help students and practitioners distinguish between research studies that should and should not influence practice decisions 
Practitioner's Guide to Using Research for Evidence-Informed Practice, Third Edition


In an SR, independent and unbiased researchers carefully search for every published and unpublished report available that deals with a particular answerable question. These reports are then critically analyzed, and – whether positive or negative, whether consistent or inconsistent – all results are assessed, as are factors such as sample size and representativeness, whether the outcome measures were valid, whether the interventions were based on replicable protocols or treatment manuals, what the magnitude of observed effects were, and so forth. (p. 173)

Although systematic reviews often include and critically analyze every study they find, not just randomized experiments, they should give more weight to randomized experiments than to less controlled studies when developing their conclusions. Some systematic reviews, such as those registered with the Campbell or Cochrane collaborations, require researchers to follow strict standards governing both the methods used to search for studies and the quality criteria that determine which studies are included in or excluded from the review.

A more statistically oriented type of systematic review is called meta-analysis. Meta-analyses often include only randomized experiments, but sometimes include quasi-experimental designs and other types of studies as well. The main focus of meta-analysis is to aggregate the statistical findings of different studies that assess the effectiveness of a particular intervention. A prime aim of meta-analysis is to calculate the average strength of an intervention's effect by aggregating the effect strength reported in each individual study. Meta-analyses also can assess the statistical significance of the aggregated results. When meta-analyses include studies that vary in methodological rigor, they also can assess whether the aggregated findings differ according to the quality of the methodology. The most powerful approach to a systematic review combines rigorous and transparent search methods, clear criteria for the inclusion and exclusion of studies, and statistical aggregation of data.
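To make the aggregation arithmetic concrete, here is a minimal Python sketch of the simplest common approach, fixed-effect (inverse-variance) pooling. The function name pooled_effect and all effect sizes and variances are invented for illustration; real meta-analyses typically use dedicated software and often random-effects models instead.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled effect size.

    Each study is weighted by the inverse of its variance, so larger,
    more precise studies contribute more to the average.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))       # standard error of the pooled estimate
    z = pooled / se                          # z-statistic for H0: true effect = 0
    p = math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value from the normal distribution
    return pooled, se, p

# Hypothetical standardized mean differences (Cohen's d) and their
# variances from five made-up outcome studies of one intervention.
d_values = [0.42, 0.55, 0.31, 0.60, 0.48]
d_variances = [0.04, 0.02, 0.05, 0.03, 0.02]

d_bar, se, p = pooled_effect(d_values, d_variances)
print(f"pooled d = {d_bar:.2f} (SE {se:.2f}), p = {p:.4f}")
```

Assessing whether findings differ by methodological quality amounts to running the same aggregation separately over the high-quality and low-quality subsets of studies and comparing the two pooled estimates.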

Some meta-analyses will compare different interventions that address the same problem. For example, a meta-analysis might calculate the average strength of treatment effect across experiments that evaluate the effectiveness of exposure therapy in treating PTSD, then do the same for the effectiveness of eye movement desensitization and reprocessing (EMDR) in treating PTSD, and then compare the two results as a basis for considering which treatment has a stronger impact on PTSD.
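Comparing two treatments in this way boils down to pooling each treatment's studies separately and then testing the difference between the two pooled estimates. A minimal sketch under the same fixed-effect assumptions, with all PTSD effect sizes invented for illustration:

```python
import math

def pooled(effects, variances):
    """Inverse-variance weighted mean effect and its standard error."""
    w = [1.0 / v for v in variances]
    est = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    return est, math.sqrt(1.0 / sum(w))

# Hypothetical effect sizes for two PTSD treatments (all numbers invented).
exposure_d, exposure_var = [0.90, 1.10, 0.80], [0.05, 0.04, 0.06]
emdr_d, emdr_var = [0.85, 1.00, 0.95], [0.05, 0.05, 0.04]

e_est, e_se = pooled(exposure_d, exposure_var)
m_est, m_se = pooled(emdr_d, emdr_var)

# z-test on the difference between the two independent pooled estimates.
z = (e_est - m_est) / math.sqrt(e_se**2 + m_se**2)
print(f"exposure d = {e_est:.2f}, EMDR d = {m_est:.2f}, z = {z:.2f}")
```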

You can find some excellent sources for unbiased systematic reviews and meta-analyses in Table 2.2 in Chapter 2. Later in this book, Chapter 8 examines how to critically appraise systematic reviews and meta-analyses. Critically appraising them is important because not all of them are unbiased or of equal quality. It is important to remember that to merit a high level on the evidentiary hierarchy, an experiment, systematic review, or meta-analysis needs to be conducted in an unbiased manner. In that connection, what we said earlier about Table 3.1 is very important, and thus merits repeating here:

This hierarchy assumes that each type of study is well designed. If not well designed, then a particular study would merit a lower level on the hierarchy.

For example, a randomized experiment with egregiously biased measurement would not deserve to be at Level 3 and perhaps would be so fatally flawed as to merit dropping to the lowest level. The same applies to a quasi-experiment with a severe vulnerability to a selectivity bias.

3.3.5 Matrix of Research Designs by Research Questions

As we have discussed, different research designs are better suited to certain types of EIP questions. As an alternative to presenting multiple research hierarchies, some researchers have represented the fit between various EIP questions and the research designs best equipped to answer them with a matrix. The matrix format emphasizes that multiple designs can be used for each type of question, but that some designs are stronger than others depending on the question type. Table 3.2 is an example of such a matrix that we've created using the four EIP questions and the study designs briefly explored in this chapter.

The larger checks indicate the study designs that are best suited to answer each EIP question, and the smaller checks indicate designs that are less well suited, or less commonly used, to answer each EIP question. Designs without a large or small check do not provide good evidence for answering that EIP question. For example, when answering questions about factors that predict desirable and undesirable outcomes, correlational studies are the design of choice and therefore are marked with a large check. Systematic reviews that combine the results of these types of studies to answer prognosis and risk questions are likewise marked with a large check. Sometimes the results of experimental or quasi-experimental studies are used to determine risk and prognosis as well, especially in large studies that collect extensive data about participants in order to examine factors related to risks and benefits, not just the treatments received. Each of these designs is therefore marked with a small check. Qualitative studies and single-case designs are neither well suited to answering risk and prognosis questions nor commonly used for this purpose, so they are not marked with a check.

TABLE 3.2 Matrix of Research Designs by Research Questions

(✓✓ = best suited; ✓ = less well suited or less commonly used)

| EIP Question | Qualitative | Experimental | Quasi-Experimental | Single Case | Correlational | Systematic Reviews or Meta-analyses |
| What factors predict desirable and undesirable outcomes? | | ✓ | ✓ | | ✓✓ | ✓✓ |
| What can I learn about clients, service delivery, and targets of intervention from the experiences of others? | ✓✓ | | | | | |
| What assessment tools should be used? | | | | | ✓✓ | |
| What intervention, program or policy has the best effects? | | ✓✓ | ✓ | ✓ | | ✓✓ |
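One way to see the matrix's logic is as a lookup from question type to suitable designs. Here is a minimal Python sketch that simply restates Table 3.2; the DESIGN_MATRIX name and the short question-type labels are our own shorthand, not terms from the book.

```python
# Table 3.2 encoded as a lookup from EIP question type to research designs,
# split into best-suited ("large check") and less-suited ("small check").
DESIGN_MATRIX = {
    "prognosis/risk": {
        "best": ["correlational", "systematic review/meta-analysis"],
        "also": ["experimental", "quasi-experimental"],
    },
    "experiences of others": {
        "best": ["qualitative"],
        "also": [],
    },
    "assessment tools": {
        "best": ["correlational"],
        "also": [],
    },
    "intervention effects": {
        "best": ["experimental", "systematic review/meta-analysis"],
        "also": ["quasi-experimental", "single-case"],
    },
}

def preferred_designs(question_type: str) -> list[str]:
    """Return the best-suited designs for a given EIP question type."""
    return DESIGN_MATRIX[question_type]["best"]

print(preferred_designs("intervention effects"))
```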

You should keep in mind that this is not an exhaustive list of types of research studies you might encounter and that this table assumes that these study designs are executed with a high level of quality. As you read on in this book, you'll learn a lot more about these and other study designs and how to judge the quality of the research evidence related to specific EIP questions.

3.3.6 Philosophical Objections to the Foregoing Hierarchy: Fashionable Nonsense

Several decades ago, it started to become fashionable among some academics to raise philosophical objections to the traditional scientific method and the pursuit of logic and objectivity in trying to depict social reality. Among other things, they dismissed the value of using experimental design logic and unbiased, validated measures as ways to assess the effects of interventions.

Although various writings have debunked their arguments as "fashionable nonsense" (Sokal & Bricmont, 1998), some writers continue to espouse them. You might encounter some of these arguments – arguments that depict the foregoing hierarchy as obsolete. Our feeling is that these controversies have been largely laid to rest, so we describe them only briefly here.
