Iain K. Crombie - Evidence in Medicine

Evidence in Medicine: summary and description

High-quality evidence is the foundation for effective treatment in medicine. As the vast amount of published medical evidence continues to grow, concerns about the quality of many studies are increasing. 
Evidence in Medicine is a much-needed resource that addresses the ‘medical misinformation mess’ by assessing the flaws in the research environment. This authoritative text identifies and summarises the many factors that have produced the current problems in medical research, including bias in randomised controlled trials, questionable research practices, falsified data, manipulated findings, and more.
This volume brings together the findings from meta-research studies and systematic reviews to explore the quality of clinical trials and other medical research, explaining the character and consequences of poor-quality medical evidence using clear language and a wealth of supporting references. The text suggests planning strategies to transform the research process and provides an extensive list of the actions that could be taken by researchers, regulators, and other key stakeholders to address defects in medical evidence. This timely volume: 
- Enables readers to select reliable studies and recognise misleading research
- Highlights the main types of biased and wasted studies
- Discusses how incentives in the research environment influence the quality of evidence
- Identifies the problems researchers need to guard against in their work
- Describes the scale of poor-quality research and explores why the problems are widespread
- Includes a summary of key findings on poor-quality research and a listing of proposed initiatives to improve research evidence
- Contains extensive citations to references, reviews, commentaries, and landmark studies
Evidence in Medicine is required reading for all researchers who create evidence, funders and publishers of medical research, students who conduct their own research studies, and healthcare practitioners wanting to deliver high-quality, evidence-based care.

Evidence in Medicine — excerpt

Multiple imputation rests on an assumption about the nature of the missing data. Termed ‘missing at random’, the assumption is that the missing outcomes can be predicted from the other data in the study [60]. Sensitivity analyses are recommended to explore how the results depend on the assumptions made about the missing data [59]. Although not a perfect solution, multiple imputation is better than other methods of dealing with missing data, such as complete case analysis or last observation carried forward [58].
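
As an illustration only (the dataset, variable names and model below are invented, not taken from the book), a chained-equations analysis might look like the following Python sketch, which assumes the statsmodels MICE implementation is available and pools the treatment estimate across imputed datasets:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

# Hypothetical trial data: outcome depends on treatment and a baseline score.
rng = np.random.default_rng(0)
n = 300
treatment = rng.integers(0, 2, n).astype(float)
baseline = rng.normal(0.0, 1.0, n)
outcome = 0.5 * treatment + 0.8 * baseline + rng.normal(0.0, 1.0, n)
df = pd.DataFrame({"outcome": outcome, "treatment": treatment, "baseline": baseline})

# Make roughly 20% of outcomes missing, more often for patients with low
# baseline scores -- missingness that can be predicted from observed data
# ('missing at random').
missing = rng.random(n) < (0.10 + 0.20 * (baseline < 0))
df.loc[missing, "outcome"] = np.nan

imputer = mice.MICEData(df)                        # chained-equation imputation
analysis = mice.MICE("outcome ~ treatment + baseline", sm.OLS, imputer)
result = analysis.fit(10, 20)                      # 10 burn-in cycles, 20 imputed datasets
print(result.summary())                            # estimates pooled across imputations

# A simple sensitivity check: compare with a complete-case analysis.
cc = sm.OLS.from_formula("outcome ~ treatment + baseline", data=df.dropna()).fit()
print(cc.params["treatment"])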

Modified Intention to Treat

The term modified intention to treat (mITT) is commonly used to describe the analysis of trial data [61, 62]. It has no formal definition [63], but usually involves the deliberate exclusion of some participants from the analysis at some time after randomisation. Patients can be excluded for several reasons: the results of the baseline assessment; results of a post‐baseline assessment; the amount of treatment received; or failure to obtain the outcome measures [63]. Individual trials could employ one or more of these reasons to exclude patients. The impact of mITT on estimates of treatment benefit varies: one review found that, compared to intention to treat (ITT) analyses, the modified method inflated treatment effects [64], whereas another study found no difference between ITT and mITT [65]. The practice of excluding patients after randomisation has been widely criticised because of its potential to introduce bias [62, 66, 67].
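
A small invented example (not from the book) shows how post-randomisation exclusions change the analysis set; the two exclusion rules used here, no study drug received and no post-baseline assessment, are chosen purely for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "arm": rng.integers(0, 2, n),                   # 1 = new treatment, 0 = control
    "received_dose": rng.random(n) > 0.10,          # 10% never received the drug
    "post_baseline_visit": rng.random(n) > 0.15,    # 15% had no follow-up assessment
})
df["success"] = rng.random(n) < np.where(df["arm"] == 1, 0.55, 0.45)

def risk_difference(d: pd.DataFrame) -> float:
    """Difference in the proportion of successes between the two arms."""
    p = d.groupby("arm")["success"].mean()
    return p[1] - p[0]

itt = df                                                      # everyone as randomised
mitt = df[df["received_dose"] & df["post_baseline_visit"]]    # post-randomisation exclusions

print(f"ITT  analysis: n={len(itt)},  risk difference={risk_difference(itt):+.3f}")
print(f"mITT analysis: n={len(mitt)}, risk difference={risk_difference(mitt):+.3f}")
```

Because the exclusions depend on events that happen after randomisation, the two arms of the mITT set are no longer guaranteed to be comparable, which is why the practice attracts criticism.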

OTHER METHODOLOGICAL CONCERNS

Unregistered Trials and Bias

Trial registration ‘was introduced in an effort to reduce publication bias and raise the quality of clinical research’ [68]. Although registration is strongly recommended, a recent study showed that only 53% of trials had done so [69]. An analysis of over 1,100 trials explored the factors associated with registration. Compared with registered studies, unregistered trials are more likely to be of lower methodological quality. For example, they are less likely to have a defined primary outcome (48% vs 88%), more likely to have unreported or inadequate allocation concealment (76% vs 55%), more likely not to report whether the trial was blinded (32% vs 15%), and more likely not to report details of attrition (67% vs 29%) [70]. After adjustment for these methodological weaknesses, the unregistered trials showed a modest increase in average effect size compared with the registered studies. Another study, which evaluated 322 trials, found a similar modest effect on treatment effect estimates [71]. Unregistered trials may therefore give biased estimates of treatment effect.

Small Studies

Small trials often give misleading estimates of treatment benefit. Several review studies, each of which examined hundreds of trials, have shown that, on average, small trials report greater effect sizes than larger ones [72–74]. Two explanations have been suggested for this finding: small studies with negative findings may be less likely to be published, and small studies may be of poorer methodological quality and more prone to bias [73]. Most likely both factors contribute to the bias.

A related phenomenon is that unusually large treatment effects are most commonly reported by small trials [75]. These often occur in the first trial of a new treatment, with subsequent trials showing much smaller effects [76–78]. A possible explanation is that small studies are much more likely to be influenced by the play of chance [79]. A few more events (e.g. deaths) in one treatment group, or a few fewer in the other, can have a large effect on the results of a small study. When averaged across many trials, chance effects cancel out, but in an individual study chance can generate large, misleading effect sizes.
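
The point can be shown with a simple simulation (illustrative only; the event rates and trial sizes are invented). With the same true benefit, small trials scatter widely around the truth while large trials cluster close to it:

```python
import numpy as np

rng = np.random.default_rng(1)
p_control, p_treatment = 0.30, 0.25        # true event risks: a genuine, modest benefit

def simulated_risk_differences(n_per_arm: int, n_trials: int = 5000) -> np.ndarray:
    """Estimated risk difference (treatment minus control) for many simulated trials."""
    control_events = rng.binomial(n_per_arm, p_control, n_trials)
    treatment_events = rng.binomial(n_per_arm, p_treatment, n_trials)
    return (treatment_events - control_events) / n_per_arm

for n_per_arm in (25, 100, 1000):
    rd = simulated_risk_differences(n_per_arm)
    print(f"n={n_per_arm:4d} per arm: mean={rd.mean():+.3f}, "
          f"5th to 95th percentile {np.percentile(rd, 5):+.3f} to {np.percentile(rd, 95):+.3f}")
```

On average the estimates centre on the true risk difference of about -0.05, but with 25 patients per arm an individual trial can easily suggest a large benefit, no effect, or even harm.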

The fragility index is used to show just how susceptible statistically significant results are to the play of chance [80]. It measures how few patients would need to have had a different outcome to change a significant treatment effect into a non-significant one. Reviews have found that for many trials the index is one, i.e. if a single patient had had a different outcome the finding would not have been statistically significant [81, 82]. In general, the smaller the value of the index, the more fragile the study. Several reviews of trials have reported median index values of 1, 2, 3 and 4 [82–84], indicating that, for half of the trials in these reviews, a different outcome in just a few patients would change the statistical significance. Other reviews have found slightly larger median fragility indices of 5 and 8 [80, 85].
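
A sketch of the calculation is shown below (illustrative only; the function and the example counts are invented, and it uses Fisher’s exact test from SciPy, the test commonly used for the index):

```python
from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Minimum number of patients in the arm with fewer events whose outcome
    would have to change (non-event to event) for Fisher's exact test to
    cross the significance threshold. Returns None if the result was not
    significant to begin with."""
    if events_a > events_b:                      # work on the arm with fewer events
        events_a, n_a, events_b, n_b = events_b, n_b, events_a, n_a
    _, p = fisher_exact([[events_a, n_a - events_a],
                         [events_b, n_b - events_b]])
    if p >= alpha:
        return None
    switches = 0
    while p < alpha and events_a < n_a:
        events_a += 1                            # one more patient has the event
        switches += 1
        _, p = fisher_exact([[events_a, n_a - events_a],
                             [events_b, n_b - events_b]])
    return switches

# Hypothetical trial: 4/100 events on treatment vs 14/100 on control.
print(fragility_index(4, 100, 14, 100))
```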

Low Power

Small studies are often referred to as having low power. In medical research, statistical power is the probability that a study will detect a significant effect of treatment if one truly exists. A power of 80% is recommended, but few trials in medicine achieve this: in an overview of 136,000 trials, only 9% of those published between 2010 and 2014 did so [86].

A consequence of low power is that spuriously significant results are more likely to occur [87]. Another problem is that, if there is a real benefit of treatment, small studies are unlikely to detect the benefit as being statistically significant. These apparently conflicting statements are true because chance is even‐handed; it will make some interventions appear to have a larger effect size than they do in reality, and will sometimes make the effect seem smaller than it really is [79]. Inflated effect sizes are more likely to be significant and reduced ones less so. The result is that small trials have much more heterogeneous effect sizes than large ones [88].
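
A simulation illustrates this behaviour (illustrative only; the effect size, outcome standard deviation and sample size are invented). With a modest true effect and a small trial, significance is rarely reached, and the trials that do reach it tend to overestimate the benefit:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
true_effect, sd = 0.2, 1.0          # modest true benefit on a continuous outcome
n_per_arm, n_trials = 30, 20000     # deliberately underpowered trials

# Draw the observed mean difference for each trial directly from its
# sampling distribution, then apply a two-sided z-test.
se = sd * np.sqrt(2.0 / n_per_arm)
observed = rng.normal(true_effect, se, n_trials)
p_values = 2.0 * norm.sf(np.abs(observed) / se)
significant = p_values < 0.05

print(f"power (share of trials with p < 0.05): {significant.mean():.2f}")
print(f"mean estimated effect, all trials:       {observed.mean():.2f}")
print(f"mean estimated effect, significant only: {observed[significant].mean():.2f}")
```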

To ensure that a trial has adequate power, researchers should carry out a formal sample size calculation, specifying the likely size of the treatment effect, the required power and an estimate of the variance of the outcome. Sample size calculations are often not reported: for example, only 41% of low back pain trials [89] and 35% of neurosurgical trials [90] reported this calculation. Even when the calculation is reported, researchers often overestimate the possible benefit of the treatment and end up with sample sizes that are too small to detect a clinically realistic effect [89, 91].
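
For a binary outcome the calculation takes only a few lines (an illustrative sketch; the event rates are invented and the statsmodels power routines are assumed to be available):

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed design values: 40% event rate on control, a clinically realistic
# reduction to 30%, two-sided alpha of 0.05 and 80% power.
realistic = proportion_effectsize(0.40, 0.30)       # Cohen's h for two proportions
n_realistic = NormalIndPower().solve_power(effect_size=realistic,
                                           alpha=0.05, power=0.80,
                                           alternative="two-sided")
print(f"Realistic effect:  about {n_realistic:.0f} patients per arm")

# An over-optimistic assumption (halving the event rate) makes the required
# sample size look far smaller -- a common route to an underpowered trial.
optimistic = proportion_effectsize(0.40, 0.20)
n_optimistic = NormalIndPower().solve_power(effect_size=optimistic,
                                            alpha=0.05, power=0.80,
                                            alternative="two-sided")
print(f"Optimistic effect: about {n_optimistic:.0f} patients per arm")
```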

Industry‐Funded Trials

The research funded by drug companies ‘produces innovative, life-saving medicines’, but it also has a darker side [92]. A common finding is that trials funded by the pharmaceutical industry are more likely to report that the treatment was beneficial [93, 94]. One review concluded that ‘pharmaceutical company sponsorship is strongly associated with results that favour the sponsor's interests’ [95]. Another author commented that ‘big pharma is more concerned about commercial interests than patient harm’ [96]. Editors of leading medical journals, and reviews by academic researchers, have heavily criticised the pharmaceutical industry for sponsoring trials whose results ‘favour the sponsors' interests’ [93, 97, 98]. As the industry funds the majority of large international trials, and is particularly effective at disseminating its findings [99], any bias in its studies could create serious problems for the evidence base. Many industry-funded trials involve collaboration between industry and academia, although the academics are often not involved in the data analysis [100]. An evaluation of the research practices of the pharmaceutical industry concluded that ‘those who have the gold make the evidence’ [101].

In general, drug company studies are not at a higher risk of bias in their methods than other trials [93], so the explanation for the higher frequency of positive results must lie elsewhere. Other possibilities that have been suggested include the more frequent use of surrogate outcomes and publication bias, in which trials with negative findings are not published [93, 102, 103]. An analysis of head-to-head trials (which directly compare drug against drug, rather than drug against placebo) showed that 96.5% of trials favoured the drug manufactured by the company funding the study [104]; this may be evidence of industry manipulation. An analysis of internal documents from the industry found that suppression of negative studies and spinning of negative findings were recognised techniques [105]. As one author put it, allowing drug companies to generate evidence ‘is akin to letting a politician count their own votes’ [103].
