The solution is to ensure those involved in the trial have no access to the randomisation sequence, a process called allocation concealment. To achieve this, the randomisation is commonly handled by a remote site, such as a clinical trials unit, which could, for example, provide the treatments in separate containers labelled A or B. The methods used to conceal treatment allocation should be clearly described in the trial report, to give reassurance that bias is unlikely to have occurred.
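The central-unit workflow described above can be sketched in code. This is a hypothetical illustration, not a validated randomisation system: a trials unit generates a blocked sequence in advance and releases only one assignment at a time, so recruiting clinicians never see the sequence itself.

```python
import random

def make_concealed_allocation(n_patients, labels=("A", "B"),
                              block_size=4, seed=2024):
    """Generate a blocked randomisation sequence, as a central trials
    unit might (an illustrative sketch; real units use validated,
    audited systems). Blocking keeps group sizes close to equal while
    the sequence stays unpredictable to recruiting sites."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_patients:
        block = list(labels) * (block_size // len(labels))
        rng.shuffle(block)  # permute each block independently
        sequence.extend(block)
    return sequence[:n_patients]

# The central unit holds the full sequence; a recruiting site is told
# only the next assignment after a patient has been entered.
sequence = make_concealed_allocation(12)
print(sequence)
```

The key design point for concealment is not the generation of the sequence but its custody: the list lives with the remote unit, and treatments arrive in containers labelled only A or B.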
Overviews of trials have shown that inadequate or poorly reported methods of allocation concealment are common. A major review of over 20,000 trials found that allocation concealment was adequate in only 35% of trials [5]. Other overviews in different clinical specialties found that the process was adequately described in 53% of neurological trials [8], 43% of surgical trials [3] and 27% of trials in multiple sclerosis [9]. An extreme example comes from the field of oral health, in which only 15% of trials had low risk of bias for allocation concealment [10]. These studies suggest that as many as two thirds of trials are at risk of producing biased estimates of treatment effect.
Evidence that the Randomisation Process Is Subverted
Randomisation will usually produce two treatment groups with similar sample sizes, but only rarely will they have identical numbers of patients. A review found that many more trials had groups with identical sample sizes than could occur by chance [11]. The conclusion is that someone may have modified the randomisation sequence to prevent disparity in the size of the two groups, an action described as ‘forcing cosmetic credibility’ [11].
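The intuition behind this finding can be made concrete. Under simple (coin-flip) randomisation of n patients, the chance of exactly equal groups is C(n, n/2) / 2^n, which is modest and shrinks as trials get larger, so a literature full of perfectly balanced trials is implausible. A short sketch:

```python
import math

def prob_equal_groups(n_patients):
    """Probability that simple (coin-flip) randomisation of n_patients
    produces two groups of exactly equal size: C(n, n/2) / 2**n."""
    if n_patients % 2:
        return 0.0  # an odd total can never split evenly
    return math.comb(n_patients, n_patients // 2) / 2 ** n_patients

for n in (20, 100, 400):
    # probability falls roughly like 1 / sqrt(n)
    print(n, round(prob_equal_groups(n), 3))
```

For a 100-patient trial the probability is only about 8%, so finding identical group sizes in a large fraction of trials that claim simple randomisation is itself evidence of tampering (blocked randomisation, of course, produces balance by design and is a legitimate explanation when reported).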
Randomisation should also result in the two groups being similar at baseline on characteristics such as age and clinical signs and symptoms. Small differences between groups commonly occur, but many trials have much more marked imbalances between groups than would be expected by chance. This has been documented for participant age [12] and for important clinical predictors of outcome [13]. These imbalances suggest that the randomisation sequence has been altered, with the likely consequence that the estimates of treatment effect will be biased.
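The statistical logic here is that under genuine randomisation, p-values from baseline comparisons are uniformly distributed, so significant imbalances should appear in only about 5% of comparisons; a marked excess signals altered sequences. A minimal simulation, using only the standard library and an assumed baseline variable (age with known spread) for a simple z-test:

```python
import random
import statistics

def simulate_baseline_p(n_trials=2000, group_size=50, seed=7):
    """Simulate many genuinely randomised trials and count how often a
    baseline age comparison is 'significant' at the 5% level. Under
    true randomisation this fraction should sit near 0.05; in the
    published literature, rates well above this suggest manipulation.
    Illustrative sketch with known SD = 10, so a z-test suffices."""
    rng = random.Random(seed)
    n_significant = 0
    for _ in range(n_trials):
        a = [rng.gauss(60, 10) for _ in range(group_size)]
        b = [rng.gauss(60, 10) for _ in range(group_size)]
        se = (10**2 / group_size + 10**2 / group_size) ** 0.5
        z = (statistics.mean(a) - statistics.mean(b)) / se
        # two-sided p < 0.05 corresponds to |z| > 1.96
        n_significant += abs(z) > 1.96
    return n_significant / n_trials

print(simulate_baseline_p())
```

Reviews such as [12, 13] apply essentially this reasoning in reverse: they compare the observed distribution of baseline differences across published trials with the distribution randomisation would generate.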
A few studies have explored whether researchers admit to deciphering the randomisation sequence. Schulz and colleagues found that many clinicians try to decode the sequence [14]. Paludan‐Muller and colleagues [15] reviewed surveys of the reasons why clinicians do this. The most common reasons were that a doctor had a preference for a treatment for a particular patient, or had a desire to show that the new treatment was effective. Some researchers admitted to distorting the randomisation sequence by entering two or more patients at the same time, so that a particular patient could be allocated to a preferred treatment [15]. In some trials the treatment allocation codes are delivered in sealed envelopes [16], enabling some clinicians to subvert the randomisation by opening the envelopes before entering the patients [15]. Whatever the method of manipulation, deviations from random allocation could lead to bias.
Does Integrity of Allocation Concealment Matter?
A seminal paper by Schulz and colleagues in 1995 showed that studies that reported inadequate concealment of treatment allocation exaggerated the estimates of treatment benefit [17]. The initial overview evaluated 250 trials of interventions in pregnancy and childbirth. Since then this finding has been replicated and extended by two overview studies covering thousands of trials across all areas of medicine [7, 18]. Their shared finding is that poor or inadequately described methods of allocation concealment lead to exaggerated treatment effects.
PROBLEMS IN MEASURING THE OUTCOME
The effectiveness of a treatment is assessed by comparing the health status of those in the intervention and control groups at the end of the trial. In many clinical settings the effect of treatment could be measured by several different outcomes. For example, in cancer trials possible outcomes would include the average survival time, disease‐free survival or quality of life. Commonly, one outcome measure is designated the primary outcome, with the other outcomes being termed secondary measures. (This is to prevent researchers from analysing many different outcomes, then highlighting the one which looks best.) Selecting the primary outcome involves difficult choices, but it greatly simplifies the interpretation of the results.
Switching Primary Outcomes
Before recruiting patients, many trials report their detailed methods in an international trial register. Several international registers have been established (e.g. ClinicalTrials.gov and the ISRCTN registry) [19, 20]. Many researchers also publish their study protocols in a medical journal. These sources allow other researchers to compare the outcome measures that were initially specified with those that are presented in the publication of the trial results. A review of outcome reporting in high quality neurology journals found that in 180 trials, 21% of the specified primary outcomes had been omitted, 6% of primary outcomes were demoted to secondary outcomes and 34% of trials added previously unmentioned primary outcomes [21]. A similar pattern was seen in trials published in haematology journals where 40% of primary outcomes had been omitted, 25% of primary outcomes were demoted and many new outcomes were added [22]. The evidence is clear that in trials across the medical specialties, primary outcomes are frequently changed [23–26].
Outcomes may be changed for good reasons, such as replacing a difficult-to-measure outcome with a more amenable one. But there may be other motives. Several studies have shown that the effect of substituting outcomes favours the publication of positive findings [21, 26]; that is, a non‐significant primary outcome is demoted and a significant secondary one is promoted to primary outcome. Compared to trials with unchanged outcomes, those with substituted outcomes report an increased effect size [27].
One study explored why authors had omitted or changed outcomes [28]; often this was because the researchers thought that a non‐significant result was not interesting. A review of such studies found that a preference for positive findings, and a poor or flexible research design, were the reasons most commonly mentioned for switching outcomes [29]. It seems likely that outcomes are sometimes changed based on the findings from an initial analysis.
Blinding of Outcome Assessment
A long‐standing feature of trials is that the patient, and the person who measures patient status (the outcome) at the end of the trial, should be unaware of (blinded to) the treatment the participants received. This ensures that knowledge of treatment group does not influence the way the outcome is measured.
In many trials the method of blinding outcome assessment is poor. An evaluation of 20,920 trials included in Cochrane systematic reviews found that, for blinding of participants, 31% of trials were at unclear risk of bias and a further 33% were at high risk [5]. For blinding of outcome assessment, 25% were at unclear risk of bias and 23% were at high risk [5].
Another concern is whether blinding is compromised. This can happen when an intervention is sufficiently different from the control (e.g. by taste) that the patient identifies which treatment they have been given, and reveals this to the outcome assessor [30]. Few studies report whether they have assessed the risk that unblinding has occurred [31]. However, one study that contacted the authors of published trials found that 43% had assessed this risk without reporting it, and that in 11% of studies it was likely that blinding had been compromised [30].