Monday, September 19, 2011

Statistical considerations and psychological effects in clinical trials

I find it illuminating to read statistics "bibles" in various fields: not only do they open my eyes to different domains, but they also present the statistical approach and methods somewhat differently, raising unique domain-specific issues that cause "hmmmm" moments.

The 4th edition of Fundamentals of Clinical Trials, whose authors combine extensive practical experience at NIH and in academia, is full of hmmm moments. In one, the authors mention an important issue related to sampling that I have not encountered in other fields. In clinical trials, the gold standard is to allocate participants to either an intervention or a non-intervention (baseline) group randomly, with equal probabilities. In other words, half the participants receive the intervention and the other half do not (the non-intervention can be a placebo, the traditional treatment, etc.). The authors advocate a 50:50 ratio because "equal allocation is the most powerful design". While there are reasons to change the ratio in favor of the intervention or baseline group, equal allocation appears to have an important additional psychological advantage over unequal allocation in clinical trials:
Unequal allocation may indicate to the participants and to their personal physicians that one intervention is preferred over the other (pp. 98-99)
Knowledge of the sample design by the participants and/or the physicians also affects how randomization is carried out. It becomes a game between the designers and the participants and staff, where the two sides have opposing interests: to blur vs. to uncover the group assignments before they are made. This gaming requires devising special randomization methods (which, in turn, require data analysis that takes the randomization mechanism into account).

For example, to assure an equal number of participants in each of the two groups, given that participants enter sequentially, "block randomization" can be used. For instance, to assign 4 people to one of two groups A or B, consider all the possible arrangements AABB, AABA, etc., then choose one sequence at random, and assign participants accordingly. The catch is that if the staff have knowledge that the block size is 4 and know the first three allocations, they automatically know the fourth allocation and can introduce bias by using this knowledge to select every fourth participant.
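To make the mechanics concrete, here is a minimal sketch of permuted-block randomization in Python (the function names are mine, not from the book):

```python
import itertools
import random

def random_block(block_size=4):
    """Pick one block uniformly from all arrangements with equal A/B counts."""
    half = block_size // 2
    blocks = sorted(set(itertools.permutations("A" * half + "B" * half)))
    return list(random.choice(blocks))

def block_randomize(n_participants, block_size=4):
    """Assign participants sequentially using permuted blocks of a fixed size."""
    assignments = []
    while len(assignments) < n_participants:
        assignments.extend(random_block(block_size))
    return assignments[:n_participants]

print(block_randomize(12))  # e.g. ['A', 'B', 'B', 'A', ...] -- balanced within each block of 4
```

Within every block of 4 there are exactly two A's and two B's, which is precisely what makes the fourth assignment deducible from the first three.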

Where else does such a psychological effect play a role in determining sampling ratios? In applications where participants and other stakeholders have no knowledge of the sampling scheme this is obviously a non-issue. For example, when Amazon or Yahoo! present different information to different users, the users have no idea about the sample design, and perhaps do not even know that they are in an experiment. But how is the randomization achieved? Unless the randomization process is fully automated and not susceptible to reverse engineering, someone in the technical department might decide to favor friends by allocating them to the "better" group...


Yossi Levy said...

A few comments.

Regarding unequal allocation – when the sample size is fixed, it is true that equal allocation usually maximizes the power (at least when the distribution of the test statistic under the null hypothesis is symmetric). However, if an unequal allocation is desired, it is easy to calculate the sample size that will provide the desired power.
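[A rough sketch of that calculation, using the standard normal-approximation formula for comparing two means; the function and variable names are illustrative, not from the comment:]

```python
import math
from statistics import NormalDist

def total_n(delta, sigma, ratio=1.0, alpha=0.05, power=0.8):
    """Total sample size to detect a mean difference `delta` (common sd `sigma`)
    at two-sided level `alpha` with the given power, where ratio = n2/n1."""
    z = NormalDist().inv_cdf
    core = ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2
    n1 = (1 + 1 / ratio) * core   # var of the difference is sigma^2*(1/n1 + 1/n2)
    n2 = ratio * n1
    return math.ceil(n1) + math.ceil(n2)

print(total_n(0.5, 1.0, ratio=1.0))  # equal allocation: smallest total n
print(total_n(0.5, 1.0, ratio=2.0))  # 2:1 allocation needs a larger total
```

The equal-allocation total is the minimum; moving to a 2:1 ratio with the same effect size and power requires more participants overall.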

There are many cases in which it is known a priori that one treatment is preferred over the other. A "superiority over placebo" design is the obvious example. Therefore, an unequal randomization ratio might be a good choice in such cases. It does not affect the validity of the results as long as the blinding is kept.

Regarding block randomization: if the staff somehow know the first three allocations, then the trial sponsor has a very serious issue to worry about, and the fact that the staff can deduce the fourth allocation if they know the block size is almost a non-issue. A somewhat more serious issue with block randomization is that if the staff know the allocation of one subject (which is possible, for example, in case of unblinding due to a serious adverse event) and they also know the block size, they can use Bayes' theorem to update the probabilities that the rest of the subjects in the block are on treatment A. For this reason the block size is kept secret along with the whole randomization scheme. Also, we pharmaceutical statisticians rely on the fact that most medical staff cannot apply Bayes' theorem properly…
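[A small enumeration sketch of the Bayes update described above, assuming a hypothetical permuted block of size 4 with a 1:1 ratio:]

```python
from itertools import permutations

def prob_next_is_A(block_size=4, revealed=("A",)):
    """P(next assignment is 'A') given the revealed start of a permuted block."""
    half = block_size // 2
    blocks = set(permutations("A" * half + "B" * half))
    # keep only blocks consistent with what was unblinded so far
    consistent = [b for b in blocks if b[:len(revealed)] == tuple(revealed)]
    return sum(b[len(revealed)] == "A" for b in consistent) / len(consistent)

print(prob_next_is_A(4, ()))      # no unblinding: 0.5
print(prob_next_is_A(4, ("A",)))  # first subject known to be on A: drops to 1/3
```

Knowing a single allocation shifts the probability for the next subject from 1/2 to 1/3, which is exactly the leverage Yossi describes.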

Yossi Levy
The princess of science

Galit Shmueli said...

Thanks for these comments Yossi - Indeed, the book also discusses allocation for bioequivalence and other types of goals. I find it illuminating to see so many issues in clinical trials that are not mentioned in other contexts, yet perhaps they should be!

There is also a fascinating discussion about practical hindrances to blinding, such as taste, color, or other features of the medicine (and participants who compare while in the waiting room!) After reading the whole discussion about the gaming by medical staff and participants who try to uncover the blinding -- which makes total sense -- who wants not to know? I am seriously wondering whether hiding information is a good idea. The Bayesian approach, where you take into account prior information rather than "erasing" people's minds, seems more ethical and more humane. Of course, I can see the point about sacrificing individuals' welfare for the benefit of the larger society, but still, hiding knowledge doesn't sound right (practically and ethically). I'd love to get your comments on this issue.

ronkenett said...

Blocks are usually defined in much smaller sizes than necessary. For example, in a comparative trial with 12 patients per center, a block size of 12 would be adequate for balancing treatment versus placebo. Common practice however is to set up blocks of size 4, thus increasing the 'psychological' and uncovering risks mentioned above.

This emphasizes the importance of running a comprehensive risk analysis of a trial design.

Who should conduct this analysis: the sponsor, the CRO, the data monitoring committee, ...?

Galit Shmueli said...

Thanks Ron -- you are suggesting improved gaming, but my main concern is that "the incentives are not aligned" for the researchers, participants and physicians. Although I have never been directly involved in a clinical trial, I am wondering if there are "better aligned" solutions.