Friday, September 25, 2009

"AIDS Vaccine" Story Gets Booster from Major News Outlets

Reuters reports that a test of an "AIDS vaccine" in Thailand has been shown to be "31.2% effective" in preventing HIV infection. The same report also describes the results as "inconclusive," while the AP adds that the study was designed to demonstrate "clear benefit" at a threshold of 50% effectiveness. In other words, this cocktail of two vaccines each previously proven worthless has not demonstrated any clear benefit at all.

These are contradictions that need to be addressed editorially, and no report I have found yet does so. Even those reports that have now updated their boosterish headlines include reporting that assumes a cause-and-effect relationship from vaccine to prevention. The latest Reuters hed even rounds the (miscalculated, see below) 31.2% up to an even sexier "1/3," a figure about 7.25% larger than the truth. Donald G. McNeil Jr., at the NY Times, told me (see update below) that his editors did the same thing to his bylined story on the vaccine trial. (The most recent version of his story has been updated to remove all implications of agreement with the researchers' overreaching conclusions.)

The exaggeration rampant in these reports (McNeil's excepted) is crazy boosterism, not responsible reporting. The editorial responsibility here is to be more cautious than the enthusiastic researchers themselves (plus others caught off-guard for a snap response on the phone). And that's where it's all gone wrong. No one has evaluated the numbers as numbers in context. And no one should be using unmodified verbs like "prevented" and "blocked" to describe these results until that analysis is complete.

In that spirit, here is my very limited, preliminary whack at the numbers, in hopes of prodding better-qualified others to take their own:

Of about 16,400 normal-risk, 18-to-30-year-old Thai volunteers in the study (run by the US Army, Thai health officials and others), half received a vaccine cocktail of two previously ineffective vaccines, while half received a placebo. The trial was based on a "hunch" by researchers, not any previous evidence of effectiveness. Of the two groups, 51 (vaccine) and 74 (placebo) study participants became infected with HIV. Real people, divided into two groups beforehand. Not a randomized look at a randomized sample from a greater population.

I stress the sketchy parts, because they all undermine the pure numbers involved.

Col. Jerome H. Kim, MD, "manager of the army's HIV vaccine program" (per NYT), is quoted in all reports claiming "31.2% effectiveness" based on that difference of 23 infections from the placebo group's 74.

First of all, 23/74 = 31.08%! But Dr. Kim's 31.2 has been picked up by the New York Times, Bloomberg, and dozens of other news outlets. Anybody got a calculator? Remember long division?
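For anyone who wants to check the arithmetic, here is a minimal sketch in Python. It assumes, as the news accounts imply, that the claimed "effectiveness" is simply the difference in infections divided by the placebo group's infections; the trial's own statisticians presumably did something more elaborate, but that ratio is where the quoted figure appears to come from.

    # Checking the reported "31.2%" against the raw counts cited in the reports.
    placebo_infected = 74
    vaccine_infected = 51
    difference = placebo_infected - vaccine_infected            # 23

    effectiveness = difference / placebo_infected               # 0.31081...
    print(f"computed:  {effectiveness:.2%}")                    # 31.08%
    print("reported:  31.2%")

    headline = 1 / 3                                            # the "1/3" in the Reuters hed
    print(f"headline vs. computed: {headline / effectiveness - 1:.2%} larger")   # about 7.25%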

The far greater problem is that while the study size was large, the differential of 23 is tiny. Twenty-three is an awfully small number from which to derive a figure quoted to a tenth of a percent. So, how small would that difference have had to be to be considered insignificant? The expert I spoke to today suggested that a difference of 17 might have been small enough to be statistically meaningless. Seventeen versus 23 is an awfully thin margin when we're talking about actual study participants and their individually unique high-risk behavior.
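Where might a cutoff like 17 come from? Here is one plausible back-of-the-envelope reading, and it is only my reconstruction of the quick-and-dirty reasoning described further below: treat the placebo group's 74 infections as a count subject to roughly Poisson-style noise, and take two standard deviations as the band you could blame on chance.

    import math

    # My reconstruction, not the trial's analysis: a difference smaller than about
    # two standard deviations of the placebo count could plausibly be chance alone.
    placebo_infected = 74
    noise_band = 2 * math.sqrt(placebo_infected)
    print(f"two-sigma band: about {noise_band:.1f} infections")   # ~17.2

    observed_difference = 74 - 51
    print(f"observed difference: {observed_difference}")          # 23, barely beyond that band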

Not to mention proving that a vaccine works.

But it gets worse. The initial Times report mentioned:

The most confusing aspect of the trial, Dr. Kim said, was that everyone who did become infected developed roughly the same amount of virus in their blood whether they got the vaccine or a placebo.


Normally, any vaccine that gives only partial protection — a mismatched flu shot, for example — at least lowers the viral load.

So the vaccine seems to act as if it doesn't work at all. How about that!

Not only that: previous studies of each of the component vaccines showed a possible increase in HIV infection within the non-placebo groups. So not only do we have odd, scientifically questionable results; they come with tiny numbers, erroneously calculated ... and reported as if they provided concrete evidence of effectiveness.

No.

My expert was Prof. Joel Levine, chair of the department of Mathematics in Social Sciences at Dartmouth College (and, full disclosure, with whom I'm preparing a textbook on statistics). He did find the numbers potentially significant, but noted that by a "quick and dirty" calculation, a band of roughly two standard deviations around the placebo group's 74 infections runs from about 56.8 to 91.2. In which case, the 51 infections in the vaccine group fall only about six below the low end of what might be expected from chance alone, a threshold which, he carefully notes, "would depend on convention, no more (no less)," and which, again, does not account for the peculiarities of high-risk conduct while contracting HIV.
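Here is that quick-and-dirty band reproduced in a few lines of Python. The plus-or-minus two-times-the-square-root-of-74 formulation is my reading of the figures Prof. Levine cited, not his worked example:

    import math

    # Reproducing the rough "chance" band around the placebo group's count,
    # assuming (my assumption) it is 74 plus or minus two standard deviations,
    # with noise of about sqrt(74).
    placebo_infected = 74
    vaccine_infected = 51

    sd = math.sqrt(placebo_infected)                            # about 8.6
    low = placebo_infected - 2 * sd
    high = placebo_infected + 2 * sd
    print(f"chance band: {low:.1f} to {high:.1f}")              # about 56.8 to 91.2

    print(f"vaccine group is {low - vaccine_infected:.1f} below the low end")   # about 5.8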

Fifty-seven infected? Statistical noise. But 51 infected while on the vaccine proves it sometimes works? Not possible.

Prof. Levine stresses that statistical analysis is as much art as science, which should put a damper on any cause-and-effect story behind the 31.08% fewer infections in the vaccine group. Different behaviors? Different meds? Poor sorting of the placebo/vaccine groups? Who knows? Reporting as if the cocktail caused the lower numbers is akin to magical thinking, regardless of the enthusiasm and spin of the very interested sponsors of this massive trial.

Are these results statistically significant enough for further investigation? Of course. Why not? But clearly, the cocktail is not effective enough. Can it be shown the vaccines "prevented," "blocked," and "cut" infections, as reported? Sorry. No. No matter how many people shout it's so.

Given the buckshot nature of statistical sampling with actual pre-selected humans (and their randomly distributed high-risk behaviors), to agree editorially with Dr. Kim and assert any "effectiveness" is to jump well beyond the scope and defensible conclusions of this study. And let's not forget: the AP reports that the threshold for "clear benefit" was set, beforehand, at 50% prevention.
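As a rough illustration only, and assuming for simplicity equal numbers at risk in both arms, clearing a 50% bar against the placebo group's 74 infections would have meant roughly 37 or fewer infections in the vaccine group. There were 51.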

That's probably where it should stay, despite today's exciting and suggestive numbers, and researchers' new suggestions that a vastly lower threshold demonstrates a clear benefit as well.


UPDATE: After composing this post, I spoke by phone with Donald McNeil, the author of the NY Times report (who contacted me by email in response to my notes to Senior Editor Greg Brock). He asked me precisely some of the questions raised here, particularly ("from one scientist--not a biostatistician"): "How many more infections would it have taken to make the difference statistically meaningless?" I have updated, above, to reflect changes he has made to his initial story.

He says he expects his hard look at these numbers to run in Sunday's Times. Bravo.



SUNDAY TIMES UPDATE: It's the lead story on the Times website, the cover piece of the Week in Review. Not surprisingly (as it closely follows this blog post!), I think he got it right. I'd rather have been quoted, but I'm glad to be recognized as a blogger "with a taste for biostatistics."
