Cold Confusion: Alexandros Marinos unpacks the TOGETHER Trial with Bret
Bret speaks with Alexandros Marinos, who has analyzed the TOGETHER trial, which aims to identify effective repurposed therapies to prevent the disease progression of COVID-19. Alexandros is the Founder/CEO of http://balena.io & aspiring practical philosopher.
Find him on:
https://twitter.com/alexandrosm
https://doyourownresearch.substack.com/
Mentioned in this episode:
https://www.togethertrial.com/
https://doyourownresearch.substack.com/p/the-problem-with-the-together-trial
*****Find Bret Weins...
In fact, Mills, the principal investigator, in his email to Steve Kirsch said, I actually think the result is positive and it shows a 17% reduction in hospitalization.
And if we had only randomized a few more patients, I believe that it would have come out significant.
So this is literally his quote.
This is stunning.
The principal investigator on this trial, this trial, which has been heralded from the rooftops as suggesting that this drug has no effect.
The principal investigator believes that the trial failed to show effectiveness only because they didn't have enough patients.
He believes they saw an effect.
Yep.
Right?
And that if they had randomized more patients that that effect would have been such that the conclusion would have been different.
That is an amazing thing to be true coming from the primary author on a paper that the Wall Street Journal is telling us says ivermectin doesn't work.
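To make the sample-size point concrete, here is a minimal sketch in Python of a two-proportion test. The event rates and per-arm sizes are purely illustrative, not the trial's actual numbers; the point is only that a roughly 17% relative reduction in hospitalization can miss the conventional significance threshold at one sample size and clear it at a larger one.

```python
import math

def two_prop_p_value(events_a, n_a, events_b, n_b):
    """Two-sided p-value for a difference in proportions (normal approximation)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return math.erfc(z / math.sqrt(2))

# Illustrative rates only: ~17% relative reduction (14.4% vs 17.2% hospitalization risk).
for n in (680, 2500):  # hypothetical per-arm sample sizes
    p = two_prop_p_value(round(0.144 * n), n, round(0.172 * n), n)
    print(f"n per arm = {n}: p ≈ {p:.3f}")
# With ~680 per arm the same relative reduction is not statistically significant;
# with ~2500 per arm it is.
```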
Hey, folks, welcome to the Dark Horse Podcast.
I am Dr. Bret Weinstein, and I am sitting today with Alexandros Marinos, who is the CEO of balena and who has been doing a deep dive on the recently revealed TOGETHER trial ivermectin arm.
We are going to talk through that work today.
Welcome, Alex.
Hi, good to be here.
So I should probably tell the audience that we have become friends.
I became aware of you at the point that you showed up in the early days of the Dark Horse live streams and the Dark Horse discussions with Robert Malone, Steve Kirsch, Pierre Kory, etc.
There was a lot of criticism of the Dark Horse podcast at the time, and you, who I now understand are a member of the rationalist community...
Maybe you're a recovering rationalist, I'm not sure.
But in any case, you showed up and volunteered to build a protocol to see whether or not the Dark Horse Podcast was any good at analyzing scientific work surrounding COVID and its various potential treatments.
I must say that when you emerged to do that work with the project that you called Better Skeptics, I believe you were responding to Heather's suggestion that we need better skeptics in the world.
When you showed up to do that work, I was not happy about it, because your method involved scrutinizing transcripts of the Dark Horse Podcast.
And while we certainly endeavor to say things that are true and to correct any errors, transcripts always make a person seem like an idiot.
You know, the way we talk is not the same thing as the way we write.
It doesn't have the same level of care and precision, and so I was concerned about you doing that.
But suffice it to say, you did an admirable job.
You set up the thing, but you also had referees who were not you, so it was independent.
Do you want to just quickly say what you found so that we can put that behind us and get to the TOGETHER trial?
Sure, sure.
Yeah, so the back story there was that I was, you know, a listener of Dark Horse.
I had a good impression of Yuri at the time.
When I saw the two sort of collide, right, I was like, okay, this is interesting now, right?
because, you know, try to fact check, like, what comes out of official sources or whatever, that's a far more fraught situation.
Here, we have some people who have, you know, a good track record, and they're in conflict, and that's actually something that's, you know, worth digging into, so we did that project. And the reason I had some confidence that the transcripts are not as bad as you might immediately think is because I was already challenging people on Twitter to say exactly what the problem was, because I was hearing ambient, you know, the hate was kind of, like, faceless and...
So I was like, all right, like, I'm willing to, you know, I don't know what I believe here in this in this conflict, like, Can you point me to somewhere?
And I had people sort of start pointing me in various directions.
Look at this minute.
Look at that minute.
And I'm like, it doesn't say what you think it says.
And I literally started typing down.
It was a Steve Kirsch quote where I thought, okay, if anybody said anything outrageous in the Kirsch Malone Dark Horse episode, that would be Kirsch.
And I look at it and it's like, no, he actually used three separate qualifiers and he said exactly what he meant to say.
And that was a carefully qualified expression.
If you, you know, if you bleach it, there's nothing left of the quote, but like random words that you put in succession.
Sure.
But that's not what he said.
So then I was like, you know what?
I think the transcripts are going to be pretty good.
I didn't, I didn't read them closely.
But I think I got a good ear.
Like I catch when people are saying outrageous things, I catch it in real time.
And I'd heard all this stuff.
I was like, I had a background confidence that whatever it was, it wouldn't be something like blatant.
And you know, I mean, what we caught in the end, there was like two or three things that got caught.
One was that, because we also did, we did four podcasts.
One of them was your appearance with Pierre Cory on Joe Rogan.
And he mentioned that, you know, the WHO is still minimizing airborne transmission.
Technically speaking, a few weeks earlier, maybe four weeks earlier, the WHO had reversed its position.
Now you can debate whether, you know, what minimizing means, but, and honestly, like are we expecting Pierre to be like on the daily with like the WHO?
But factually, it was a catch.
Or there was another one about Zimbabwe, where he had, I believe, personal contact with Jackie Stone over there, and he repeated some things for which I could only find hints. But I also did not participate in the challenge myself, so I just let it come through.
And it's true that the Zimbabwe situation was not as clear as maybe Pierre presented it.
There were a few of those that we caught, but there wasn't anything that was, you know... honestly, I think this is going to sound strange.
I think if the people who were trying to find claims had been less rabid and had paid more careful attention, they would have found more. If I had been trying to do that, I would probably have found more things. But my sense is that they would be in that same range.
Normal sorts of things you might say that are a little bit out of date, or that were, you know, broadly justifiable but maybe not quite correct. But, you know, we're talking about four podcasts, right?
We're talking about 11 hours of spoken word.
Just like put that in context.
You get one clip of CNN and you find more stuff wrong than we find in 11 hours.
So that was shocking to me.
I was really, really expecting a lot more.
Yeah, I was expecting more too, just because it is spoken word.
It's not something that one has written and then gone through to tighten it up.
So anyway, I was very pleased with how it came out, and I was struck by the fact that you were able to do a good and what I thought was a fair job, even given a mechanism that I thought was destined to flag many, many very minor errors.
But let's put that behind us for the moment, except to say that what we are going to do today is in some sense you continuing down the same road, which is to take things that are contentious in public and evaluate what's really there and what its meaning is.
And what we're going to do is we're going to look at the TOGETHER trial, which is a very strange scientific phenomenon in the sense that its conclusions were revealed more than half a year ago.
The paper that reports on what is going to be called the Ivermectin Arm, or maybe it should be described as the Ivermectin Arms, was only published recently in the New England Journal of Medicine.
And so in some sense, we've had the conclusion or the supposed conclusion of that arm of the trial for half a year, but have not been able to scrutinize the methods or the reasoning that went into delivering it, which is a very odd and upside down scientific phenomenon.
Now, the way we're gonna do this is we're gonna look at the piece you have just published.
You have published it on your own substack, so this is not something that has gone through any sort of official editorial review.
Nonetheless, you have collaborated with many people in generating it.
You have solicited feedback on it.
So in many ways, it's as scrutinized as anything that gets published, given the number of eyes that have read it and offered critique and the ways in which you have fixed it.
But nonetheless, it's going to be a little bit of an unusual Dark Horse episode. It's going to be challenging for people who are listening audio-only. You are going to put together something like a slide deck that will allow people to look at the figures that we are discussing, so they can, you know, scroll through in real time, and you and I are going to be scrolling through together, but not on the screen.
So anyway, you will give a series of signposts that tell people where to be looking in this piece, and they can look at it and read it for themselves.
I will provide links to your piece, to the original New England Journal of Medicine publication and presumably other artifacts like the Wall Street Journal article touting the publication, etc.
All right.
So with that, let us get started.
Where should we begin?
We are looking at your substack piece, which is titled The Problem with the TOGETHER Trial.
Right.
Vanilla title.
How do you call this?
It's such a big piece, you don't really want to put the conclusion in the title.
So, I think the place to start is to understand what the TOGETHER trial was.
I think you can't really make any sense of the conversation until you understand what the TOGETHER trial is.
The TOGETHER trial is ongoing, right?
In a sense it is, but we're talking about a period of several parallel studies that were running, which started in early 2021, mid-January let's say, and ran toward the end of 2021.
You know, around the time of the, you know, inauguration of the current president, etc.
Just to put it in context of, you know, what was happening in the world.
And then the trial ended, at least the part that we're interested in, on August 5th and 6th.
Yeah.
Maybe there's a little bit of tail in the data.
Maybe it went up to the sixth.
So, that period of time is what we're looking at.
And for that period of time, maybe the first diagram in the article is useful here.
Adaptive trials basically run multiple arms at the same time, right?
It's an elegant idea.
So, I'm going to try to help translate for people.
Sure, sure, sure.
What we're looking at is a composite trial where the same investigators were looking at the effect of different possible treatments for COVID-19.
At the same time, they were leveraging a... Basically, if you're testing multiple drugs, and they're all to be tested against a placebo, you can get a benefit by using a single shared placebo group, right?
So you have one placebo generating engine, effectively, and you can compare it to all of these treatments simultaneously, rather than having to do an independent one for each.
So, there's a kind of economies of scale issue here, and the arms refer to the different substances or protocols being tested.
Right.
Yes.
You know, you can think of it as an efficiency in terms of the funding you're going to use to generate results.
You can think of it as an efficiency even in kind of human lives, right?
You don't want to put too many people at risk.
It's an admirable sort of setup, and as I've delved into it, I've gotten, you know, quite enthusiastic about the idea, but also quite concerned about the knobs it presents.
And, you know, when we built the RCT system, right, we put in a number of things, you know, the whole blinding stuff, and there's a lot of rules around how you do an RCT in order to prevent gaming of that idea.
And we'll talk through it.
But my sense is that it hasn't matured yet to the point where we know all of the knobs.
Okay.
So, I want to clarify a number of things.
I'm going to try to stand in for the audience here and ask the questions I think they will obviously want, or at least some will want to know.
When you say RCT, you're referring to randomized controlled trial, which many people will have heard many times during the pandemic, is the gold standard for scientific evidence.
Now, Heather and I have challenged repeatedly whether or not it makes sense to think of it as the gold standard.
It certainly has an advantage.
Which is, if it is well done, it is excellent for taking what may be a very subtle effect and amplifying it so that you can see it above the noise.
It tends to control for all of the factors that you may not even be aware are impinging on your trial to reveal an actual effect.
That being said, it is also an easily gamed structure.
That is to say, it is sensitive if the underlying work is poorly thought out or if there is bad faith in its structuring, which we will get to.
So, randomized control trial is what you mean when you say RCT.
And this is a double-blind randomized controlled trial, double-blind meaning that whether any given individual was in the treatment group or in the control group should have been opaque to both the experimenters and to the patients.
So, that is to say, the sense that somebody is getting the drug and it is working or not working should not have been available as an input, right?
That people should not have known whether somebody was getting the treatment or getting the placebo.
Dark Horse has a new sponsor with this episode, The Spectator.
As the longest-running magazine in the world, and indeed the known universe, The Spectator eschews identity politics in favor of intelligent conversation and thought.
From the war in Ukraine to the ideological war in the classroom, from the rise of inflation to the rise of cancel culture, The Spectator has been dedicated to stimulating reporting and analysis since 1828.
The Spectator also covers the best in books, travel, food, wine, and much, much more.
The U.S. edition of The Spectator has recently come ashore and is bringing its high-quality writing and analysis to U.S. audiences for the first time.
We have a special offer for listeners of Dark Horse.
Sign up today and you'll receive three free months of the print magazine and full digital access.
Plus, they're going to send you a free Spectator hat, one great for free Spectators.
Just go to spectatorworld.com slash special offer and use the offer code Dark Horse.
Personally, I'm a fan of The Spectator because it is committed to the quality of its reasoning and writing, not to a particular political party or ideology.
It's got amazing contributors, including Douglas Murray, Lionel Shriver, Julie Bindel, Christopher Buckley, Roger Scruton, and Dark Horse's own Heather Hying.
The Spectator is less political party, more cocktail party.
Whether you lean left or right, you are guaranteed to be entertained, informed, and enlightened from cover to cover.
So sign up today and get three months of The Spectator for free, plus a hat that you can promise to eat if you're wrong, which you're unlikely to be as a reader of The Spectator.
Subscribe today at spectatorworld.com slash special offer.
Use the code darkhorse at checkout to redeem your offer.
That's spectatorworld.com slash special offer and use the code darkhorse.
This episode is sponsored by Faro Life, which makes skincare products from animal fats.
You heard me.
When you work with your hands, your skin gets rough.
Exposure to the sun and wind will do the same thing.
Skin Food by Faro Life is a terrific solution.
A little goes a long way.
It's made in small batches here in the US, and the fat is 100% sourced from farms that use regenerative and pasture-based animal husbandry, which Faro Life calls Smart Lard technology.
If you've got sensitive skin, or a baby with diaper rash, or a small child with eczema, or your hands are simply chapped from a night of grave robbing, you should try Faro Life Skin Food.
Faro Life uses no artificial chemicals or preservatives, and the products are highly effective at moisturizing human skin.
It really does work, and it doesn't make you smell like a girl either, not that there's anything wrong with that.
Faro Life is a young company, the first skincare company of its kind, and it is eager to produce a diversity of healthy, high-quality products including soap, deodorant, and lip balm.
After all, the lard works in mysterious ways.
Here at Dark Horse, we love what Faro is doing, and want to see them succeed.
When you discover how effective their products are, you will too.
Apply a little Skin Food daily to restore skin health, elasticity, and moisture.
Dark Horse listeners can save 20% off their first purchase by going to faro.life slash darkhorse or applying the code darkhorse at checkout.
Additional 15% savings by signing up for a subscription to receive Faro Skin Food on a monthly, bimonthly, or quarterly basis.
That's faro.life slash darkhorse.
Alright, I'll let you pick up from there.
We've got a randomized controlled trial, double blind, proceeding with multiple arms, that is to say multiple drugs under test against a placebo.
Right.
And that's sort of the adaptive sort of innovation on top of the RCT sort of system.
So you have multiple things running.
And so when the trial started, they were trying metformin, which is one drug that ended up being cut off early.
It didn't seem to be doing anything.
They were trying ivermectin, but at a dose that, you know, we might consider homeopathic.
I don't know.
Like it was a single dose of 400 mcg per kilogram.
And that was if you are exactly 60 kilos.
If you're above that, you get less per kilogram.
Wait, wait, wait, wait, wait.
So when you say homeopathic, you're kidding.
I'm kidding.
And the basic point is that homeopathy, which... let's leave aside what we think of it, but the basic problem that scientific and rational folks have with imagining that a homeopathic remedy might work is that the standard method for generating a homeopathic treatment is to dilute it so much that there is effectively no trace, even at the molecular level, of the drug or substance in question.
So when you say that the initial ivermectin protocol involved a single treatment of what was the dosage?
0.4, sorry, 400 mcg per kilogram.
Okay, 400 mcg per kilogram in a single dose.
So, anybody who is aware of the treatment protocols that were being applied by doctors on the ground knows that that's way too low. The dosage per kilogram of body weight is low, and a single dose is an absurd protocol in light of what the doctors who were using this to treat patients believed worked.
And then you're telling me that there's a third issue there, which is... I mean, yeah, in an attempt to be extremely fair, I did go and look at the FLCCC protocol at the time that this protocol was being designed.
So, not even when it started, but when it was designed, the FLCCC was recommending two doses of 200 mcg per kg.
However, this dose did not scale above 60 kilograms.
So, if you're 120, which is close to my weight, you'd get half of that.
Right, and it was not with a meal.
Anyway, so even 400 is basically a headline number.
If I'm trying to guess the average that a patient would have gotten in that trial, it would be more like 250.
So, let me try to clarify this.
The question, so first of all, people should know that there is a well-understood problem with what are called underpowered trials.
If you want to do a trial and get no effect, one way to do it is to use way too little of whatever it is, right?
If you want to test, for example, whether or not water is a good treatment for dehydration, And you test a thimble full of water in a population of 100 people dying of thirst, the thimble full of water will have some effect, but it will be so tiny that you will have almost no chance of spotting the difference in how long it takes those.
You know, this is obviously an absurd example, but So, an underpowered trial is a well-understood thing.
So, one thing you might do if you were concerned that this was an underpowered trial is you would say, well, what dose did they use?
And you would look at this dosage and you would have a per kilogram measure and you would say, well, it's not that underpowered, okay?
But the point is if you're only looking at the amount of ivermectin per unit of body weight, it may not be that underpowered.
A single dose is 50% of what you're telling me that the FLCCC was using at the point the trial was designed.
Plus, there's a kind of hidden factor here, which is that, for reasons that maybe you'll tell me they had some explanation for, but that are not obvious to me, they had a 60 kilogram cutoff. So the bigger you were, the lower the dose per unit of body mass, which is strange in light of the fact that, A, this is a fat-soluble drug, and so people with high BMIs might be expected to have this drug absorbed into adipose tissue and dropped to very low levels. And then the next thing you're telling me is that they gave it on an empty stomach. Everybody who is aware of what a good procedure would have been for ivermectin knows that, basically, if you want the ivermectin to stay in your gut because you have parasites, and it is a very well understood anti-parasite drug, you would take it on an empty stomach so that it didn't get absorbed into the body. But if you're treating COVID, you would want it absorbed into the body, so you would take it with a meal that had fat in it.
Just to pull this all together, because I want to try to be very fair because the concern I'm going to raise is going to be very significant.
So if we go back to December 20th, the FLCCC was not recommending it with a meal.
In mid-January it was, but they would have had to revise their protocol to catch up.
Anyway, so it was known at the time it started, but maybe not at the time that they designed it a month earlier.
The dose ultimately, from what we understand of the demographics of the trial, half of the patients were high BMI, over 30, and half were lower BMI.
So, you can sort of try to extrapolate the weights and stuff like that.
My sense is that ultimately they gave about 250 mcg per kg in a single dose, if we just average it out over all the patients.
But under-dosing the highest-BMI patients, who are at most risk from COVID.
So, even that is not stating it too... If we, I don't know, if we balance it all out, probably what you said, half, you know, an effective 200 mcg per kg is what would be the equivalent that you would get, with a lot of extrapolation and asterisks.
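A quick back-of-the-envelope sketch of how a 60 kg dosing cap dilutes the per-kilogram dose, as described above. The cap behavior follows the conversation; the specific body weights are just illustrative.

```python
def effective_mcg_per_kg(weight_kg, nominal_mcg_per_kg=400, cap_kg=60):
    """Per-kilogram dose actually received when total dose is capped at a 60 kg body weight."""
    total_mcg = nominal_mcg_per_kg * min(weight_kg, cap_kg)
    return total_mcg / weight_kg

for w in (60, 80, 100, 120):  # illustrative weights in kg, at or above the cap
    print(f"{w:>3} kg -> {effective_mcg_per_kg(w):.0f} mcg/kg")
# 60 kg receives the headline 400 mcg/kg; 120 kg receives only 200 mcg/kg.
```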
Okay, so I want to highlight something else, and I can see already how much trouble there is, but in some sense this is, I think, what the audience is going to need to understand: the reason that a randomized controlled trial might be an excellent kind of evidence is because it very powerfully takes everything that could possibly affect a trial that is not the treatment and neutralizes it by getting it equally into the control group and into the treatment group.
The problem is when you have a protocol that is so baroque that, and we have watched this, I don't know how many people have been looking at this trial to try to figure out what happened, but we have at least ten, maybe dozens of people spending many, many hours looking at the methods and just trying to reconstruct what actually took place.
That should never be.
It is vital.
The value of something like a randomized control trial is completely dependent on the fact that we can scrutinize the method and say that if this is what was actually done then we know what these results mean.
If we can't figure out what was exactly done, if it's basically like a criminal investigation trying to sort out who did what when, what was the dosage that a person would have experienced, right?
If we have that level of complexity, the thing is effectively invalid on arrival, right?
You can't have something so baroque that you don't understand the experiment because it is vital that we be able to scrutinize it.
It has to be able to be pressure tested.
And if we don't know what happened, then we have to speculate.
Yeah, and if I may, the misunderstanding that many of us had early on about the placebo group, right, having different sizes and things, and some people said online that, well, you just don't understand adaptive trials and, you know, clearly you just are showing your lack of understanding.
That particular feature that we all got caught up on, it was an innovative feature of this particular trial.
Right.
This was not even common among adaptive trials.
This was specific to this trial.
So, you know, it's a form that's evolving.
There was a lot of complexity in figuring out what was going on.
And I think anybody who says that you should be able to just like, you know, just read the paper and understand.
I mean, I think we'll make it very, very clear as we go through the work.
But they're fooling themselves.
You're talking about the issue where many of us, including me, thought that there was evident in the paper a number of people who went missing from the placebo group, who dropped out, and it turned out that was not the case.
Right.
That was an error in interpretation, but the fact that many people made the same error suggests that that was at least an intuitive understanding of the paper as presented, and so this was a natural misunderstanding, right?
That one has an innocent explanation.
Sure.
Not all of these things do, though.
I made the same mistake, and then as soon as I figured it out, I was going around sort of trying to explain this to everybody so we don't keep saying things that didn't have validity, because again, I do believe there's actual issues that have real validity.
Right, and so anyway, as a general matter, let's just say, let's give them their due.
This is a very complex endeavor for many reasons.
It would be impossible to do it without making mistakes.
What one hopes is that the design is robust enough that the mistakes are not consequential.
But some of the things that people have spotted in this trial turn out to have innocent explanations or not be very important.
Others do not appear to have innocent explanations, and they do appear to be important.
Could be that the people who ran the trial didn't spot the errors and that they would become aware of them as people scrutinized their paper.
The editors should have caught it, the peer reviewers should have caught it, but that, you know, it's not so unusual that something emerges and then issues crop up.
But at some point, the fact that the authors are not forthcoming with data that would allow us to figure out for sure, that they don't answer questions that allow us to nail down exactly what did take place in their methodology, and that they are cagey or inconsistent in responding to questions about what they did.
All of those things suggest something else, and we'll get there.
So anyway, there are innocent problems.
There are other problems that look not so innocent.
Let's get to them.
So, as we said in the beginning, they started with metformin, low-dose ivermectin.
That's what they call the arm whenever they mention it.
Fluvoxamine was the other arm that was on, and of course, placebo.
So, they were running a 1-1-1-1 randomization scheme.
So basically, it was not exactly like this, but just abstractly, you can think as the patients were coming into the various trial centers, they were evenly allocating them to the four arms we mentioned.
So evenly allocating in a way that would have been blind to their characteristics.
They should be randomly allocated to the multiple arms of the trial, including the placebo group.
And the result of that should be that any sort of pattern in the way people showed up should be disrupted by an algorithm that isn't scrutinizing them in any way, is just simply assigning them to one treatment or another or to the placebo group.
So that should destroy spurious correlations that would be in the clustering of people somehow.
Yep.
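To make the 1:1:1:1 idea concrete, here is a minimal sketch of permuted-block randomization across three active arms plus a shared placebo arm. This is a generic illustration of the scheme being described, not the trial's actual allocation code.

```python
import random

ARMS = ["metformin", "low-dose ivermectin", "fluvoxamine", "placebo"]

def block_randomize(n_patients, block_size=8, seed=42):
    """Assign patients in permuted blocks so each arm appears equally often within every block."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_patients:
        block = ARMS * (block_size // len(ARMS))  # each arm appears block_size/4 times
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_patients]

allocation = block_randomize(100)
print({arm: allocation.count(arm) for arm in ARMS})
# With a shared, concurrently recruited placebo arm, each drug is compared against
# the same controls rather than against a separate control group per drug.
```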
So, here's where things start to go kind of interesting.
So, on the 15th of February, this is less than a month into the trial, there's a protocol that appears, a new version of the protocol of the trial.
It's dated February 15th and it's submitted to the Brazilian authorities on February 19th for approval, for ethics approval, to change how the trial works.
This is how you do it, right?
That's a good practice.
You change your protocol and then you send it to the authorities of the country you're in to make sure that you're not doing something they don't consider to be correct, you know, giving too much, giving too little, whatever it is; they need to have a say.
Here's where things become interesting.
So that protocol was recommending stopping the ivermectin arm because the dose was so low, as we said, as to make the results not interesting, essentially, right?
By that time, there was quite a bit of uproar.
They mentioned advocacy groups that reached out to them.
For whatever reason, they said, look, we're going to stop the ivermectin arm, the low-dose one, and we're going to start it again with a higher dose.
Good.
The problem is that they continued recruiting patients into the low-dose arm after they had submitted it to the Brazilian authorities.
So, you now have a situation where you are recruiting people into an arm of a trial where you probably don't believe the dose is going to be effective, right?
So why are you doing that?
But even if you did, let's say you say, no, no, I think it's effective, but others don't, whatever.
Well, you're going to throw away the data.
You're going to stop the trial.
It's going to be thrown away.
So why are these people being put into a trial that you don't think is going to amount to anything?
And indeed, we have not seen that data.
The data of the arm that was cut short, we have no sense of, you know, what those numbers were.
So, I find it particularly concerning because, you know, somebody who's a sick, high-risk patient enters a trial, and you want to think that you're doing the best for them according to your state of knowledge at the time, and I can't understand how the researchers put those two stories together.
So, let us say this is a case where something very odd happened, right?
Whether that odd thing is consequential is not obvious, right?
It is perfectly possible to continue to run an arm of a study that you know isn't useful, and to never report that data, and it's an absurd thing to do, but it's not necessarily destructive of the conclusion that you ultimately do report based on other patients.
On the other hand, One can certainly think of reasons that you might continue to recruit or to continue to treat and I think recruit into that arm of the trial even though you didn't intend to report the data, right?
It's not easy to think of reasons, but it's not too difficult either, and we will get to some possible reasons. But ultimately I think the answer is going to have to be we don't know, and we shouldn't have to speculate.
Yeah, so just to give a sense: before February 15th, which is the date on the new protocol, there were about 19 patients recruited into the low-dose ivermectin arm; after that date, there were 59 patients.
So the vast majority of the patients were recruited after they had already determined to change the protocol.
Who knows when they started talking about changing the protocol.
The 15th of February is the date of the protocol.
It's on the protocol itself.
Anyway, so that happens.
And then in March, on what I estimate is March 4th...
And this is by, by the way, what am I using?
Where are these numbers coming from that I'm talking about?
These are coming from materials released by the investigators themselves on August 6th, literally one day after the trial was completed.
They came out with the results, but you mentioned that it came out in the summer.
So they showed us a graph, which is kind of the second visual in the article, which kind of showed per week how many patients were recruited into each arm.
So that gives us a lot of information, not only about the totals.
This graph, by the way, agrees with all the publications they've put out.
It's not sort of inconsistent.
It's not some sort of throwaway.
The data checks out with everything they've published in their papers.
So we know quite a bit about what the pace of enrollment was in each arm.
So around March 4th, based on that data, they stopped the low-dose ivermectin arm.
There's not an obvious reason why that was the date.
And then around March 15th, the Brazilian authorities approved the new protocol.
In their paper in the New England Journal of Medicine, they say that approval came on the 21st of March.
I'm not sure why.
And on March 23rd, they start enrolling patients in the new arm.
The protocol is declared to clinicaltrials.gov, which is kind of where you go to make your, you know, all serious trials sort of go there to put up their protocols to make sure that they have an assurance for everybody that that protocol was their protocol at the time, right?
The government is sort of reassuring everybody that they're not going to let you change that after the fact.
So, we know March 21st what their protocol was, and by the 23rd, they started enrolling patients into what essentially seemed to be like the rebooted ivermectin trial, right?
Which started with a low dose; we cut it short at 77 patients, we had a bit of a break, and now we're starting over with a higher-dose ivermectin arm, while the other arms are ongoing.
The fluvoxamine arm from January is ongoing, the metformin arm from January is ongoing, and the placebo arm is ongoing.
Yeah, so far so good.
I will say that you have put together an easy-to-understand timeline that has these things as different strata, so you can see where the low-dose arm stops, the gap in time, you can see the higher dose booted up, you can see where we think various approvals and things happened.
Anyway, it's all visually presented here. And let's put it this way.
There's no reason that them pointlessly continuing to recruit people into the low-dose arm, there's nothing fatal about that.
There's nothing fatal about them discovering that their protocol is wrong and starting from scratch, as long as they say, here's the period of time in which we delivered this dosage, etc.
The question is, is this about something?
Is there structure to what took place in here that actually skews our understanding of what they report?
Or is this just the difficulty of running a complex trial and, you know, discovery along the way of what you should have put in motion to begin with?
And anyway, we'll see that.
But I would advise people who are following along to look at your timeline to understand what's taking place.
So, March 23rd is really the date where things start to get really strange.
So, for the third diagram in my article, I've focused on a 10-week period between the week of February 22nd and the week starting April 26th.
In that 10-week period...
We have two weeks where allocation is sort of normal.
It's the end of the low-dose ivermectin arm.
We have two weeks where there's no allocation to low-dose ivermectin because that arm has been stopped, which is normal.
And then we have the first two weeks of the high-dose ivermectin arm.
These are the two weeks where things start to look really strange.
So, when you say allocation, you're talking about allocation of new recruits to this trial.
These are patients.
And these are people who have COVID.
Correct.
So, these are Brazilians who have COVID.
Antigen positive tests.
So, it's not a PCR.
And really, this is actually maybe interesting to mention here.
This is done in Brazil, right?
During, you know, the early 2021 period, everybody can sort of remember here, but there, that's exactly when the gamma wave, the gamma variant wave, is sort of at its peak.
And the pictures that are being sort of transmitted, even in the paper, you can read in some of the appendixes about what was going on, is like apocalyptic.
You kind of get the picture of early-pandemic Italy.
That's my picture in my head.
People being hospitalized in corridors, facilities being overrun, people needing to set up temporary facilities to work with the volume of patients, that sort of thing.
So, that's what's happening in the surrounding world around this trial.
I think this is vitally important and people who are not experienced thinking about scientific experimental design are not necessarily going to intuit this, but the experimenters can do nothing about the fact that in order to do their trial, they are interfacing with a fluctuating pattern, right?
We are talking about COVID.
It's not consistent over time how many people have it, which variant they have, how sick they are getting, how people feel about it, which matters because of course that's part of why we have the placebo group is that people's perceptions play into these things.
So, in effect, what we have is a trial that is recruiting sick people, sick people who have vulnerabilities, comorbidities, right?
And they are being recruited into a trial.
It's voluntary, right?
And so the fact of them showing up is likely the result of the fact that they want... you know, imagine yourself facing the question of, should I participate in a scientific trial of a drug that might be effective against COVID?
What motivates a person to do that?
Well, one of the things that motivates a person to do that is: I'm sick and I'm frightened, and if I enter this trial, there's a 50% chance that I'm going to get a treatment that somebody thinks might be effective, right?
Actually 75%, since it's adaptive with multiple treatment arms, so even higher chances.
So, in any case, you have people volunteering to enter the trial, hoping to get a drug that will treat a disease that they have and are concerned is a danger to them.
And the degree to which they are afraid, the degree to which they are sick and what they are sick with is changing over the course of this trial, which the experimenters can do nothing about.
The background is going to fluctuate.
What they can do is they can make a protocol that neutralizes that effect, right?
Now, how do they do?
So, this is where things become strange, and I can find no explanation as to what happened on that week that starts March 22nd.
On March 23rd, the second day, they start recruiting into the high-dose ivermectin arm.
Now, that week, because all four arms that we described are running, high-dose ivermectin, metformin, fluvoxamine, and placebo, we would expect roughly 25% allocation to each arm.
Instead, what we have, and it doesn't have to be exactly 25%, just to be clear, like there can be some fluctuation because there's block randomization, stratifications, there are reasons why it might vary, but that week we see 57% of all patients that come in allocated to the high-dose ivermectin arm.
This is at the peak of the gamma variant wave.
In the first week of the new sort of arm that's started.
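For a sense of how surprising a 57% share is when each of four arms should receive about 25%, here is a minimal sketch computing the exact binomial tail probability. The weekly recruitment totals are hypothetical (the conversation doesn't give them), and the simple per-patient model ignores the block and stratification structure mentioned above, so this is only an order-of-magnitude illustration.

```python
import math

def prob_share_at_least(n, share, p=0.25):
    """P(a given arm receives at least `share` of n patients) when its true allocation rate is p."""
    k_min = math.ceil(share * n)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

for weekly_n in (20, 50, 100):  # hypothetical weekly recruitment totals
    print(f"n = {weekly_n}: P(>= 57% to one arm) ≈ {prob_share_at_least(weekly_n, 0.57):.1e}")
# The probability shrinks rapidly as the weekly recruitment count grows.
```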
So we have to unpack here a little bit.
There are two kinds of error.
Two kinds of error that can creep into an experiment.
The randomized controlled trial is designed to neutralize the noise.
However, to the extent that the neutralization mechanism is not effective in actually randomizing, then the point is it actually introduces structure, right?
So what you're getting at, which is I think going to be quite subtle for people, is that the moment in time in which people enter the study is predictive, in ways that we know and ways that we don't know, of what their prognosis will be.
And so to the extent that the randomization by the algorithm is structured with respect to when it allocates people to what, right?
You are going to get a non-random effect, right?
That is to say, if you've got a clump of people who come in the door at the same time, then they may be sick with something less severe, they may be sick with something more severe.
As long as they're equally distributed through the various arms, it doesn't matter.
But if they're lumped, then it matters in a critical way.
Sure, sure.
And again, keep in mind that also the facilities are overrun, which means that people might be getting less good care.
Right.
Ah, so this is another, I hadn't thought of this one, but this is another subtle effect, which is that as you get a wave of COVID, even if they were sick with the identical thing, you obviously have limited medical resources.
And so the more people who are sick at one time, the less good care they would get, the worse their prognosis would tend to be.
Right?
So, the point is, time plays a critical role here, and what one needs is a bomb-proof method for neutralizing the structure in time.
And, anyway, I know you're going to get deeper into what happens.
Yeah, this is what's called confounding by time, technically, right?
So, unless your patients are, you know, roughly in the same sort of time frame, then you are using a different... this offset itself can introduce the bias, is what you're saying.
Yep, absolutely.
It can introduce a critical bias and frankly if the bias... It may sound to people as if random error is worse than systematic error.
Random error sounds scary, but random error is not a terrible problem in science.
It can be swamped out by simply increasing the size of a study.
Systematic error is the dangerous thing, and what we are talking about here are systematic errors.
It's the very reason we do a randomized controlled trial, right?
What you were saying before, that this is at the root of the motivation of doing it this way.
If we're not doing it this way, then we're not doing it this way, basically.
All the guarantees that you're getting are sort of starting to exit the stage.
Right, and so it is also the case that we who have talked about the effectiveness of Ivermectin, who have discussed it.
We have been faced with the claim that any evidence that doesn't come through a large-scale randomized controlled trial is unimportant and we must await the large-scale randomized controlled trial because of its ability to neutralize these kinds of effects.
We then have the conclusion from this trial presented more than half a year before we are able to look at the methods.
The largest randomized controlled trial of ivermectin to date discovers, uh, that there is no, uh, effect.
Now we will talk about whether that's actually what it discovered, but nonetheless, that's what we were told.
And then there was a more than half a year delay in our ability to figure out what was in this trial.
And then on looking at what was in the trial, the methods and, and, and what was done, the point is, well, it wasn't a randomized controlled trial because the attempt to randomize was structured in a way that was ineffective relative to the background fluctuation.
So, you know... it doesn't matter, by the way, if it was done on purpose or by accident.
We're going to talk about maybe some explanations about what could have happened, though the explanations are separate from what did happen and what that means.
But it doesn't have to be on purpose.
We're not necessarily saying that this particular blip was somebody putting their thumb on the scale or something.
We're not saying it wasn't, but there could be reasonable explanations.
What we're saying, though, is that the effect on the results is orthogonal to, completely separate from, the motivation question.
Well, the point is, it looks like a randomized controlled trial.
The intent was apparently to run a randomized controlled trial, but if the randomization fails, then you can't say, well, this is the gold standard because it was a randomized controlled trial.
So, effectively, the point is, what is this trial evidence of and how should it be integrated with the other evidence we have?
Is the question, and the label on the box saying it was a randomized controlled trial means nothing if the randomization wasn't effective.
So just to continue with the data we're talking about, the first week it's 57% on high-dose ivermectin.
Second week, it's 41% high-dose ivermectin, right?
These are the allocations.
By the way, keep in mind that this means that the other arms are also being under-allocated, right?
So, if you've got 57%, you know, roughly 60% going to one arm, you'll have roughly 15% going to each of the others.
You'll have 40 divided by 3, whatever, like 13%, 12%, going on average to the other arms, right?
So the disparity is huge, like, you know, 57 versus 13, something like that, or 14% say, yeah, something like that.
So I guess the way for people to think about this is to imagine, just as a thought experiment, that COVID was a perfectly uniform phenomenon in the background, that every week was like every other week with respect to how many people were sick and what they were sick with.
If there was no variation in the background COVID phenomenon, then the allocation to one arm in a particular week versus the other arms wouldn't end up having a consequential effect in the data.
It still renders the trial not a randomized controlled trial, but if the background wasn't fluctuating, maybe it would have no impact on the conclusion.
On the other hand, if one imagines that some weeks are terrible COVID weeks and people are sicker because the doctors are stretched too thin or the doctors themselves are sick, or whatever the effect, or there's a change in the variant that's circulating in the place where the trial is being done and people are sicker with a worse disease.
If that structure exists where some weeks are worse than others, then you can imagine if the people who were sicker got disproportionately loaded into the high dose ivermectin arm, then it would make ivermectin look less effective.
Because the point is the outcome that's being measured, which in this case is hospitalization, people would be hospitalized for reasons that weren't about the treatment they were getting.
It's because that was a sicker week.
Yep.
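Here is a minimal simulation sketch of the confounding-by-time mechanism just described: a drug with a genuine protective effect looks much weaker if its arm is over-allocated during the weeks when baseline risk is highest. All of the numbers (weekly risks, allocation shares, the true risk ratio) are invented for illustration.

```python
import random

def observed_risk_ratio(weekly_risk, drug_share_by_week, true_rr=0.7,
                        n_per_week=2000, seed=1):
    """Simulate a two-arm comparison with week-varying baseline hospitalization risk.
    drug_share_by_week[w] is the fraction of that week's patients sent to the drug arm."""
    rng = random.Random(seed)
    counts = {"drug": [0, 0], "placebo": [0, 0]}  # [patients, events] per arm
    for base_risk, drug_share in zip(weekly_risk, drug_share_by_week):
        for _ in range(n_per_week):
            arm = "drug" if rng.random() < drug_share else "placebo"
            risk = base_risk * (true_rr if arm == "drug" else 1.0)
            counts[arm][0] += 1
            counts[arm][1] += rng.random() < risk
    (dn, de), (pn, pe) = counts["drug"], counts["placebo"]
    return (de / dn) / (pe / pn)

weekly_risk = [0.10, 0.10, 0.25, 0.25, 0.10, 0.10]   # a two-week severity spike
balanced = [0.5] * 6                                  # even allocation every week
skewed   = [0.5, 0.5, 0.8, 0.8, 0.5, 0.5]             # drug arm over-allocated in the bad weeks
print("true RR 0.7, balanced allocation:", round(observed_risk_ratio(weekly_risk, balanced), 2))
print("true RR 0.7, time-skewed allocation:", round(observed_risk_ratio(weekly_risk, skewed), 2))
# The skewed run recovers an apparent risk ratio much closer to 1, despite the identical
# true effect. Exact values vary a little with the seed.
```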
So, yeah.
So, moving on. What is it, four or five weeks later, the allocation balances out, right?
And it starts to operate in a fairly predictable way.
So, if anybody's following along, you see the next diagram with the curve.
You can see that after, was it like April 4th, I believe?
Maybe April 8th.
The allocation starts being smooth.
All the arms are growing at the same rate, which means that they're kind of uniform.
Whatever was going on, the allocation now has been restored and it continues being sort of regular.
So, we're left with this strange sort of four-ish weeks at the beginning of the high-dose ivermectin arm, where we had more patients allocated.
Just even arithmetically, we had more patients allocated to the high-dose ivermectin arm, and then we don't have equivalently more patients being allocated to the placebo arm over the same period of time.
So, in their paper that they published, the authors swear up and down that all the patients that we're looking at are after March 23, both for placebo and for treatment.
As clear as it gets.
But when we look at the numbers for the other arms in their various publications, for fluvoxamine, for metformin, and we assemble the results, at the end of the ivermectin trial in August we have 679 patients for ivermectin.
Okay, that's close to what they had said, even though there was some ambiguity, but let's grant that.
But we have, I believe, 604 patients on the placebo arm that could be compared against ivermectin.
Yet, what they report is 679 placebo patients. So, there's this weird 75, you know, the ghosts of Belo Horizonte, the city where most of the centers were.
Seventy-five patients, we don't know where they came from.
Now, there is one theory, one hypothesis that fits everything we know, which is that, remember the two-week gap we were talking about before the start of the high dose?
Well, that gap has exactly 75 patients.
Extra in the placebo arm, right?
Yep.
So, fill that in.
What does that imply?
So, the hypothesis, just to spell it out, is that those 75 patients were used.
So, the placebo arm was recruited over a longer period of time than the ivermectin arm, right?
It had two extra weeks at the beginning.
Now those two extra weeks at the beginning, when we look at the case fatality rate from the area, had a lower case fatality rate.
So it was right before the kind of burst of the gamma wave.
Gamma wave?
That sounds like, anyway, gamma variant wave.
There was no radioactive radiation to discern anyway.
Maybe I should look into it.
But so we have a lower mortality, actually, in that previous period right before.
There's another problem, and this problem I just can't... Wait, wait, wait.
I don't want you to go on to your next problem yet because, again, this is too subtle.
Okay, let's make it more...
If the hypothesis is correct that they used placebo patients recruited from an earlier period than the high-dose ivermectin patients to whom they were being compared, then that is a failure of control, right?
So, we had a failure of randomization, and now we have a failure of control.
And the reason that that's a failure of control, and in this case, there are lots of ways this could go, but because the weeks in question were less severe COVID weeks, it makes the placebo group look healthier.
So, if you took the high-dose ivermectin patients from a week when things were particularly bad, they would tend to be sicker.
And if you take the placebo patients to whom they are being compared from a relatively less sick week, then that will tend to make the drug look less effective, whatever its actual effect is.
So, that, if it is true, is the second utterly fatal failure here.
Now, I want to point to a third failure.
The obvious thing is you say, well, here's a hypothesis about where these patients came from.
You researchers have published this paper.
If those placebo controls were not from this period where it seems likely they came from, then you researchers have the most incentive of anybody to simply reveal what the discrepancy in the data is about, and they should be eager to do it.
And to the extent that they are not being forthcoming about showing their data, that is odd.
Right?
You have an anomaly.
It's not unusual that a paper is ambiguous in terms of what it presents, but the right remedy, the normal scientific remedy, is for a request to be made to see the data and for that data to be provided so that it's clear that there's nothing being hidden.
Right, right.
And in fact, as per the fluvoxamine paper that was published in, I believe, October, in the Lancet, if you look at the data sharing statement, they say, you know, upon publication of this manuscript, we will make the data available through some procedure that, you know, I have more questions about, but the long story short is that that didn't happen.
There is no access to the data as of October.
They said the same in the publication now in the New England Journal of Medicine.
Still not the case.
Now, in an email, the principal investigator said, we're busy because we're submitting an EUA for this other drug that succeeded, and we'll do it.
But that will go to some body that you have to submit a research proposal, and that body, in coordination with the investigators, will review and approve.
This was not what was promised originally.
They said, upon request, upon termination.
Right?
When we're done, ask for data.
You know, if you look legit, we'll give you the data.
That's not what's happening, and actually, none of those two things are happening.
It's not upon termination, right?
So, we're still waiting, and the process is looking, starting to acquire hoops to jump through, which, look, we are in a situation.
Look, I hate to use this metaphor.
Maybe it's unfair, although maybe it isn't.
You know, you say, well, uh, this thing that you've published does not add up as it is.
Can I see the data?
And they say, do you have a warrant?
Right?
I mean...
The question is why are they not providing the data?
They should be eager to do it because the fact is it would become obvious if the mistakes are trivial, it would become obvious.
If there are mistakes that are not trivial but they are made in good faith, that's not fatal.
But what is fatal is things that cannot be reconciled in what they published and a refusal to clarify what generated those anomalies.
Yeah.
Yeah, and so...
And it should be noted, right, that this is happening in a period of time and in a specific sort of molecule that's being investigated that has attracted a lot of scrutiny.
And to the degree that the studies are being scrutinized and bad science is being found out, I absolutely have no objection personally.
We all learned a lot about what to trust and what not to trust.
We've gotten more skeptical as a result.
I, you know, I subscribe to that program.
I do not subscribe to the claims that were made on the basis of those findings, that the entire research of that field is now a fraud or, you know, these kinds of things.
Because it actually, when I looked at the baseline, like if I get 100 randomized trials in anything, how many randomized trials do I expect to be garbage, right?
Junk, sloppy, fraud, whatever?
About 20%.
Which is shocking until you realize.
Well, it's crazy if you imagine, which most people do, that the purpose of science is to figure out what's taking place.
But what people don't understand is that there is a substantial and hidden landscape of perverse incentives, right?
Academics are competing with each other for grant money, they're competing for jobs, and the ability to publish something somewhere is part of the key to getting ahead.
So why are 20% of the studies, you know, dead on arrival in terms of the quality of the methods or what was actually done?
Well, it has a lot to do with the fact that you get rewarded for publishing something.
You don't necessarily get rewarded for having done the job well, which slows it down and means you publish fewer things.
In fact, you get punished for it.
So anyway, we did find out that there's a shocking level of garbage in science.
But you're right.
The fact is that even a fraudulent trial doesn't say anything about the trials that weren't fraudulent.
These are independent pieces of evidence.
And this is part of why some of us have pointed to meta-analysis as a better standard, because unlike the randomized controlled trial, it's not easily gamed.
And at the point that you find out a study in your meta-analysis isn't valid, either because it was methodologically flawed or fraudulent, you can simply click it off and see how it affects the analysis.
Right.
It's a robust method.
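As an illustration of the "click it off" robustness described here, this is a minimal sketch of a leave-one-out sensitivity check on a fixed-effect (inverse-variance) meta-analysis. The effect sizes and standard errors are invented, including one deliberately implausible "study" standing in for a fraudulent or broken trial.

```python
# Each entry: (log risk ratio, standard error). All values invented for illustration.
studies = {
    "study A": (-0.35, 0.20),
    "study B": (-0.20, 0.25),
    "study C": (-0.10, 0.30),
    "study D (suspect)": (-1.50, 0.15),  # implausibly large effect with a tiny standard error
}

def pooled_log_rr(included):
    """Fixed-effect inverse-variance pooled log risk ratio."""
    weights = {name: 1 / se**2 for name, (_, se) in included.items()}
    total = sum(weights.values())
    return sum(weights[name] * est for name, (est, _) in included.items()) / total

print("all studies:", round(pooled_log_rr(studies), 2))
for name in studies:
    rest = {k: v for k, v in studies.items() if k != name}
    print(f"without {name}:", round(pooled_log_rr(rest), 2))
# Dropping the suspect study moves the pooled estimate far more than dropping any other,
# which is exactly the kind of sensitivity a leave-one-out check exposes.
```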
So anyway, there are many concerns here, but I would say if I were an editor who had published this and I watched the authors refuse to clarify significant issues that were raised by critics who are looking at the paper and asking obvious questions, that would set off alarm bells.
Yep.
So, remember we talked about the 75 ghost patients.
So, there's another difference between these patients.
And this is, I find it quite humorous for reasons I'll mention later, but it appears that the protocol that changed on March 21st, right, made a change from the original protocol that was in force in early January when the previous batch of patients, including the 75, were recruited.
The original protocol cited SARS-CoV-2 vaccination as a risk-enhancing factor, for whatever reason, which means that if you had the other basic criteria met, and this is kind of the big one, you got included in the trial.
So if you were vaccinated in that period of early January until early March, if you were vaccinated and you showed up at one of the trial centers in that period of time, that was a reason to add you to the trial.
Maybe they wanted data for that.
I can even come up with actually a reasonable explanation, which is that maybe in Brazil at that time, they were only vaccinating the high-risk patients.
So, they're like, this is kind of a proxy for high-risk.
I don't know.
But they would add you to the trial.
After March 21st, vaccination switched from being an inclusion reason to an exclusion reason.
So if you were vaccinated, they say, sorry, we can't add you to our trial.
This is a pretty big deal: some of the previous patients are vaccinated.
All of the later patients are not vaccinated.
So, if the issue of placebos and treatments for ivermectin being disjunct in time is what it appears to be, right, then the change in protocol from an inclusion criterion to an exclusion criterion is absolutely fatal.
Now, I know that the authors have claimed it's just a typo.
Am I correct?
It's not the authors.
It's not the authors.
Absolutely.
This is somebody.
Somebody on the internet.
Somebody on the internet.
Some person on the internet who is ever present when a certain molecule is discussed favorably.
Oh, well, okay.
That's the best critique we've seen.
I mean, it's so implausible, basically, that what he ended up saying is that this is so crazy that it could only be a typo.
There's no reason to believe it actually happened.
Which of course makes, I mean, given that it's not the author who has said it, we shouldn't dwell here, but it makes no sense as a typo.
First of all, it's hard to specify what the typo would be in any case.
So, what we've got then is an incredibly important anomaly.
Now, I will point out, you could make an argument for vaccination as an exclusion criterion.
You could make an argument for it as an inclusion criterion.
What is absolutely vital is that it does not change in a way that separates the placebo group from the treatment group.
Right?
And we appear to have the possibility here, and you can, you know, for those trying to follow how this would have worked...
To the extent that the vaccines have a positive impact on one's trajectory with COVID, if you have a placebo group in which people are included because they are vaccinated, then they will do better, and that will tend to make the treatment look less effective.
So another thing that people should be tracking is, OK, it's a complex trial.
There are lots of nuances.
The people running the trial are discovering errors that they make in real time and trying to correct them.
But when all of the errors go in one direction, a direction that would tend to make this particular treatment look ineffective, even if it was effective, that raises questions in and of itself.
Right.
It's possible to have errors that go in both directions.
You would expect them to be equally likely.
But when all of the errors go in one direction, that again sets or should send up red flags.
Yep.
And, you know, this is basically what we just described as kind of the heart of the issue.
And it's kind of one of, you know, it's a cluster of errors that happen around this one thing.
I have a hypothesis, I don't know if it's worth going into, about what might have happened, which is that there's a curious pattern in how the allocation happens: the algorithm that's allocating seems to, after early April, balance against what the fluvoxamine arm has. So, you know, on a certain date, there's a certain number of patients on the fluvoxamine arm.
There's a certain number of patients on the placebo arm.
And there's a certain number of patients on the low-dose plus high-dose ivermectin arm.
And those numbers are within, you know, one or two or three of each other, right?
They're very, very close and they're kept close for the whole time.
Why would the total number of the low-dose and high-dose ivermectin patients match the number of fluvoxamine patients and placebo patients at the same time?
Well, my hypothesis is that, basically, when they stopped the low-dose ivermectin arm, or they halted it, or however they told the algorithm, don't send patients there anymore, and when they brought it back, rather, instead of saying, delete it, forget about this, or stop it, and then start a new one, they said, start sending patients to ivermectin again.
And what that did is, when the algorithm sees the chunks it's made along the way, it's like, wait, I've made little groups before here that are short a patient now, because the ivermectin arm I was supposed to be sending patients to, I wasn't doing that for a couple of weeks.
So, I've got to go back and backfill basically those little blocks I had made of patients to make it all add up.
And you would do that, right?
When you're coding one of these algorithms, because there's fluctuation, because there's randomness, you code it to recover.
You code it to sort of see differences between where it should be and where it actually is, and take actions to rebalance because that could be happening for all sorts of reasons.
This is a massive change, right?
There's a massive difference that it should not have been coded to recover from, but I can imagine algorithmically why you would code it to have some self-healing properties, basically.
Right, and self-healing properties are one thing if it is coded not to imbalance between the placebo and the treatment.
Right?
In other words, rebalancing the arms as long as you don't unbalance the placebo versus the treatment could be done in a way that would not be consequential.
But if you code your algorithm so that it just tries to get back to the state it was targeted to deliver, and that unbalances you between the placebo and the treatment, then it's again another problem.
The algorithm doesn't understand.
Think of a person in a black box literally handing letters to people: you are group A, you are group B. It doesn't know about protocols, doesn't know about variants, doesn't know about... it seems it might not even have known that there was a material difference in the trial before and after.
So it's just like how we model computers.
They are a very efficient person who is not very smart.
They just do what they're told.
I'm not saying that what it did was fine.
What I'm saying is what it did was as instructed.
Yep.
Yeah, it's how.
Yeah, exactly.
So, anyway, that could have happened or something else that I don't know about.
My field of expertise, for people that might not know, is in computing.
So, I can imagine how I might have done a coding error like that.
To the extent that an error was made, like they programmed an algorithm in a way that seemed reasonable but turned out to have consequences, the only acceptable thing for them to do is to acknowledge that this has implications, to share, frankly, the algorithm, and share the data that would allow us to look at the consequences so that we could adjust our understanding of their conclusion.
Exactly.
Yeah.
So, this is where, you know, I run out of good faith explanations because all of this could have happened for some reason that was not necessarily nefarious.
It's kind of a terrible coincidence.
It's the middle of a pandemic, chaos, whatever.
But from that point, at some point, it became apparent.
Right?
This is visible.
If I can see it, they can see it.
I don't believe they're that incompetent.
They must have chosen at some point to sort of obscure it.
To say, like, look, it's not a big deal.
We'll just publish anyway.
Well, I would also, you know... Or, you know, assuming the best here.
Well, I think you have to start out assuming the best, right?
It's not even necessarily justified, but it is the only practical starting point that does not result in chaos.
But the problem is...
It is highly abnormal and scientifically invalid to share the conclusion months ahead of time, to broadcast it at the highest levels.
And then, upon the release, it's not even that they broadcast the conclusions and the study was good, but we couldn't scrutinize it for more than six months.
They broadcast conclusions that were utterly black and white.
In fact, black and white far beyond what's actually written in the paper, if you read it carefully, right?
The title does not match what is in the paper.
The fact of that mismatch and the lack of responsiveness to the degree to which the conclusions, these black and white conclusions that are not mirrored by what they actually found, the degree of mismatch and the lack of forthcomingness amounts to something, right?
First of all, it is, as far as the philosophy of science goes, absolutely fatal to the evidentiary nature of what they did, right?
Not clarifying the degree to which their conclusions are overstated because their method doesn't reflect a proper randomization or control.
We could take it as some kind of evidence if we could correct for it, but if they're not going to give us the information to allow us to correct for it, that is, scientifically speaking, the end of the evidentiary value here.
Yep.
And by the way, there's another conclusion in here that I resisted early on and do not like, but I must highlight it.
So, for the fluvoxamine arm, right, first of all, the planned size was 681 patients.
They go through the calculations of how they made that specific number.
It sounds like, okay, why 681?
But it comes out of a mathematical calculation that they had done early.
I remember the ivermectin arm was indeed cut at 679.
It's just two short.
That's fine.
But the fluvoxamine arm was allowed to actually extend to 742 in the original, actually, and 741 in the published paper.
I don't know what happened to the extra guy.
And then 756 placebo patients, which is, you know, I think a 9% overrun in patients.
And you might say, like, look, you know, they can't keep an eye on this all the time.
It's going to overrun.
Fine, but then the ivermectin arm was cut exactly short.
So, exactly in time.
So, they seem to have the capability to do it for one thing, but not for another.
I'm starting to get a little bit curious here.
And the anomaly that we're talking about, right, when we're saying that they over-allocated these patients who were potentially more diseased or more likely to have a negative outcome to ivermectin.
That also means they under-allocated them to fluvoxamine.
It makes me doubt the fluvoxamine result as well.
They over-allocated patients to fluvoxamine and under-allocated sickness, right?
Yeah.
I mean, this is clear.
And also, we're allocating in the before times as well, so they don't have the imbalance between the previous protocols.
We don't know anything about the intent.
Obviously, it's a multi-authored paper.
There may not even, you know, to the extent that there are flaws here, they may owe to one person's contribution or another.
But the point is, the weird structuring, right, where fluvoxamine gets an over-allocation of people... If one were going to engineer a trial to reach an artificially negative conclusion for a drug like ivermectin, one way to do it, if the trial is unblinded, is to shunt people into another arm, right?
In other words, if you see people who are disproportionately likely to survive or to not need hospitalization, and they show up in another arm — then the point is, you know, a knowing experimenter can rig a trial in this way.
I'm not saying anybody rigged this trial, but the point is the more anomalies like this you have that allow somebody to engineer a desired result, the more we have to ask the question if that's what happened.
And unfortunately in this environment we all know That there was tremendous pressure, right?
Those of us who talked about ivermectin got tremendous blowback for even mentioning the possibility that it worked.
And so what we have is an incentive to find that result, and we have anomalies in this trial that would allow an engineering of that result — anomalies that are otherwise difficult to explain. Like, why would you over-allocate patients to the fluvoxamine arm?
So, two points here.
First of all, you're making a great segue to something that was said in a recent presentation of the results by the authors.
I don't remember the person's name.
It's Frank something.
He says, you know, there's a real question whether we cut this trial too short in light of the political pressure to show that ivermectin did not work.
So, this is not you saying that.
This is them saying that, right?
And that they cut it short.
But in light of the fact that they didn't cut fluvoxamine, in fact, they stopped it at exactly the prescribed time, and they didn't add a few more patients.
In fact, Mills, the principal investigator, in his email to Steve Kirsch said, I actually think the result is positive, and it shows a 17% reduction in hospitalization, and if we had only randomized a few more patients, I believe that it would have come out significant.
This is literally his quote!
This is stunning.
The principal investigator on this trial, this trial which has been heralded from the rooftops as suggesting that this drug has no effect, the principal investigator believes that the trial showed that it was not effective because they didn't have enough patients.
He believes they saw an effect.
Yep.
Right?
And that if they had randomized more patients, that that effect would have been such that the conclusion would have been different.
That is an amazing thing to be true coming from the primary author on a paper that the Wall Street Journal is telling us says ivermectin doesn't work.
Yep, yep.
No, it's kind of... I don't know what to say.
And it comes in these contradictions that we're hearing, right?
Because the Wall Street Journal quote might say, no effect whatsoever, or no indication that Ivermectin works.
The emails say other things.
The paper itself, yet, seems to have a split personality.
And we can go into that if you want.
Well, I do think this is important, and I remember back from when the ivermectin arm result was announced, seven or I guess it's closer to eight months now, is that right?
When that was announced, the pull quote was "no effect whatsoever," which even at the time it was clear wasn't true.
There was an effect.
The question is, was the effect large enough that we could be sure it wasn't statistical noise, but there was a positive effect.
It wasn't no effect whatsoever.
And here you have, you know, seven or eight months later, you have the principal investigator saying he believes they saw an effect, that the trial wasn't big enough to clarify it.
So, how is it that journalists at the Wall Street Journal can't figure out that at worst, what we have here is a result that suggests ivermectin might work, but the trial wasn't large enough to spot it?
Yep.
Yep.
Now, it's worth mentioning that in the paper itself, and this is throwing off people constantly, they do not practice frequentist statistics, the usual statistics, the whole p-value, confidence interval, crosses the one line, statistical significance, that stuff.
There is not a single mention of p-value, statistical significance, or confidence interval in the whole paper.
Right?
This is crucial to understand.
Okay, what did they do instead?
I should say, that's not bad.
That's probably a good thing.
I love it.
I'm super pro, but we have to understand how to interpret it.
The benefit comes from interpretation, not, you know, what it was.
So they use instead credible intervals and Bayesian statistics, which is... well, I'm going to trigger some people, and I'm not a statistician.
My sense is that it's more efficient in its use of information to reach a conclusion.
So, I would argue, you tell me if you think this is wrong, it is more nuanced because effectively we have a problem which is that ultimately we're trying to get to a binary result in a world of nuance, right?
You want to figure out whether this is a drug you should or shouldn't prescribe, right?
And so, you can do that with a frequentist statistics and you can say, well, this result, you know, not only suggests an effect, but it was significant.
Therefore, we should prescribe it or something like that.
In the case of the Bayesian statistics, you have an ability to say, well, how strong an effect did you see without imposing that threshold up front?
Yep, yep.
So this trial — one of their sort of stylistic hallmarks in all the papers they publish is these beautiful Bayesian bell curves and how they intersect, to visually convey the result.
So they had a paper early on, on hydroxychloroquine and lopinavir-ritonavir — maybe I didn't say that right.
And the fluvoxamine paper has the same thing, the metformin paper has the same thing.
The ivermectin paper does not have that, even though it's the only paper that has exclusively Bayesian statistics.
You have to go to the supplemental appendix, download that, go to a figure — I believe it is S2 — and you will see something shocking compared to the conclusions stated on the first page: when they took what they call the intention-to-treat population, which I think is basically everybody, they saw a 79.4% probability of superiority of ivermectin over placebo, even with all of the caveats we discussed.
In the modified intention-to-treat population, which I think is — let's say you get added to the ivermectin group, right?
But you have an event on the first day, right?
They kind of know that that has nothing to do with your treatment.
So, they take out some patients that weren't really treated.
That shows 81.4% probability of superiority of ivermectin over... So, let me understand what you're saying.
You're saying there are some things that you can spot along the way are not informative, right?
Yeah.
Some patients basically are having their event way too close to when they started taking the drug for the event to be attributable to what they were taking.
OK.
And so if you clean that up, you get above 80 percent likelihood of superiority of ivermectin over placebo.
Yeah.
And then we have to get to the — even given all of the structure that shouldn't be there, with people being potentially recruited at different times.
We must say, though, that the probability of superiority doesn't say how much superiority; it could be a little, it could be a lot.
They don't get into that in that particular analysis.
Right.
It's just that the event rate is lower, right?
Okay.
Yeah.
But even so, you know, those of us who have been tracking the ivermectin story know this argument well, right?
The fact is, you have evidence of superiority, and, unusually for a drug, you have an extremely low risk that is extremely well understood because of the long period of time over which this drug has been prescribed and the large number of people to whom it's been given.
I should mention, right, that we should expect the results that we see to be fairly, the difference we see to be fairly narrow because of all the things that we described.
If we did not see the difference being narrow, we would lose everything we know about ivermectin.
So, we know that they gave the drug at, I believe, a median of five days after symptoms.
So, half the patients were more, half the patients were less — and there's some question around that number I don't want to get into, because that's another 20-minute conversation — but we know that when you take that kind of population, there's quite likely an additional day of delay before they actually got the drug.
If we put it on the dose-response — sorry, the timing-response — curve that we know about for early treatments, we expect about a 40% improvement, right?
So given those circumstances, we expect the result to be... you know, the 17% that Mills quoted is not shocking to anybody.
It's like, when you know the three or four factors that went into those results, that's what you predict.
So I hope someday the full story of what took place with early treatment of COVID, especially ivermectin and hydroxychloroquine, is sorted out by a team with resources to really dig and access to information, after the disincentive for honesty has dissipated.
I hope we ultimately get the full story, because what this sounds like to me is: the deck was stacked by the house, and they still lost, right?
If you look at what they actually discovered, the point is there were all kinds of reasons that should have dampened the effect of ivermectin to the point of being useless, and yet they still saw an effect — and this isn't the first time we've seen that, right?
One thing that's on my mind I want to make sure I cover is the 681 number.
This number was determined based on the background rate of events, based on their primary endpoint in the region at the time.
On that background, what they were tuned to do was to spot a 37.5% effect.
This is what you're sort of assuming.
If the true effect is smaller than that, you have statistical power issues, even at the 80% design power.
So, when they come out and say basically no effect whatsoever, no indication whatsoever — no.
What you tuned your study to do is find with 80% statistical power, 37.5%.
So if you have a drug that is 37.5% effective, you expect four out of five times, if I understand the notion of statistical power correctly, to catch it, to find it.
That's what they did.
In fact, they were kind of criticized in an open peer review about that size.
They're like, you're aiming to find too big an effect, too low the power, why aren't you ramping it up a little bit?
And the author, the principal investigator — his response was kind of dismissive to those comments from the invited reviewers.
But if we strictly interpret what they found, they did not find a 37.5% effect in a trial that had 80% statistical power to detect it.
Right.
That's what happened.
That's what happened.
And just so it's clear: essentially what you're talking about is a Bayesian version of what typically happens with frequentist statistics, where they find an effect, it doesn't reach significance, and they report that there was no effect.
We've seen that before in Ivermectin as elsewhere.
But the point is, in this case, it's in a different language because it's Bayesian statistics and not frequentist, but it's the same problem.
We can see, if we look at what they actually found, that in spite of many biases, all of which go in one direction, they did find an effect, a notable one.
And when you compare this, I mean, let's not lose track of what we're actually trying to discover.
Is this drug worth giving to people who have COVID, right?
Now, if the answer to that question was yes, then you would likely do what we've seen done all over the world, what we've seen done in India, what we've seen done in Mexico, where basically people are given access to this very safe drug very early, in many cases, even before they've had a positive test, right?
And so the point is, this study — yeah, it's not the latest administration of ivermectin that we've seen, but nonetheless, these are people who are deep into the disease.
These aren't people who at the first hint of disease have been given this drug and are given it with a meal that has fat in it to facilitate its getting into the bloodstream, right?
And we have these methodological issues where it appears that sicker people are likely to have ended up in the placebo arm.
I mean, in the ivermectin arm.
And healthier people were liable to end up in the placebo arm.
All of these things go in the same direction, and yet we still see a notable effect that even the author, the primary author of the study, says he believes indicates the drug works and that if they had added more people to the trial that it would have passed their statistical test too.
Now, in the article, and I'll leave it up to you if you want to go into that, I discussed two more factors that would have dampened the effects.
One is that the high dosing, as described, was not actually high enough.
We can go into that.
And the other one was the background use of ivermectin in Brazil at the time.
Yeah, these are tremendously important.
We should talk about them.
Okay, so let's go into the dosing.
So, remember what they had done with the low dose where they had put this arbitrary limit at 60 kilograms?
I have not found any literature anywhere that describes a weight limit for ivermectin for treatment of anything.
If you go to the NIH website and look for strongyloidiasis or whatever, it says scale it by your weight, full stop.
There's no "up to" limit.
Right.
So they had the 60 kilogram limit.
When they moved it up, moved up the dose, they changed it to a 90 kilogram limit.
Right.
A 90-kilogram limit. The average height of a Brazilian man, as I found — some reports say 171 centimeters, some say 173.
And, by the way, the trial had half its patients with a BMI under 30 — that's not low, but anyway — and half above, which is high.
Very high.
Yeah.
Now, a BMI of around 30 at around 174 centimeters of height is 90 kilograms, right?
91 kilograms to be precise.
So, almost half the men in the trial — in the high-dose part of the trial — would have had their dose start slipping under this 400 mcg per kg.
And I'm assuming about a third of the women because their BMI is higher for lower weights, so that would not have caught them at that level, but later.
And this pernicious effect where high BMI is high risk, right?
So, the higher your risk, the lower your dose.
This is weird.
What frustrates me with this is: you want to answer the question once and for all, right?
This is how the trial is heralded.
Why do stuff like this?
Why?
There's no clinical explanation from what I've heard from the authors camp, from people informally.
There's no real reason for this.
Why?
Right.
Why?
It's unthinkable, right?
And one can speculate, but there's no justification for it.
But if you were looking for a way to underpower a trial that was somewhat subtle, that didn't show up in the "oh, well, what dose did they use?" check.
Right?
If your first question is, well, yeah, I've seen these underpowered trials before.
What dose did they use?
Right?
And that threshold will be missed by many people, maybe many journalists.
It has no justification.
And the point is, because the severity of COVID is so significantly correlated with BMI, this is a critical failure.
This is a critical failure.
The people who are likely to get sickest are the most underdosed.
Yep.
Right?
I can't explain it.
There's no — again, there's no medical justification that I've seen.
It just makes no sense.
And in fact, there's something or some issue I can't quite describe.
I mean, if you think about comparing it to placebo, right?
The point is the placebo is flat and the degree of underdosing goes up and up the heavier you are and the more vulnerable to COVID you are.
It's just an unthinkable failure.
And the thing is, okay, maybe they've got a great reason for that.
Let's hear it.
Yep.
No, I mean, the literature just isn't there.
I think that one is a big one.
When I had a chance to ask a question of one of the authors of the paper, this is the one I asked, and I got told to, you know, follow the process and be professional.
Yeah.
All right.
There was another issue you wanted to cover.
Yep.
So, the other issue was that there was, we know, background use of ivermectin in the population.
In Brazil, and in particular in Minas Gerais, the state, and Belo Horizonte, the city.
So we know there was a background use of ivermectin.
Why?
First of all, this might sound surprising to people, it was the official government recommended treatment.
So, now, this was being fought by the medical establishment.
So, the government went ahead and promoted this thing called KIT-COVID, which had, I believe, hydroxychloroquine, ivermectin, azithromycin, a few other things, maybe some vitamins.
And the best I could find is that 25% of the population used them at some point.
But it was being fought hard.
So just to give you a sense, a Brazilian investigator we both know that's looking into these trials, when I asked him, was this like Trump?
He's like, no, no, it was much worse.
So...
Just to understand the severity of the hatred that the establishment had for Bolsonaro, who was promoting this treatment.
And yeah, I mean, this is the tragedy of this molecule, right?
It's been sort of politicized to hell and back.
Anyway, so there was availability over the counter, first of all, of the medicine.
It was being promoted by the government as part of a kit.
Was it free?
I'm not sure how the kit was promoted.
You'd expect, but I don't know.
That would be worth knowing because obviously it changes the incentive of somebody to join the trial.
Right.
I believe the over-the-counter Ivermectin in Brazil is like $5.
And $5 for us is not $5 for Brazil, but it's not going to be, you know, the number is like, you know, it's not a hundred.
You know, you can scale it.
It could be the equivalent of $20 or whatever, right, for a low-income Brazilian household.
But that number is still, you know, if you're sick, you think this will help you, you would probably find it.
Anyway, so we also know from local press that was not friendly to ivermectin that in the particular state of Minas Gerais, around the period of the study, especially around that sort of March-April timeframe, use of ivermectin had increased nine times over the background.
Nine times.
So significant use.
I've actually seen myself numbers that indicate that from 2019, the rise is more like 20 times.
I don't have, I'm not at liberty to share them because they're proprietary numbers from a company that gathers them from pharmacies, but take whichever number you like.
Nine, 20, what does it matter?
A lot.
So when you say over background, you mean over the rate that people are using it for parasites?
Yeah.
Which was minimal.
It wasn't, like, I mean, if you see a graph and there's a 20 times difference, the low number will seem low.
But, yeah.
So, it was, you know, a big, big difference in the use.
So, we know there was background use.
I mean, you know, how else do we explain these numbers?
And, again, the local publication that said nine times — in that same piece, it says there are concerns about safety, etc., etc., which are the usual talking points.
These are not people who are promoting it.
They're worried that it's nine times, right?
So when the original results came out in August, right — the sort of one-slide presentation — they mentioned the exclusion criteria, and they did not mention checking whether people were taking ivermectin when they were added to the trial.
And this is bad because placebo group and treatment group might have a background.
Some of them might be taking ivermectin, and we might not know.
The investigators did not say anything at the time.
This was a key criticism throughout.
You know, the drumbeat was like, you didn't put in the exclusion criteria.
Obvious.
When the paper came out, you know, the first thing I did, I go to the paper where it says exclusion criteria.
No mention of ivermectin.
It says, like, go here for more.
I go there for more.
No exclusion for ivermectin.
I'm like, Oh, wow.
It turns out there is a single sentence in the discussion of the paper, in some other place, which you don't put exclusion criteria in, by the way.
This is not normal.
That says, we screened extensively for use of ivermectin for COVID.
Right.
And the investigator, whenever he says it, says "for COVID."
And I'll go into why this is significant.
And, you know, we removed those people.
And he also mentioned in one of the emails something that confused the hell out of me, which was like: if you were taking one of the drugs, you might be allocated to another arm.
What?
Anyway, I choose to ignore that because the repercussions are so preposterous as to, you know, we already have enough to worry about.
But they basically say that to mean that, yes, we did exclude people.
Now, why is that not… Wait a minute.
They said they might have allocated people to a different arm if they were taking a drug in the exclusion criteria?
That's what — there's a remark to that effect in the email from Mills to Kirsch.
For example, then, you might be comparing fluvoxamine and ivermectin to a placebo.
Wow, that raises all kinds of questions.
It's such a big thing that I just, you know, for some reason I'm... He might be... There's some part... I don't know.
I just don't want to go into that.
I choose to take it as an offhand remark that might have been...
Misunderstood or something.
We have enough on our plate.
But that was weird.
But here's why it matters, the way they articulated what they did.
So, when we look at the forms the patients filled in when they were added to the trial, you go through the exclusion criteria.
There's nothing about ivermectin.
I think there's a known hypersensitivity if you're allergic or something.
That's not what we're talking about.
But there is an appendix which is like concomitant medication.
So, other drugs you're taking, right?
So, if you're taking blood pressure meds or like whatever it is, they want you to write it down.
And under that, they have indication, you know, why you're taking it.
So, in theory, somebody would have written ivermectin.
Why?
COVID, right?
Not parasites.
And you could have caught it.
However, we're talking about a trial where the data collection was so compromised.
There are 331 patients — 23% — for whom they're not sure how long it had been since symptoms started.
They're also missing, for a good — I believe it's a double-digit — percentage, the age of the patients, right?
And again, you can make this make sense, right?
Early stage Italy, chaos, whatever.
You're doing what you can.
You're getting rough ages from people.
You're like, you look at adults, right?
Because it's like, if you don't know the age, theoretically, how do you know they're adults?
How do you know they can even be in the trial, right?
I can make all of that make sense, but you can't convince me of that and, at the same time, that they went and recorded all the medications you're taking and the precise reasons why you're taking them.
And the investigator says, and by the way, there wasn't much use in the area.
Okay, so you didn't catch like a ton of people. - So not much use in the area when we have other evidence that there was lots of use in the area.
The weird thing is, it shouldn't matter from the point of view of the experiment whether you were taking it for COVID or not.
The question is, were you on the drug or weren't you?
And so the exclusion criterion, even to the extent that it existed, is bizarre from the point of view of actually testing ivermectin.
So, and then there's a Yeah, it raises all kinds of questions.
For me, it also raises a question, you know, we've talked about the unblinding of the trial from the point of view of the experimenter, which is dangerous, but it's also possible that the trial was unblinded either accidentally or not so accidentally from the point of view of the patients, right?
So, for example, you know, ivermectin has a flavor to it, and a patient might get the sense that they are either on the drug or on the placebo from that — that could be an honest error — but my understanding from the paper is that they showed patients in the trial a video to sort of orient them, right?
A video that was presumably in Portuguese and discussed what the patient would experience, and yes, one could put together a perfectly well-constructed video that would leave things perfectly blinded, but if you were looking to wink at patients so that they would self-treat if they found themselves in the placebo group because they had joined the study in hopes of getting a medication that many thought worked, that could also produce an effect.
So, in some sense, among the many things that we would need to see in order to know that this trial was valid is, well, can we see the video?
Right.
There's even one more thing that is harder to characterize statistically.
It dawned on me as I was discussing these results, which is that, say you have a background population of people in Brazil using ivermectin, and let's say ivermectin works differentially in people, right?
In some people it works, in some people it doesn't.
And so, let's say that the people that it does work don't go to the hospital, and the people that it doesn't work do go to the hospital, right?
And you are now looking at applicants.
So, even if you didn't catch it, even if you did exclude it, even if you did all the things, you are still looking at a subset of the population that could potentially be less suitable to that treatment, whatever.
But the fact is, even if they did exclude people, given the background use in the population, that already, you know, changes the meaning of these results.
And again, I don't know what precise statistical bias this would be, but it's one thing that continually frustrates me with the authors is that I think some of the things we have seen, not everything, but some of them could have been resolved with just more openness, right?
By saying: look, we changed the dose because of this.
We changed it at that date.
We, you know, just be clear about what happened so we can move on and find the most, the more important things.
There's this general refusal to engage — you know, it's like an adversarial relationship.
I can understand it even to a point.
I know there's people on both sides of this conversation that are very emotional.
They're very aggressive.
I can get, you know, I mean, you've definitely sensed it on the side of, you know, the pro side.
There's definitely people on the anti side that sense, you know, a lot of sort of aggression, etc, etc.
But we're doing science, right?
We're supposed to try to walk that out, do our work as best we can, and there's this adversarial sort of relationship — they mention constantly sort of advocacy groups, and they even mention paramedical groups, which I don't understand what that is. Paramedics forming groups?
Unclear.
You know, I understand the emotional sort of tension around all of this, but as scientists, we're supposed to try as much as we can to put that aside and just come out with the results.
Come what may, right?
Reveal the world as clear as we can.
And the authors seem to be very aware of the context.
Let's just put it that way.
Some of these things might have been clarified, but we're not getting signal.
Right to the point that Mills discusses the pressure, right?
Yeah.
It wasn't Mills who said it, but after this person said it, Mills said, "I completely agree with Frank."
Mills said that, okay.
So, there is one final bit to all of this that has to be mentioned, which is that in these adaptive trials, there is this data and safety monitoring board, committee.
They come by different names, but they exist, perhaps, as a safeguard, right?
Perhaps because all of this stuff is possible.
There's supposed to be an independent committee that, as the name implies, looks at the data, is primarily concerned about safety of the patients, and, you know, monitors sort of the trial to make certain decisions about where to stop things, looks at the interim analyses, you know, all these decisions that we've seen about sort of extension of certain things or cutting short.
Normally, you would expect them to go through this committee, which is supposed to be independent.
And, indeed, in their papers, they say, you know, it's an independent committee.
Okay.
You look at who's on this committee, and the chairman is a person called Kristian Thorlund.
Kristian Thorlund has published more than 100 papers with Ed Mills, who is the principal investigator.
They're academic sort of soulmates, let's call it.
More than 100 papers.
I mean, most people don't have 100 papers.
They published 100 papers together.
And I dug in a little bit deeper.
If you go to the TOGETHER trial website on the Wayback Machine, and you go to the first version, you will see that Mills and Thorlund are cited as co-lead investigators of the trial.
So, Thorlund is on the side of, you know, the people designing the trial.
If you look at where the "where do I get information" email goes, it goes to a company called MTEK Sciences.
MTEK Sciences is a startup, in fact, that was founded by Edward Mills and Kristian Thorlund.
My guess is that MTEK means Mills, Thorlund, Edward, Kristian.
MTEK, right?
But it might not.
But the thing is, it is a startup.
It was founded by them.
It was acquired by Cytel, the company that does the statistical analysis for this study.
Another person, called Jonas Hagstrom, was working at MTEK and moved to Cytel as part of the acquisition.
He's also in the Data and Safety Monitoring Committee.
And you're like, okay, so all of these people have very good reason to want the trial to succeed and produce whatever results Mills wants.
And they're unblinded, right?
So they see the data and they can make decisions on the data.
And they're not independent.
And two more of the members of the committee have got, you know, 25 and nine papers written together with Mills.
A less severe thing, but when you already have two big pieces — especially the chairman of the committee — being linked inextricably, I would say at this point.
Like, they started a startup together.
They're working at Cytel.
They're working at McMaster University.
They've got 100 papers together.
You know, they're like this.
They're soulmates, right?
The distinction is thin.
Now, okay, you might say — I don't know what you might say — this is not good. But in the open peer review that happened in August, two of the invited reviewers that asked about the power of the study also made this note.
They didn't catch all of the context, but just seeing that this person works at Cytel, this person is at McMaster, same as Mills.
This is not good.
And they said, look, this isn't right.
And we are withholding our full approval of the protocol based on this one thing.
Mills comes back and said, I'm happy to take away his vote in the committee.
So they removed the vote, but kept him as chairman.
Right?
And didn't do anything about Hagstrom.
So, the reviewers, on the basis of that, what they considered a lackluster response, said, I'm sorry.
This is not sufficient.
If they want a statistician, they can call him into the room to give them some advice and then walk out.
He cannot be in the room when the discussion is happening.
Therefore, we are withholding our full approval of the protocol on the basis of the lack of independence of the Data and Safety Monitoring Committee.
Now, so we have, just to put it in context, right, we have all of these irregularities happening in the trial.
The people who are involved in many of those decisions, and who are supposed to be sort of the safeguard, the people who can say like, look, there was those other people that had nothing to do with the authors that made a lot of those decisions.
It's not the problem.
These people are very closely linked.
So, at that point, yeah, I don't know what's left of the trial and what we can make of the results because there's so much room for maneuver, right?
It becomes a matter of trust to the investigators.
And the whole reason we're doing RCTs in the first place is to remove bias.
So, what's even happening?
Right.
You are being too generous.
I can tell you, on the basis of the underlying philosophy of science, that the number of anomalies we are seeing here is disqualifying. We should be able, from the description in the published paper, to reconstruct what happened.
To the extent that that is not possible, we should be able to request information that would allow us to fill in any gaps that we cannot ourselves fill in based on what is published.
And to the extent that that is not provided, it is fatal.
Now, that's not to say that they couldn't come back with information that would explain these anomalies in some satisfactory way, but it is almost impossible for me to imagine what the answers would be that would cover all of these anomalies — and the extent to which this was not randomized or placebo-controlled in a valid way, and the possibility that it may have been unblinded for researchers.
I think we know less about what happened with patients, a lot less, but that that possibility also exists.
This is not evidence.
Science depends on the work being done properly so that the conclusions that we derive are robust.
Right?
This is the deductive part of the process.
To the extent that the assumptions are not met, the deductive conclusion is not valid.
And we can even see that in the fact that what is reported is not even consistent with what is presented within the data.
So, it is not a valid study if it can't recover from these things, and the refusal to try is conspicuous. From the point of view of what sort of evidentiary value it has, I would say it has zero value until these things are clarified.
You know, and I mean, this is my sense as well.
I find it interesting that its actual results, plus the context, match roughly what we'd expect.
You know, given all of the watering down, this is about, you know, what you'd expect to see.
But to me, the real issue — to me, right?
It could be religious for others.
What animates me in this whole debate is that, you know, we can't have isolated demands for rigor.
We can't have rigor for thee, but not for me.
This cannot be allowed to stand.
We can decide different rule sets.
There's some play, right?
What exactly are going to be the rules?
We can discuss that.
What we can't discuss is that, you know, Henry Carvalho, a retired doctor in Argentina, is being put through the wringer for arguably doing a sloppy study.
But people who are very well funded, with a staff and statisticians and a PR agency, and with their papers appearing in the best journals, get a different treatment.
Sorry, like, no.
This is where I draw the line.
Yes, and let me point something out.
You raised the Carvalho question.
Carvalho did a very sloppy study.
I do not believe there is any evidence that it was dishonest.
But I will point out that I had thought the Carvalho study was an important piece of evidence.
And when I asked for the data and did not get it, I discovered over time that what had happened was a tally — rather than the careful collection of data that would have immunized the study from various kinds of bias, there was an informal tally kept that suggested a very powerful result.
I went on to Dark Horse and I said, look, this has zero evidentiary weight.
It doesn't mean they didn't see the effect that they say they saw, but it means that we cannot look at this as a carefully run study that indicates what the conclusions that we've been given are.
So the point is that is the right thing to do is to look at a study that is compromised and say, because I can't reconstruct exactly what happened here, zero evidentiary weight, right?
That is the right thing to do with at least the ivermectin arm of the TOGETHER trial.
And I would argue, based on what you've revealed here in passing with respect to the fluvoxamine arm, etc., that we have to wonder about the whole thing.
Zero evidentiary value unless we can see that a study of robust methodology was correctly carried out and resulted in conclusions that match what's being presented.
That would have to be the minimum, and we're nowhere near that.
People should recognize the right thing to do, as it was with Carvalho, is to say: this is not evidence, right?
So, as I've said many times on Dark Horse, science is a very powerful process, but it is fragile.
It depends on the philosophy of science being robust.
It depends on the absence of powerful financial and other incentives pushing people towards some conclusions.
It depends on the equal application of rigor.
As you point out, all of these things have to be true in order for the evidence to be useful to us.
And this is so far from where we are with respect to ivermectin generally, and this trial specifically, that one really has to wonder how we get ourselves out of here.
We have so corrupted our scientific structure that it is incapable of allowing us to see.
And of course, there's also the surrounding theater around this study, right?
There are not one but three news cycles powered by the way in which the results were released.
So on August 6th, I mean, I find this kind of shocking, honestly.
The ivermectin arm is supposed to end on August 6th, and the results come out the same day.
Like literally, I mean, same day, the speed is stunning, right?
It's August 6th.
They come out, you know, the deck is dated August 6th, with the results.
Okay, amazing, right?
Great.
And indeed, for fluvoxamine, by August 23rd we've got the pre-print, and, you know, a couple of months later it appears in the Lancet.
For ivermectin, you know, quiet: no pre-prints, no data, nothing.
Then, a couple of weeks before its release, we get a Wall Street Journal exclusive, right?
Look, people who had seen the study — not me, not you, not somebody who might know a little bit more about it but might be skeptical, but people who were, you know, helpful to the story that was being created — had seen the study and were making comments.
And then that Wall Street Journal triggered, you know, its own little, you know, waves.
And then two weeks later, again, The study gets released, New York Times headlines, you know, it doesn't work, blah, blah, blah, blah.
And of course, then it's matched with the calls that, at some point, we have to wonder, should we keep studying this thing?
Is it worth it at this point?
You know, we've got so much evidence, so incontrovertible, right?
Why are we even wasting research dollars on this question anymore?
You use that as a platform to sort of say: not only is this evidence final, but stop collecting evidence altogether.
And that's where, again, you start to wonder, and it doesn't have to be the authors, but it could be somebody who is working with the authors or even just taking what the authors are putting out and amplifying it, what's happening?
Especially because when you actually see the data.
So, my advice to people, on Twitter and everywhere else: when you see a study, wait a couple of weeks.
Seriously.
Like, I've made this mistake.
Lots of us have.
You have to control your first instinct to just come out and make grand conclusions about a study.
Ask questions.
See if this is interesting, whatever.
But, like, hold something back for at least a couple of weeks.
Because once it gets examined – and this is, again, on the pro and the anti side, right?
It doesn't have to be about specific studies.
This is about all studies.
You have to wait at least a couple of weeks to see what emerges, because as there is the formal scientific process, these days there's a collective intelligence on the other side that is also absorbing, cross-checking, contextualizing things, and things emerge like this.
Again, the conclusions that we're coming to for this trial are as implausible as they are undeniable.
It is just baffling.
All right, so the... I'm not sure exactly how to convey this.
It is non-standard to emerge into public with the conclusions of a study that is not going to be published for half a year.
At the very least, people should be able to scrutinize a draft at that moment to get a sense for where the authors are, even if the thing is going to take time to go through the pipeline.
To give us the conclusion, to sensationalize the conclusion, to delay our ability to scrutinize the work itself.
If all of those things are true, the work better be pretty close to spotless, right?
It should be exceedingly high quality work, not shoddy work, right?
If you're going to trumpet this in the Wall Street Journal, if you're going to gaslight us for more than half a year over the conclusions of this trial before we get to see how it even worked, the trial better be incredibly high quality.
Instead, what we have is, frankly, it's a mirror of the problem that we saw with cold fusion, right?
It's science by press release, by press conference, right?
That's what this was.
And, you know, as with cold fusion, what actually emerged in the aftermath when we got to see the details wasn't supportive of the sensational headline.
So, this is in some sense a case of, you know, ivermectin and cold confusion, right?
What we've got here is about purpose: somebody's purpose here is about headlines and not about discovery.
And the discovery here is, I would say, exceedingly low quality, right?
This is not high quality science.
It's certainly not worthy of the New England Journal of Medicine.
It's hard to imagine how the editors themselves didn't spot the numerous inconsistencies and problems, let alone the peer reviewers.
How did this get through?
A process that had an extra six months to get this stuff right should be especially clear and good rather than especially opaque and full of obvious holes.
Yeah, the authors indicated that the journal gave them trouble in publication, though he described it as exasperating — without making the obvious connection that maybe the journal was seeing something it didn't like.
But also, there's an indication that the corresponding author changed in February, which is strange.
Like, who is the... what's going... you know, this is not common.
It is uncommon, and I would also point out, okay, if the journal gave them a hard time, presumably the things that the journal gave them a hard time about were fixed before it went to publication.
That means that the many, many flaws, ambiguities, and dangers that are apparent in what they published are the things that got through.
Right?
So, the work was even more compromised than this, apparently.
You see most of the issues when you put Humpty Dumpty together.
When you take the metformin paper, the fluvoxamine paper, and the ivermectin paper, and you assemble them, that's when the discontinuities show.
The ivermectin paper itself, when read in isolation — there are still questions, still concerns.
But, you know, if you had nothing else, it would be hard to challenge it.
Definitely not at this level.
Yep.
Which, I don't know, is a fascinating condition, let's just say that.
Yep.
All right.
There's a lot in your article on this that we have not covered.
I would certainly encourage people to look at it carefully.
We're no doubt going to get feedback.
Maybe we'll discover that some of the issues that have been raised are less critical than we think.
Maybe we will discover that there are issues that we didn't discuss that are important and should be explored.
But at the very least, I would say it is beyond time for the authors of this study to hand over the information they have, so we can clarify what actually took place and therefore what, if any, evidentiary value exists in this study.
Yeah, I think as we've kind of said before, even now you can sort of hold a, you know, some level of uncertainty as to what happened and why.
Some sort of, you know, superposition of theories, of hypotheses.
But I think what happens with the data is the razor that separates these entwined hypotheses for me.
Uh, what happened with the data and the lack of forthcomingness?
No, no.
What happens from now?
What happens from this point?
If the authors say, okay, no, you guys are wrong.
You know, here it is.
Um, you know, and here's how, here's why these discontinuities appear.
You know, we can discuss.
We could be wrong.
There's always that chance we should.
I don't understand how.
I mean, this is basic addition and subtraction that I'm doing.
I'm not doing statistics.
But there could be some, let's say, innocent errors that were misunderstood or whatever it is.
Or it could be something that like some kind of a confusion, whatever.
But if the data is not forthcoming, I think we have to note, right, that again, Carvalho is a retired doctor.
The people here, the sort of roster of authors, has an overrepresentation of professional clinical trial designers for big pharmaceutical companies.
This is what they do.
They don't mess up on studies by accident.
It's hard to believe.
And I don't want to go heavy on the conflict of interest part, right?
People can definitely assume that I can't prove anything.
What I can prove is that these people are supposed to know how to run a perfect trial, or nobody does.
Let us also recognize: this is not a small thing. This is a global pandemic and an important question about a medication that either does or does not save lives.
To the extent that you run a study, I don't care how expensive it is, I don't care how fancy it was, if it turned out that the method collapsed on you because of things maybe you couldn't foresee, you don't publish it.
Right?
To the extent that they ran a shoddy study, even if it was well intended, even if it was the complexity that got them in the end, you don't publish the study as if it's evidence if it isn't.
And if it is evidence, then you provide the data so that we can scrutinize it and not have to guess where the bodies are buried.
Somebody who was reviewing — I got help from a professor at a university in Canada, who reviewed the paper with me — just to, again, I've certainly checked it with a lot of people, right?
The reason of my confidence, such as it is, is that I've done that.
But one of these people said that, you know, the word pharmakon in Greek is a dual word, right?
It's poison and cure at the same time.
And I know that in modern Greek, there's still that duality in the word.
And the point is that, you know, with knowledge of how to cure comes knowledge of how to poison.
So this is, to me, the key question, right?
Was the knowledge that the authors had used in a reverse fashion?
Was this a chaotic situation because of the pandemic, and maybe a polishing, an airbrushing of things afterwards to make it look like nothing happened?
The data will answer that question, I think, pretty clearly.
Well, that's a fascinating point to end on, because I guess the way to say it is: we are told randomized controlled trials are the gold standard because of their power to reveal subtle patterns, but the knowledge of how to run a randomized controlled trial is also the knowledge of how to game a randomized controlled trial. And in an environment where there's as much at stake as there is surrounding ivermectin, with the number of anomalies we see, it is absolutely fair — and in fact it is our obligation — to wonder what happened here.
All right, well, Alexandros Marinos, I want to thank you for doing this work.
I know you have a new baby and you're pressed for time and I, you know, this is not...
Your profession, so I really appreciate all of the work you put into sorting out what took place and writing it up in a way that makes very difficult material accessible and I want to thank you for joining me here and discussing it.
It's been fascinating.
I must say, thank you for having me.
Hopefully this will get us some answers. And I should say, this work was not just me.
I've tried to name several people that helped in that conversation.
There are a lot of people that helped that I can't name, or whose names I don't even know, because of this emergent sort of social media ecosystem that generated a lot of leads — some of them dead ends — but I feel like I'm sitting on top of an elephant that is far, far greater than myself.
But yeah.
Yeah, a tremendous number of people worked to try to sort out what was going on.
And so, yes, I think it makes a great deal of sense to thank them for their hard work.
Some of them have been working from early on in the pandemic, collecting evidence.
Some of them are anonymous, I think, because they are afraid of what happens if they reveal themselves.
All right.
Anyway, it's been very interesting and I look forward to seeing what we learn from here on out.