Introducing too many secondary conditions into any trial is an invitation for the drug to fail on safety and/or efficacy, since demands on both go up. And as we all know, a huge fraction of drugs already fail in phase 3. Raising the bar further, without great care, will serve neither patients nor business.
Most trials have long lists of excluded conditions. As you say, one reason is reducing variability among subjects so effects of the treatment can be determined.
This is especially true when the effects of a new treatment are subtle but still quite important. If subjects with serious comorbidities are included, treatment effects can be obscured by those conditions. For example, if a subject is hospitalized, was that because of the treatment, because of another condition, or because of some interaction between the two?
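As a back-of-the-envelope illustration of how background events can drown out a drug signal (all rates below are assumptions for the sake of the example, not figures from the paper or any trial): the same drug-caused excess in hospitalizations that stands out in a low-comorbidity cohort can be statistically invisible in a cohort where comorbidities already hospitalize a quarter of subjects.

```python
import math

# Hypothetical numbers, purely for illustration.
drug_excess = 0.03   # fraction of subjects hospitalized because of the drug itself
arm_size = 500       # subjects per arm

for label, background in [("low-comorbidity cohort", 0.02),
                          ("high-comorbidity cohort", 0.25)]:
    p_control = background
    p_treated = background + drug_excess * (1 - background)
    # standard error of the observed difference in hospitalization rates
    se = math.sqrt(p_treated * (1 - p_treated) / arm_size
                   + p_control * (1 - p_control) / arm_size)
    z = (p_treated - p_control) / se
    print(f"{label}: true excess {p_treated - p_control:.3f}, "
          f"SE {se:.3f}, z ~ {z:.1f}")
```

With these made-up numbers the low-comorbidity cohort gives z of roughly 2.5 (detectable), while the high-comorbidity cohort gives z of roughly 0.8: the same drug effect is lost in the noise of condition-driven hospitalizations.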
Initial phase 3 studies necessarily have to strive for as "pure" a study population as possible. Later phase 3/4 studies could in principle cautiously add more severe cases and those with specific comorbidities. However, there's a sharp limit to how many variations can be systematically studied, due to the intrinsic cost and complexity.
The reality is that the burden of sorting out the use of treatments in real-world patients falls to clinicians. It's worth noting that the level of support for clinicians reporting their observations has, if anything, declined over the decades. In other words, valuable information is lost in the increasingly bureaucratic and compartmentalized healthcare systems that now dominate the delivery of services.
https://en.wikipedia.org/wiki/Pharmacovigilance#Adverse_even...
First off, it ignores the fact that if you include frail patients you'll confound the results of the trial. So there is a good reason for it.
Second, saying “rate of SAE is higher than rate of treatment effect” is a bit silly considering these are cancer trials - without treatment there is a risk of death, so most people are willing to accept SAEs in order to achieve a treatment effect.
Third, saying “the sickest patients saw the highest increase in SAE” seems obvious? It’s exactly what you’d expect.
Second, you're ignoring the possibility of other treatment options. It isn't always the binary life-or-death you're making it, so SAEs do matter.
Third, a big part of trials is to discover and develop prevention methods for SAEs. Explicitly ignoring the people most likely to provide data valuable for the general population sounds like a pretty silly approach.
Sure, but including frail outliers does not automatically mean you can generalize to the whole population. People can be frail for a wide variety of reasons. Only some of those reasons will matter for a given trial. That means the predictive power varies widely depending on which subpopulation you're looking at, and you'll never be able to enroll enough of some of the subgroups without specifically targeting them.
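A rough sample-size sketch makes the "never enough of some subgroups" point concrete. This is the standard two-proportion calculation with assumed response rates; none of these numbers come from the paper:

```python
import math

# Assumed inputs: ~5% two-sided alpha, ~80% power, 30% vs 40% response rates.
alpha_z, power_z = 1.96, 0.84
p_control, p_treated = 0.30, 0.40

p_bar = (p_control + p_treated) / 2
n_per_arm = ((alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
              + power_z * math.sqrt(p_control * (1 - p_control)
                                    + p_treated * (1 - p_treated))) ** 2
             / (p_treated - p_control) ** 2)

subgroup_fraction = 0.05  # say a particular frailty phenotype is 5% of enrollees
print(f"~{math.ceil(n_per_arm)} per arm to power the overall comparison")
print(f"~{math.ceil(n_per_arm / subgroup_fraction)} per arm to get a powered "
      f"subgroup analysis from untargeted enrollment")
```

Roughly 360 per arm overall becomes over 7,000 per arm if you rely on that subgroup showing up by chance, which is why targeted enrollment (or a dedicated study) is the only realistic way to learn about it.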
The results in the posted paper seem valid to me, but the conclusion seems incorrect. This seems like a paper that is restating some pretty universal statistical facts and then trying to use that to impose onerous regulations that can't and won't solve the problem. It will improve generalizability for a small fraction of the population, at a high cost.
> Second, you're ignoring the possibility of other treatment options. It isn't always the binary life-or-death you're making it, so SAEs do matter.
Of course they do. It's a good thing we have informed consent.
> Third, a big part of trials is to discover and develop prevention methods for SAEs. Explicitly ignoring the people most likely to provide data valuable for the general population sounds like a pretty silly approach.
If your primary claim is that data from non-frail people is not generalizable to frail people, then how can you claim that data from frail people is generalizable to non-frail people? If the trials for aspirin found that hemophiliacs should get blood clot promoting medications along with it, then should non-hemophiliacs also be taking those medications?
I'm thankful we can extract some amount of useful data from these trials without undue risk. It's always going to be a balancing act, and this article proposes putting a thumb on the scale that reduces the data without even solving the problem it's aiming at addressing.
A common reason for a drug (especially a cancer drug) going to trial is that other options have already failed. For example, CAR-T therapies are commonly trialed in relapsed/refractory (R/R) cohorts.
https://www.fda.gov/regulatory-information/search-fda-guidan...
> "In subjects who have early-stage disease and available therapies, the unknown benefits of first-in-human (FIH) CAR T cells may not justify the risks associated with the therapy."
Frail patients confound results. A drug may work great, but you’d never know because your frail patients die for reasons unrelated to the drug.
The second point is obvious as well. Doctors know there are treatment alternatives (which carry the same drawback for trial design).
And I already touched on your third point. The alternative to excluding frail patients is not being able to tell if the drug does anything. In many cases that means the drug isn’t approved.
Excluding frail patients has its drawbacks, but it has benefits as well. This paper acts like the benefits don’t exist.
Sometimes studies are specifically for treatment-resistant depression, and I expect those studies are more likely to screen in participants with a history of suicidality, so I would recommend keeping an eye out for those if you would like to participate in clinical trials.
Society doesn't bear the cost of someone killing themselves? That can't be what this means, but it's hard for me to read it a different way.
> If someone with suicidal ideation is excluded from trials on moral grounds and ultimately satisfies those internal cravings, nobody is at fault.
If someone with suicidal ideation is included in trials where drugs may INCREASE those ideations and they kill themselves, then the trial is at fault. You're not actually contending that they should be included anyway because they'll probably kill themselves anyway?
I saw a new procedure available in Mexico for $8k for psychedelic treatment with ibogaine. It's still Schedule 1 in the USA, like MDMA.
It looks like there have been a few MDMA trials for PTSD, even though the FDA denied more widespread testing.
https://www.science.org/content/article/fda-rejected-mdma-as...
This would be called an "active placebo" and would certainly be documented.
It's common to find controlled trials against an existing drug to demonstrate that the new drug performs better in some way, or at least is equivalent with some benefit like lower toxicity or side effects. In this case, using an active comparison against another drug makes sense.
You wouldn't see a placebo-controlled trial that used an active drug but called it placebo, though. Not only would that never get past the study review, it wouldn't even benefit the study operator because it would make their medication look worse.
In some cases, if the active drug produces a very noticeable effect (e.g. psychedelics) then study operators might try to introduce another compound that produces some effect so patients in both arms feel like they've taken something. Niacin was used in the past because it produces a flushing sensation, although it's not perfect. This is all clearly documented, though.
If the risk is primarily due to, or made worse by, the disease being treated, wouldn't they want to join the trial?
The paper defines a population "at high risk of drug-induced serious adverse events", which presumably means they're also the most likely people to be harmed or killed by the drug trial itself.
Study design is sometimes optimized so that only the "best," most enticing participants are actually eligible; I've seen randomization rates as low as 2%-12%, though 50% is more typical. Some studies also have a 100- to 150-day screening period, a limited AND a full screening period, etc.
Overly restrictive inclusion/exclusion criteria that target a super narrowly defined ideal population hinder enrollment, place a large prescreening burden on sites, and end with trial results that fail to reflect real-world demographics.
The most pernicious of these problems is that women--yes, more than half the earth's population--are considered a high-risk group because researchers fear menstrual cycles will affect test results. Until 1993 policy changes, excluding women from trials was the norm. Many trials have not been re-done to include women, and the policies don't cover animal trials, so many rat studies, for example, still do not include female rats--a practice which makes later human trials more dangerous for (human) female participants.
[1] Sort of one citation: https://www.aamc.org/news/why-we-know-so-little-about-women-... There's more than this--I wrote a paper about this in college, but I don't have access to jstor now, so I'm not sure I could find the citations any more.
In my opinion it will be one of those things future historians of medicine judge our time harshly for, and rightly so.
The patients self-report their own side effects, then the numbers go into the paper.
Are you suggesting the study operators are tampering with numbers before publishing?
No, but did you not read the posted article? Firstly, trials don't select participants without bias. Secondly, many trials are not long enough for the side effects to manifest. Thirdly, I have enough real-world experience.
https://www.fda.gov/safety/medwatch-fda-safety-information-a...