Remember this campaign? With the benefit of nearly a decade of hindsight, we learned a lot from it.
Almost a decade ago, you helped us crowdfund a study of a small molecule called anle138b in mouse models of genetic prion disease. Today we posted a paper with all the results [Vallabh 2022]. This blog post will explain what we learned — and why now.
The story starts eons ago, in 2013. Sonia and I were a year and a half into our quest, having quit our old careers and found jobs in labs at Mass General. I had started blogging, we read papers voraciously, and we had attended the Prion conference in Banff and heard all about the latest and greatest research. A small molecule found in a phenotypic screen, anle138b, had doubled survival in prion-infected mice and seemed like the most promising new development [Wagner 2013]. At the same time, a small molecule called IND24, which had also improved survival in prion-infected mice, had proven to work against a common strain of laboratory prions, but not against human prions, raising the specter that a drug could be “strain specific” [Berry 2013]. We decided the next experiment needed to be to test anle138b in mouse models of genetic prion disease — mice with mutations like Sonia’s — and see if it was effective. George Carlson at McLaughlin Research Institute and Jim Mastrianni at University of Chicago were willing to do the studies, Armin Giese, who developed the drug, was willing to provide it, and Walker Jackson, who developed two of the mouse models, was willing to send them. Taking a great leap into the unknown, we launched a crowdfunding campaign to raise money to make the experiments happen.
It’s difficult to overstate the impact that campaign had on our trajectory into science. Back then we were just grappling for any foothold in our new careers. We had managed to land day jobs working in scientific labs, but actually devoting our time and energy to research on prion disease, and on therapeutics in particular, was still a daydream. Our crowdfunding campaign on Experiment (back then called Microryza) was the first proof that we could actually make an experiment happen. It was also our first highly public push, and putting ourselves out there on social media, announcing our quest to the world, was exhilarating. The funding campaign, together with D.T. Max’s beautiful article in The New Yorker, led us to reconnect with childhood friends and make new contacts with people from across the biomedical ecosystem, some of whom would become long-term allies. We were invited to give a talk on our crowdfunding experience at the Broad Institute’s annual retreat. A skeptical but deeply engaged Eric Lander sat in the front row. When the talk was over he said, “this sounds like a strategy to raise ten to the fourth dollars, but you do realize that to make a drug you’ll need at least ten to the seventh dollars.” He agreed to advise us on our longer term strategy. It was the beginning of an incredibly fruitful and absolutely vital mentorship relationship that eventually led to our series of meetings with FDA.
How the actual experiment turned out is a more complicated story. We’ve previously shared that in the A117V mouse model at University of Chicago, the treatment seemed to lower the amount of plaques in the brain, but didn’t improve survival — a negative result which that lab finally published a few years ago [Qin 2019]. The D178N and E200K mouse models were an even longer road. Because our primary endpoint was a live animal imaging readout, the mice had to be imported, crossed to a bioluminescent transgenic line, and “homozygosed” again, for about a year of run-up. Then it was a long experiment, with animals followed out to 20-24 months of age, and after that, there was still tissue processing and histology to be done. The answer that emerged was that we just couldn’t find a clear disease endpoint in the mouse model. They didn’t develop any overt illness, lose weight, or die prematurely. The bioluminescence readout never went up. Pathological changes in the brain were present but not always obvious or easy to quantify, and in any case, you could only look at the brains at one timepoint, the end, which meant that you could miss an important difference in when those changes set in. All this meant that we had no way to evaluate whether anle138b was effective.
By the time the last of the data were collected in early 2017, the world had changed. Experiments done elsewhere showed that anle138b wasn’t effective against sporadic CJD MM1 prions, the most common subtype in humans [Giles 2015]. This meant that anle138b probably didn’t have a future in clinical development for prion disease, and Dr. Giese was pivoting to develop it for Parkinson’s disease and multiple system atrophy instead. Indeed, anle138b was just one of multiple once-promising small molecule drug candidates that had been effective in wild-type mice infected with mouse prions, but had failed in humanized mice infected with human prions [Berry 2013, Lu & Giles 2013, Giles 2016]. Thus, the whole strategy of finding antiprion small molecules in phenotypic screens in mouse cells had to be called into question. Sonia and I, by then graduate students, were shifting our focus to lowering PrP. George Carlson, who had been PI of the study at McLaughlin Research Institute, had moved to San Francisco, and Zou, a scientist who had done a lot of the work, had moved to Bozeman. I always felt that there was an important lesson wrapped up in the entire study, and I wanted to publish it, but it was a tough sell.
So why on Earth are we publishing it now? Sometimes a piece of data takes years to crystallize into its true meaning. In the time since the study took place, I find that I actually refer to it often, usually for the following lesson: just because a mouse model exists doesn’t necessarily mean you can do a drug efficacy study in it. It is similar to the case we made recently that even though non-human primate models of prion disease exist, that doesn’t necessarily mean it’s a reasonable proposition to test a drug in one [Mortberg 2022]. Many people have asked us whether the benefit of lowering PrP with an ASO would be the same, better, or worse in a spontaneously sick prion disease model compared to an inoculated model. And while I believe inoculated models are authentic models of prion disease, it is true that they do bypass a key event: formation of the initial prion seed. That formation process is obviously PrP-dependent, but what exactly the dose-response curve looks like, how much you need to lower PrP to delay the process by what amount, is something you can’t answer in an inoculated model. So in thinking about the Animal Rule as I discussed in that primate post, the question becomes, do spontaneously sick models need to be a part of the package supporting approval of a drug? And in these debates I realized I was holding in my head a key unpublished piece of data: the results of that anle138b study in D178N and E200K knock-in mice.
With this inspiration, we and Deb Cabin at McLaughlin Research Institute went back and exhumed — that’s really the best word for it — the data from all those years ago to finally write up our learnings. Scientists reading this who have ever tried to dig up really old data in their own lab may appreciate how non-trivial this was. A barrage of emails went back and forth: figuring out where each piece of data lived, confirming various methods, recalling the rationale behind certain study design decisions. Deb was heroic in re-imaging histology slides and hunting down all the old spreadsheets and protocols on shared drives and sometimes on computers still connected to lab instruments. We cross-checked animal IDs between different versions and different sources of data, to make sure everything checked out. Amazingly, in the end, we emerged with a basically complete picture, and we were able to write up a manuscript on it.
The main conclusion is that not every model works for every research purpose. For a therapeutic efficacy study you really need a model where the disease endpoint is quantitative, easily and reliably measured, tightly enough distributed that it’s easy to power a study, and ideally, quick. There are lots of models that can make valuable contributions to basic science without meeting all those criteria.
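To make the powering point concrete, here is a minimal sketch, not drawn from the study itself, of how the spread of a disease endpoint drives the number of animals needed per arm. It uses the standard two-sample normal-approximation sample size formula; the endpoint, effect size, and variability numbers are invented purely for illustration.

```python
# Illustrative only: how endpoint variability drives study size.
# Assumes an approximately normally distributed endpoint; all numbers are made up.
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Animals per group to detect a mean difference `delta` on an
    endpoint with standard deviation `sigma` (two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    return ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)

# A tightly distributed endpoint keeps the study small...
print(n_per_group(delta=30, sigma=20))
# ...while a noisy endpoint, for the same effect size, balloons it.
print(n_per_group(delta=30, sigma=60))
```

Tripling the endpoint's standard deviation multiplies the required group size roughly ninefold, which is why a model with a tight, reliable endpoint matters so much for efficacy studies.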
I think that’s a valuable lesson. Sure, there are things I’d do differently, but there are always things you’d do differently once you know the answer. Looking back, it’s hard to have any regrets. Around when the study was getting started, I connected for the first time with Kurt Giles, the scientist who had led a lot of the drug development efforts at UCSF and who became an important mentor to me. When we talked about the anle138b study, he said it was interesting but he also warned, “what will you do if the results are negative, or worse, uninterpretable?” Fair point, and excellent foreshadowing! But that’s also why we call them experiments. If you already know the results will not, cannot possibly, be negative, then why are you doing it? One might choose to stick to safer models and safer experimental designs where the risk of an uninterpretable result is lower — and to be sure, most of the time, we do. But that’s a tradeoff too, and a lot of progress comes from taking a leap on experiments that might not even prove interpretable at all. In any case, here we are in 2022, and people still ask us all the time whether we should be testing a PrP-lowering drug in a spontaneously sick model, so I think this is a limb we were bound to have to venture out on at some point. Moreover, from our vantage point in 2013, anle138b was the most exciting, promising new thing out there, and trying to gain insight into whether it could be useful in genetic prion disease was absolutely a priority.
To our donors who funded that study in 2013, we owe you a huge thank you. The study you funded brought us a valuable lesson, one we carry with us in our efforts to develop a drug for prion disease, efforts which are now far more advanced than we could ever have dreamed in 2013, in part because you bet on us and let us launch the first experiment — the first of many, many experiments.