Takeaway: Grail's study/pilot with the NHS is a big deal; single cell sequencing will continue to grow; and reagentless sensors are coming...

Overview

On December 8, 2020, we interviewed Shana Kelley (Kelley Laboratory), a tenured professor at the University of Toronto with 20+ years’ experience in clinical/molecular diagnostics. We covered a lot of ground, ranging from circulating tumor cells’ utility to pan-cancer screening and next-gen diagnostics. We came away from the discussion more positive on single cell sequencing (TXG, NSTG) and the opportunity for liquid biopsy to become an even more useful tool for oncologists. Additionally, we think Illumina is well-positioned to benefit from nearly every tailwind we see: NIH grants, capital flowing into startups, and a return to in-person care. When it comes to Grail vs. Thrive and the pan-cancer opportunity, the data are good enough to warrant the economic studies, but there’s a bit more uncertainty. A few highlights:

  1. The news that the NHS is going to give Grail a shot at a 165k-patient study/pilot is big for Grail/Illumina and the Galleri test. The data generated from the study will be incredibly valuable.
  2. Thrive may benefit from being under Exact Sciences’ umbrella when it comes time to commercialize its CancerSEEK test.
  3. With the ability to do single cell sequencing, circulating tumor cells have become more interesting again. Research effort around understanding what tumor cells do, protein expression, etc. will drive demand for both TXG and NSTG.
  4. Next-generation diagnostics in development will allow for the testing of all sorts of analytes in saliva, interstitial fluid, and tears. We will see some very cool, economical technology emerge in the wake of the COVID-19 pandemic.

CLICK HERE for the video replay.

CALL NOTES

Callouts from Dr. Kelley's experience:

  • Director of a precision medicine initiative; runs a lab w/ about 30 graduate students, post-docs, etc. PhD in chemistry, post-doc at Scripps, focused on molecular biology. For the last 20 years she’s run labs focused on the development of new technology for disease diagnosis - more of an engineer than a chemist, building out devices. Continued focus on infectious disease diagnostics, liquid biopsy, and more recently, platforms for cell therapy development and drug discovery - a broad range. Founder of a few life sciences startups, w/ some commercial experience.

History of circulating tumor cells - why this was an area of growth that faded, and why it’s made a comeback...

  • In the 90s, the technology became available to visualize circulating tumor cells and accurately enumerate/count them. There was a lot of interest initially in trying to use this as a tool for the management or maybe diagnosis of cancer. Interest intensified in the 2000s… engineers got involved, built more sensitive devices to accurately count, etc. After all that work, we realized that a count isn’t that valuable of a measurement (it tells you how long a patient has to live, how advanced the cancer is, but is not clinically actionable). Interest waned because it’s not the information people want.
  • Then, cell free nucleic acids, circulating DNA gained traction. First in prenatal diagnostics - fetal DNA in a mother’s blood; as the approach progressed, it was hypothesized that it could be a mechanism to screen for, diagnose, and/or manage cancer. The first way - tumor mutation detection - helped to classify the molecular profile of tumors based on liquid biopsy (a blood sample vs. needing tumor tissue). It now looks like we’ll be able to screen for cancer - liquid biopsy is groundbreaking/game-changing.
  • What happened recently in the circulating tumor field, behind the scenes, is that we’re really good at looking at - i.e., characterizing and sequencing - single cells. We can gather a huge amount of molecular level information from them (information that DNA doesn’t provide - phenotype, how high is the expression of a given protein). A wealth of information is available: what is the tumor doing, how is it interacting with the immune system, etc. The circulating tumor cell field is resurging given the need for phenotypic information.

We’ve focused on 10x Genomics, but NanoString is there too. Do you think there will be more research there?

  • Absolutely. All the new, targeted therapies are aimed at particular proteins. Flying blind won’t be ok anymore when trying to treat patients. We need information on what the tumor is doing - protein expression is the only way to get at that, and it’s measured based on access to/analysis of those cells.

Do tumors constantly shed whole cells? Is that very routine or different?

  • It varies, but all tissue gives off cfDNA. We have even more if we go out for a run b/c tissues release DNA constantly. As a tumor progresses, it releases more nucleic acids into the bloodstream. The “trick” is to tell them apart - that’s where tumor mutations come in.
  • It’s definitely the case that more advanced tumors = more shedding, more mass = more shedding. The challenge is to get to very early cancers where the tumor is small and there’s not a lot of mass.

Do you see any difference in the technologies w/ how ctDNA is captured/measured? Is one tech better or is there something in the pipeline?

  • To date, we’ve relied on sequencing, processing samples, standard techniques to extract DNA… putting it through an Illumina box or other sequencing technology.
  • The big open question - as we scale up tests - is if sequencing will be financially feasible. Will there be an economic return on a test that costs $1k? I don’t think pricing is determined yet, but there’s not a huge amount of tech looking at automating it, other than sequencing.
  • If we’re going to screen everyone for cancer, we must have a streamlined way to do it.
  • You need big dollars to drive this kind of development - there’s a lot of interesting tech out there, but money poured into Thrive and Grail, and less money has been spent on tech development.
  • Regarding the underlying sequencing technology, Illumina is baked-in for some time to come. They are hard to compete with. The technology is really good, and I would not try to displace that.
  • Nobody wants to stray too far from what’s been done and introduce a new level of uncertainty or risk. Any new tech must be taken to the FDA, justified, etc.

What role do AI and the algorithms play?

  • As the data accumulates, then AI algos can be developed to make the best of what’s in all the data - we’ll get better at turning sequencing-level information into a diagnostic test result.
  • Where the circulating tumor cells come back in = trying to get a handle on what a tumor is doing at the molecular level. A lot of the new drugs are trying to enhance the interaction between tumors and the immune system. We must know exactly what proteins are on the surface of the cell - there’s more justification for looking at intact cells. Then there are the immune cells - liquid biopsy doesn't have to be tumor-derived material. We could be looking at T cells, what types of TCRs are around, what neoantigens they are interacting with - that’s another angle for liquid biopsy that’s behind everything else, but could be very powerful as well.

Big Data - how many people in your lab have python, R, etc. skills?

  • These data sets are immense - we do a lot of work with CRISPR technology, whole genome screens to look for drug targets. You have to have the analytical tools to be able to comb through it. Most students come in and learn these things on the fly. Some go on to machine learning and developing algorithms. Sequencing is normal now, and bioinformatics is becoming part of the tool box.

Do you have a view on Adaptive and immune mapping?

  • It’s very interesting. There’s been a huge amount of activity around the development of T cell-based therapies. We know how to manipulate, transfect w/ genes specific for the tumor, put the cells back… there’s been an explosion of work in that area. If we know how to manipulate T cells, maybe we can develop off-the-shelf T cell-based products that can inform things to look for in liquid biopsy.

In terms of bioinformatics, understanding the full complement of what all the mutations mean and unbelievable variation you can get - do you have a sense for when AI [and liquid biopsy for screening] becomes much more important/routine, part of the clinic?

  • It’s all about getting the data set big enough to power that. The AI only works with a large, large data set.
  • The number of therapeutics coming through pipelines is immense. We are giving clinicians more treatment options given patients’ molecular level profile (of the tumor).
  • It’s been a struggle - people are relying on tumor mutational burden rather than particular mutations.
  • How long for markers to come around beyond progression free survival or overall survival - so the treating oncologist, regulators, etc. are comfortable with everything?
    • Historically, these things have taken a long time. Clinicians in oncology are somewhat conservative… they want to see data, make sure it’s foolproof to give patients the best care possible. At the same time, w/ new cfDNA tests, they are being looked at pretty quickly.
    • The NHS in the UK is looking at piloting this in 2021 (the Galleri test), adoption could be rapid (165k patients). The ramifications are immense if we can shift the stage at which patients are diagnosed. If we take ⅓ of patients and move them up a stage, it’s possible to see a 25% reduction in the death rate. That’s game-changing for individuals and the health care system. Tools to help diagnose cancer earlier will get adopted.

Grail, Thrive, and Freenome - can you share your thoughts on pan-cancer early detection and looking for that needle in the haystack using a wide net? Or is it more likely that we end up with a more focused approach?

  • Grail and Thrive are competing efforts to do more or less the same thing - enable early detection through screening. It’s a huge unmet need. We have made so little progress over decades. We have been taking away screening modalities and not replacing them (holes punched in PSA for prostate, various views on mammography, etc.).
  • Thrive and Grail are homing in. I’m blown away by the data. When Grail first took off and raised a $1B Series B there was a lot of skepticism, but that amount of capital was unheard of [for a Series B round]. Grail had to pivot and focus on methylation patterns as most informative, but the data are compelling. They can get a signal that looks like it’s worth measuring, and the same is true with Thrive.
  • I think we’ll get there - pan-cancer tests are the ultimate and would make it cost-effective to widely deploy.

Could you blend w/ underlying genetic markers? If you get a pan-cancer test every year, three years….

  • Testing everyone every year would be even better. Cancer is treatable when localized, so if you can do a surgical intervention, get in there with radiation, or otherwise limit spread, it’d be great. It’s metastatic disease that kills patients in most cases, not the primary tumor. If we can pull back the metastasis or slow it, then we can have a dramatic impact for patients.

What’s the right sensitivity and specificity for pan-cancer early detection?

  • Ideally, we’d like the performance metrics to be as high as possible - effective, but not generating false positives. Performance is key. Very often, the first prospective studies look great and the data tail off in the general population. That remains to be seen.
  • With initial studies, the health economics work can get going. What is the performance? How many cancer cases are caught? How much money is saved? How many lives saved? With the performance reported now, I think the answers will be positive.
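The point about false positives is easy to see with some back-of-the-envelope arithmetic (illustrative numbers below are our assumptions, not figures from the call): at a low population prevalence, even a highly specific test produces false positives on a similar scale to true positives, which is why screening economics hinge on specificity.

```python
def screening_outcomes(n, prevalence, sensitivity, specificity):
    """Expected counts and positive predictive value (PPV) for a screening test."""
    cases = n * prevalence          # people who truly have cancer
    healthy = n - cases             # people who do not
    true_pos = cases * sensitivity  # cancers the test catches
    false_pos = healthy * (1 - specificity)  # healthy people flagged anyway
    ppv = true_pos / (true_pos + false_pos)  # chance a positive is real
    return true_pos, false_pos, ppv

# Hypothetical inputs: 100k screened, 1% prevalence, 70% sensitivity, 99% specificity.
tp, fp, ppv = screening_outcomes(100_000, 0.01, 0.70, 0.99)
print(round(tp), round(fp), round(ppv, 2))  # prints: 700 990 0.41
```

Even at 99% specificity, false positives (990) outnumber true positives (700) in this sketch, so a positive result is right only ~41% of the time - the kind of trade-off the health economics studies will have to quantify.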

It takes time for diagnostic companies to gain coverage (reimbursement) - Grail is launching in 2021. Are we 3-5 years away from routine use? What are you looking for to get it over the line…

  • The big investments have been made. The NHS study kicks off soon and the idea there = collect data, have it rolled out by 2024. I think the data is good enough to justify expediting this.
  • Right now, we have almost nothing and people are being diagnosed w/ late stage cancer (i.e., lung cancer is mainly diagnosed in stage 4) and others are symptomless until it’s too late.
  • The rollout can start with a risk profile that makes sense - family, behavioral. When prenatal tests came on there was the same debate - adoption went pretty well and moved quickly from 2007 -> 2009. There’s every reason to fast track it.

Grail “pivot” - can you elaborate?

  • I think it was initially assumed that we were going to be able to screen based on tumor-associated mutations, but that signal is “tricky” ... I think Thrive managed to do it w/ their panel because of the proteins incorporated. On tumor mutations alone, it’s hard to get the early ones.
  • Why is that? Early tumors have similar mutation patterns, no?
    • They evolve. More advanced tumors tend to have a larger mutational spectrum, and more DNA is being shed. Those two things make it harder to get at early-stage tumors based on mutations alone. The pivot was to broaden the scope and add methylation; Thrive added protein markers.
  • Freenome has a stronger AI component, which could be very powerful. But a large data set is needed to power the analysis.

Is there an element of winner-take-most?

  • It’s an interesting question. It will be difficult to replicate the amount of money Grail raised and spent. Having a link to Illumina - being back in the fold - makes them tough to compete with. Eventually, the diagnostic paradigm must change. If someone comes forward with a test you can self-administer at home, and consumers can keep track of circulating DNA as digital health takes off, that could change delivery.
  • If Grail and Thrive put the $s in and get something out there that makes a difference, let them have that advantage for a few years. Then, we can come through with something better, more cost-effective, with a strong rationale.

Compare/contrast Grail & Thrive?

  • Composition of panels, inclusion of proteins vs. methylation markers.
  • Next phase - Thrive as part of Exact Sciences, and Grail as part of a sequencing company that’s evolving into a clinical diagnostics company. As the focus shifts through clinical validation and on to sales and marketing, there may be a bit of a competitive advantage due to where the technologies are parked (Exact has the commercial channels).

Minimal residual disease (MRD) monitoring - how about the opportunity for treatment selection or MRD and detecting how a patient is doing? Do you have a view on Natera or other companies/techniques to watch for?

  • It's a different set of problems. The tech is different, it’s all about sensitivity to be able to see recurrence early. Employing the high-powered PCR-based approach is the way to go vs. bulk sequencing. MRD is another important capability - test and know if your patient is ok, no recurrence, or to be able to intervene early if there is recurrence… there’s lots of rationale for it. I think the newer PCR-based tests will win there - PCR has been tough to beat.

Other ways of detecting DNA, COVID highlighted some of that. You’ve been a part of a company developing a new test. Can you tell us about how tech is evolving?

  • When the pandemic hit, I was horrified by what we’re using to enable diagnosis, and we were scrambling for the first two months to get lab tests to keep up with the volume required.
  • We just touched on PCR’s positives, but it was invented in 1983 and we haven’t come up with anything better. That’s the first year personal computers were developed. Relying on PCR to power tests for infectious diseases is scary. If this were a more deadly pandemic, we would have been in big trouble.
  • One thing we do well in my lab is develop sensors - electronic devices that can tell you about the absence or presence of something biological. Historically, we’ve had to add reagents or chemicals to light up a target. There’s a class of reagentless sensors - they wait for a protein, viral particle, or a piece of DNA - that give off an electrochemical signal (an antibody binds the spike protein, slows it down, and we can see the signal change). It’s potentially game-changing - for anything you can develop an antibody for.
    • We developed a sensor that is put in saliva - or in your mouth - and it tells you whether there are SARS-CoV-2 particles present. This is the kind of thing that’s needed. We’re not trying to sample someone via nasal swab, send it to a lab, etc. We won’t be ready in time to help w/ the pandemic, but the door is open to change diagnostic testing. Letting people test themselves at home - not on the sidewalk or at a clinic.

Telemedicine, remote monitoring - restricted to where you can get good diagnostic signal - meter, is there a limit to this technology/concept?

  • Started with saliva, but interstitial fluid would be great to go after. Tears are another one - good biomarker populations in tears (perhaps a contact lens-type device). Cardiac biomarkers would be good to test at home (e.g., troponin).
  • So a G7 or FreeStyle - you could use them and expand to other analytes?
    • Yes, reagentless sensors could bring biomarker data into remote monitoring strategies.
    • It’s not that crowded of a space, yet. There are some enzyme-based sensors built into wearables - marry those with ion sensors or other solid-state sensors. We see a lot of activity in engineering, less on the chemistry/sensor side of things... the world needs this kind of capability out there.

Privately funded players have made big advances - Google’s protein folding (AlphaFold) - is there adequate cross-pollination between academics, grant review, etc.? And, has COVID/Operation WARP Speed changed things?

  • I was on a panel and the founder of Carbon said, “To get a grant proposal reviewed and funded in the US or Canada, you must get everyone on the review panel to like the idea.” All it takes is one person to kill it. If you’re starting companies or have an idea in the private sector, you just need to convince one person that it’s a great idea. There’s not enough free flow of ideas across the industry.
  • With Operation WARP Speed - absolutely. The biomedical sector has saved us from imploding. The impact is immense. It’s bringing more resources - incredible IPOs (Canadian companies). There’s a public recognition, why biomedical research is important. Investors are seeing large exits. That said, I think we succeed because of basic research going on in the background.
  • We’re doing this call on a historic day - the first approved vaccine being administered in the UK. Humans did that - we can counter the threat that the pandemic put upon us.

Speaker Bio

Dr. Shana Kelley is a Distinguished Professor of Biochemistry, Pharmaceutical Sciences, Chemistry, and Biomedical Engineering at the University of Toronto. She received her Ph.D. from the California Institute of Technology and was an NIH postdoctoral fellow at the Scripps Research Institute. Her research interests are the development of new technologies for clinical diagnostics and drug delivery. Dr. Kelley’s work has been recognized with a variety of distinctions, including being named one of ‘Canada’s Top 40 under 40’, an NSERC E.W.R. Steacie Fellow, and the 2011 Steacie Prize, among others. She is an inventor on over 50 patents issued worldwide and a founder of three molecular diagnostics companies: GeneOhm Sciences (acquired by Becton Dickinson in 2005), Xagenic Inc. (acquired by General Atomics in 2017), and Cellular Analytics.

Please send us questions or feedback at .

Thomas Tobin
Managing Director



Justin Venneri
Director, Primary Research



William McMahon
Analyst

