by Paul Whittaker
4 February 2020
The reductionist, target-driven approach to drug discovery, fuelled by the sequencing of the human genome, omics technologies and genetic studies, has not been as successful in generating new therapies as was initially hoped. Sixty percent of drugs fail in clinical trials due to lack of efficacy, because the underlying therapeutic concept is flawed. This weakness in hypothesis generation stems from gaps in our understanding of human disease biology and drug target validation. So I was interested to attend the ELRIG Drug Discovery 2019 conference, entitled “Looking Back to the Future”, held at the ACC in Liverpool on 5-6 November 2019, and catch up on the latest thinking and approaches to tackling these issues.

With 8 topic-specific tracks across two days, plus plenary talks, poster sessions and an exhibition featuring 100 companies showcasing their latest drug discovery aids, I was only able to attend a selection of what was on offer. In this post, I will therefore concentrate on the talks I attended in the sessions dealing with artificial intelligence, cellular models of disease and biomarker strategies in drug discovery. But first, I’ll start with the three plenary talks by Mene Pangalos (AstraZeneca), Fiona Marshall (MSD UK Discovery Centre) and Melanie Lee (LifeArc), who each gave their perspective on the current issues faced in the discovery of new drugs and how improvements might be made.

Plenary Talks

AstraZeneca’s 5Rs framework has already resulted in a 4-fold improvement in clinical trial success rates. In the first plenary talk of the conference, Mene Pangalos explained how AZ aim to improve on this by rigorous drug target selection and validation using data science and artificial intelligence, as well as technologies such as CRISPR and multi-modal molecular mass spectrometry imaging.
Artificial intelligence, in particular, is being leveraged in a number of areas across the drug discovery process in an attempt to make the design-make-test-analyse (DMTA) cycle more efficient and effective. AZ are also expanding the number of therapeutic modalities beyond the trinity of small molecule, antibody and peptide approaches to include anticalin proteins, proteolysis-targeting chimeras (PROTACs), antisense oligonucleotides and bicyclic peptides, amongst others.

Neurodegenerative diseases such as Alzheimer’s disease (AD) have been particularly challenging for the development of new drugs. Only two classes of drugs are currently approved for therapeutic use in AD (acetylcholinesterase inhibitors and NMDA receptor antagonists). These drugs lessen symptoms such as memory loss and confusion, but are not disease modifying. Fiona Marshall explained how the lack of progress in developing new AD therapies is largely due to poor mechanistic understanding of AD, as well as the poor predictability of disease models. Drugs based on the genetics-driven amyloid hypothesis have failed to show efficacy in clinical studies, and a recent report suggests that high levels of brain amyloid alone are not sufficient to cause AD. As a result, clinical trials testing interventions aimed at other drug targets are currently in progress. Whether the failure of trials of anti-amyloid drugs was due to selecting the wrong drug dosages, the wrong patients, or other reasons is unclear. However, future success will require biomarkers, neuroimaging and brain activity monitoring, so that drugs with the right mechanism of action can be tested in the right patients at the right stage of the disease. The translation of drugs from pre-clinical to clinical testing is clearly an inefficient process that will undoubtedly benefit from well-validated therapeutic opportunities.
However, Melanie Lee cautioned that, in addition, future products will also need to carry richer data packages, including information on which patient sub-groups to target, as well as companion diagnostics. There will also be an emphasis on diagnosing patients earlier in their disease course, as current points of intervention tend to be late in the disease trajectory. So, in addition to targeted interventions, surveillance screening will be very important. For example, Oncimmune’s EarlyCDT-Lung test can detect lung cancer four or more years before clinical diagnosis. Future improvements in diagnosis, treatment and patient outcomes may also come from crowd-sourcing approaches.

Cellular Models of Disease

The lack of preclinical models that faithfully mimic key aspects of human disease biology in patients has long been an Achilles heel of the drug discovery process. The Holy Grail is to have models that are better at predicting clinical success and drug side effects. Organoids derived from adult stem cells, differentiated embryonic stem cells and induced pluripotent stem cells (iPSCs), together with precision genome engineering via CRISPR, offer new opportunities for the generation of diseased and healthy cell types that mimic at least some aspects of the disease in vitro. There is a lot of excitement about using patient-derived iPSCs to overcome the constraints of limited access to viable human tissue and poorly translatable animal models, by enabling the generation of large, reproducible quantities of biologically relevant cells from healthy and diseased individuals.

Paul Andrews (National Phenotypic Screening Centre) reviewed how phenotypic screening by high-content imaging of organoids and iPSC-derived cells is being used to marry “old style” (physiology-driven) and “new style” (target-driven) drug discovery approaches. Phenotypic screening makes no assumptions about the target and limited assumptions about the mechanism of action.
The use of iPSCs in phenotypic screening will be aided by: the development of best practices for iPSC disease models; the mapping of cell phenotypes to genotypes with single-cell genomics; studying how genetic variation affects cell behaviour by integrating different omics data sets from human iPSCs; the development of well-characterised collections of iPSC lines for the research community; and the development of a collection of cellular reference maps for all the cell types in the human body.

There are no effective therapies to treat glioblastoma (GBM), the most common type of brain tumour. Surgery, radiotherapy and chemotherapy, even when combined, only increase survival by a year on average. Developing clinically effective treatments has been a challenge, despite increasing genomic and genetic knowledge. Steven Pollard (Centre for Regenerative Medicine, Edinburgh) discussed how patient-derived models, genome editing and high-content phenotypic screening are being used to accelerate drug discovery for GBM. GBM stem cells (which have molecular hallmarks of neural stem cells) and non-transformed neural stem cells have been used as patient-derived models to identify tumour-specific vulnerabilities via genetic screens or cell-based drug discovery. In addition, the glioma cellular genetics resource is generating a toolkit of cellular reagents and data to expedite research into the biology and treatment of GBM.

Wendy Rowan outlined GSK’s approach to developing fit-for-purpose cellular models, in which models are scored against sets of criteria so that the most appropriate model(s) can be selected for the research question(s) being asked. Full characterisation of cellular models, with respect to how well they reflect healthy and diseased human tissue physiology, using “due diligence checklists” is now seen by GSK as key to improving drug discovery. For any given drug target, several cellular models may be used to progress the target from validation to candidate selection.
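As a loose illustration of this criteria-based selection idea, a fit-for-purpose assessment can be reduced to a weighted checklist. The sketch below is hypothetical: the criteria, weights, model names and scores are all invented for illustration and do not reflect GSK’s actual checklists.

```python
# Hypothetical fit-for-purpose scoring of cellular models against weighted
# criteria. Criteria, weights and scores are invented for illustration only.

# Weight each criterion by how much it matters for the research question.
CRITERIA_WEIGHTS = {
    "recapitulates_disease_phenotype": 3.0,
    "human_tissue_physiology": 2.0,
    "throughput": 1.0,
    "reproducibility": 2.0,
    "cost": 0.5,
}

def score_model(scores: dict) -> float:
    """Weighted sum of per-criterion scores (each rated 0-5)."""
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0) for c in CRITERIA_WEIGHTS)

def rank_models(models: dict) -> list:
    """Return model names ranked by descending fitness-for-purpose score."""
    return sorted(models, key=lambda name: score_model(models[name]), reverse=True)

# Invented due-diligence scores for three candidate model systems.
candidates = {
    "2D cell line": {"recapitulates_disease_phenotype": 1, "human_tissue_physiology": 1,
                     "throughput": 5, "reproducibility": 5, "cost": 5},
    "patient iPSC-derived": {"recapitulates_disease_phenotype": 4, "human_tissue_physiology": 4,
                             "throughput": 3, "reproducibility": 3, "cost": 2},
    "organ-on-a-chip": {"recapitulates_disease_phenotype": 5, "human_tissue_physiology": 5,
                        "throughput": 1, "reproducibility": 2, "cost": 1},
}

ranking = rank_models(candidates)
```

With these invented weights, the more physiologically faithful models rank highest; in practice the weights would shift with the question being asked, e.g. a primary screen would weight throughput far more heavily.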
GSK are developing cellular models based on organoids and iPSCs, and are even assessing organ- and body-on-a-chip approaches based on microfluidic technology.

Artificial Intelligence (AI) and Machine Learning (ML)

As mentioned earlier, AstraZeneca are incorporating AI throughout the drug discovery process. Werngard Czechtizky explained how AZ are incorporating AI into medicinal chemistry by developing algorithms for reaction/route prediction, chemical space generation and affinity/property prediction, for low molecular weight compounds in the first instance, before potentially expanding to other therapeutic modalities. The aim is to reduce costs, time, resources and the number of compounds tested (from around 2,000 compounds to fewer than 500) over a 2-3 year horizon. In hit-to-lead optimisation, ML is being used for augmented design, synthesis prediction, analytics and automated DMTA.

The extraction of biologically meaningful signals from large, diverse omic data sets for target discovery is a major challenge. Michael Barnes (William Harvey Research Institute) described how ML and AI are being used to support drug discovery and drug repositioning from genome-wide association study data using a TensorFlow framework. Over a thousand genetic loci affecting blood pressure have been identified, and these data have been used to train a TensorFlow algorithm to identify new BP genes. In human population genetics, ML is being used to identify benign human knockouts from exome sequencing data, as potentially safer drug targets with fewer side effects. In personalised healthcare, ML is being used to develop multi-omic predictors of response to biologic therapies.

Biomarker Strategies for Drug Discovery

Oncology leads the field in the development of biomarkers for drug development and clinical testing.
Development of biomarkers for other disease indications lags behind, facing challenges ranging from sample access and quality, to the resolution and sensitivity of detection technologies, to the difficulty of measuring low-abundance proteins in plasma. In this session, technological approaches to biomarker detection and measurement were reviewed by a range of speakers from industry and academia.

Label-free detection methods exploit molecular biophysical properties to monitor molecular presence or activity. Their main advantage is the elimination of tags, dyes, specialised reagents and engineered cells. This means that more direct information can be acquired about molecular events, minimising the artefacts created by the use of labels. Molecular events can also be tracked in real time, and native cells can be used for greater biological relevance. Peter O’Toole (University of York) reviewed how label-free microscopy can complement and enhance omic and biochemical data: it perturbs cellular systems minimally, is quantitative and allows prolonged live-cell imaging. Ptychography (a computational method of microscopic imaging) does not rely on the object absorbing radiation, so if visible light is used to illuminate the object, cells do not need to be stained or labelled to create contrast. This allows the collection of cell morphological data during apoptosis and cell division, as well as observation of the behaviour of individual cells.

Understanding the distribution, metabolism and accumulation of drugs in the body is a fundamental part of drug development. Multi-modal molecular mass spectrometry imaging (MSI) allows label-free analysis of endogenous and exogenous compounds ex vivo, by imaging the surface of tissue sections taken from fresh-frozen samples.
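A quick aside on why computational, label-free methods such as ptychography are needed for unstained cells: a transparent specimen acts as a pure phase object, so an ordinary intensity image of it is featureless even though its morphology is fully encoded in the phase of the light. The NumPy sketch below, in which a synthetic Gaussian phase bump stands in for a cell, illustrates the point; it is a toy model, not an implementation of ptychographic reconstruction.

```python
import numpy as np

# Toy model of an unstained, transparent cell as a "pure phase object":
# it delays the phase of the illuminating light but absorbs nothing.
# The "cell" here is just a synthetic Gaussian phase bump (in radians).
y, x = np.mgrid[-32:32, -32:32]
phase = 1.5 * np.exp(-(x**2 + y**2) / (2 * 12.0**2))

# Exit wave just after the specimen: unit amplitude, spatially varying phase.
exit_wave = np.exp(1j * phase)

# A conventional bright-field detector records only intensity |psi|^2,
# which for a pure phase object is uniform: the image is featureless.
intensity = np.abs(exit_wave) ** 2
intensity_contrast = intensity.max() - intensity.min()  # ~0, no contrast

# The morphology lives entirely in the phase, which computational methods
# like ptychography recover from diffraction measurements.
phase_contrast = phase.max() - phase.min()  # ~1.5 radians of usable signal
```

Ptychography recovers that phase computationally from overlapping diffraction measurements, which is why contrast can be obtained without stains or labels.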
Gregory Hamm explained how AZ is using MSI to study the abundance and spatial distribution of drugs and their metabolites within biological tissue samples; MSI is also being used for model characterisation.

Idiopathic pulmonary fibrosis (IPF) is a lung disease that results in scarring of the lungs and causes a progressive and irreversible decline in lung function, with an average life expectancy of four years after diagnosis. Currently, only nintedanib and pirfenidone have been approved for the treatment of IPF, despite numerous phase II and III trials in the past 25 years. This failure is due to a lack of understanding of the disease mechanism; the lack of predictability of preclinical animal models; and the lack of biomarkers to diagnose the disease and monitor response to drug therapy. Sally Price described how the development of biomarkers for IPF is a strategic focus for the Medicines Discovery Catapult, in its efforts to develop novel anti-fibrotics. The MDC is working on new models, such as organ-on-a-chip and 3D organoid models, as well as applying a range of technologies to identify and develop biomarkers for fibrosis. Simon Cruwys (TherapeutAix) talked about how a fibrosis extracellular matrix biomarker panel in serum had been used to develop an ex vivo tissue model of IPF.

Amyotrophic lateral sclerosis (ALS), also known as motor neurone disease (MND) or Lou Gehrig's disease, is a clinically heterogeneous neurodegenerative disease that causes the death of the neurons controlling voluntary muscles. Most sufferers eventually lose the ability to walk, use their hands, speak, swallow and breathe. Andrea Malaspina (Queen Mary University of London) discussed the search for biomarkers for ALS. The development of new therapies for ALS has been limited by a poor understanding of the molecular mechanisms underlying the disease, resulting in the failure of a large number of clinical studies.
Proteomic experiments in individuals with significant differences in prognosis and survival, sampled at different time points in disease progression, have identified potential biomarkers such as neurofilaments and proteins involved in the humoral response to axonal proteins and in axonal regeneration. Natural history studies, clinical trials and a biological repository are being used as sources of tissue for biomarker identification and qualification. With regard to Parkinson’s disease, depression, loss of sense of smell and constipation are clinical features that often precede the motor symptoms of PD. Clinical observations are therefore being used to identify biomarkers that track these symptoms in patients, for use in preventive neurology.

Although a cell’s proteome contains a lot of biologically and therapeutically useful information, proteome analysis has lagged behind genome and transcriptome analysis. This is due to the complexity of the proteomes of mammalian cells, tissues and body fluids, and the wide dynamic range of protein concentrations encountered. The emergence of more sophisticated mass spectrometry (MS) technology in the past decade, with higher resolution and faster scan rates, has enabled complex proteomes to be identified more completely and with shorter analysis times. As a result, Ian Pike (Proteome Sciences Plc) explained, mass spectrometry-based proteomic platforms are being used increasingly for therapeutic protein analysis; target identification and deconvolution; biomarker identification; analysis of target engagement; systems biology; and clinical studies. Ian presented a couple of case studies in which MS had been applied to the study of pancreatic cancer and to plasma biomarker discovery in IPF.

Finishing the biomarker session, Chantal Bazzenet (Evotec) talked about the portfolio of assays that Evotec have developed to aid the development of therapies for Huntington’s disease.
Patients suffer uncontrolled movements, emotional problems and loss of cognition. This progressive brain disorder is caused by a mutation in the huntingtin (HTT) gene: the wild-type protein is monomeric, but the mutant protein aggregates and accumulates in neurons, affecting normal neuronal function. Evotec have developed assays to measure total and mutant HTT protein in mouse and human tissues.

Comment

Discovering new drugs is challenging, and that will continue to be the case for the foreseeable future. Central to the whole drug discovery process is establishing the biological and disease relevance of a particular drug target. It is sobering to consider that it took over two decades after the defective genes causing cystic fibrosis (CF) and Duchenne muscular dystrophy (DMD) were identified before the first FDA-approved drugs (Ivacaftor for CF and Eteplirsen for DMD) became available to treat subsets of patients carrying specific mutations. My personal view is that target validation should be called target qualification, as a drug target is not truly validated until therapies based on the drug target hypothesis are shown to work in clinical trials. As I mentioned in the introduction to this post, this is not the case for 60% of pre-clinically “validated” targets...

In concert with the efforts to produce better drug targets and therapeutic hypotheses, it is clear that biomarkers for disease characterisation, early detection of disease, determining the trajectory of disease progression, patient selection for drug testing and patient response to therapy will be just as important for future clinical success as qualified drug targets. Interventions at earlier stages of the disease process are also required, so that new drug therapies for common complex diseases are disease-modifying, or even curative, rather than merely symptomatic.
What is clear is that modern drug discovery requires a multi-disciplinary approach employing a number of different technologies, from omics to CRISPR gene editing and everything in between. In turn, this means that ever more complex data sets are being generated, presenting challenges not just in analysis, but in interpretation and knowledge extraction. AI will certainly have a key role to play in the data science arena, as well as in making the DMTA cycle more efficient and effective. However, the hypothesis-free approach that typifies the omics era of drug discovery can mean that the wrong data sets are generated and analysed; no matter how “smart” the algorithm used for data analysis, the outputs will not then be therapeutically relevant. The focus on rigour and quality being pursued by pharma companies such as AZ, in everything from understanding the disease biology to better target qualification, can therefore only be a good thing. What the impact on clinical success rates will be is uncertain at this stage, so it really is a case of watch this space…