clinical trials 2 - valley of death and drug discovery Flashcards
valley of death + issues causing it
translation from basic science to human studies
basic science (academia) –> translational science (the valley of death) –> clinical science (industry)
issues:
- reproducibility
- clinical relevance
- structural
- funding
- time
- follow through
- risk
translational research
valley of death = the gap in translational research between academia and industry
research has to be done to bridge this gap
4 ways drug discovery process can be initiated
different ways discovery can start:
- new insight into a disease process
- trying to design a product to stop/reverse its effects (a target)
- tests/screening of compounds to find possible beneficial effects against a disease
- existing treatments have an unexpected effect on a new disease - if they are already proven safe, they can more easily be tested elsewhere - repurposing
thousands of compounds may be potential candidates for development at this stage
after early testing, only a small number of compounds will look promising and call for further study
method of drug screening (5 steps)
target validation:
- genetic, cellular, and in vivo experimental models
compound screening:
- HTS (high throughput screening) and selective library screens
- iterative, directed compound synthesis to improve compound properties
secondary assays:
- study compound interaction with the target
- in vitro and ex vivo (tissue taken from organism and studied) secondary assays (mechanistic)
- selectivity and liability assays
in vivo analysis:
- compound pharmacology
- disease efficacy models
- early safety and toxicity studies
human:
- preclinical safety and toxicity package (the attrition across these stages is sketched below)
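the five steps above act as a funnel, each stage thinning the compound pool
a minimal Python sketch of that funnel - the scores and cutoffs are invented for illustration, not taken from the cards:

```python
import random

random.seed(0)

# hypothetical compound library: each compound gets random scores
# (all values and cutoffs below are invented for illustration)
library = [
    {"id": i,
     "potency": random.random(),
     "selectivity": random.random(),
     "in_vivo_efficacy": random.random()}
    for i in range(100_000)
]

# stage 1: primary HTS keeps only compounds above a potency cutoff
hits = [c for c in library if c["potency"] > 0.99]

# stage 2: secondary/selectivity assays thin the hits further
selective = [c for c in hits if c["selectivity"] > 0.9]

# stage 3: in vivo efficacy models leave a handful of candidates
candidates = [c for c in selective if c["in_vivo_efficacy"] > 0.8]

print(len(library), "->", len(hits), "->", len(selective), "->", len(candidates))
```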
improvement of drug screening over time
high throughput screening (HTS)
100,000s of compounds tested quickly
1994 - 96 wells per tray
2000 - 1536 wells per tray –> therefore 200,000 compounds tested a day (throughput arithmetic sketched below)
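a quick back-of-envelope check on those throughput figures - the plates-per-day number is derived here, not stated on the card:

```python
# well counts and the 200,000/day figure come from the card above;
# plates-per-day is back-calculated from them
wells_1994 = 96
wells_2000 = 1536
compounds_per_day = 200_000

plates_per_day = compounds_per_day / wells_2000   # ~130 plates/day
print(f"{plates_per_day:.0f} plates of {wells_2000} wells per day")

# the same number of 96-well trays would cover far fewer compounds
print(f"{plates_per_day * wells_1994:,.0f} compounds/day at 96 wells/tray")
```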
drug discovery - development considerations (6)
once compound is identified, need to think about:
- how it is absorbed, distributed, metabolised, excreted
- potential benefits and mechanisms
- dosage and toxicity –> legal requirement to increase the dose given to animals until 50% of them die, which gives the median lethal dose (LD50) - see the dose-response sketch after this list
- drug administration
- interaction with other drugs/treatments
- comparison to existing drugs
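a minimal sketch of estimating the LD50 by fitting a logistic dose-response curve - the doses and mortality fractions are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical dose-mortality data (mg/kg -> fraction of animals dying);
# these numbers are invented, not from any real study
dose = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
mortality = np.array([0.0, 0.05, 0.15, 0.40, 0.65, 0.90, 1.0])

def logistic(log_dose, log_ld50, slope):
    """two-parameter logistic dose-response curve on log-dose"""
    return 1.0 / (1.0 + np.exp(-slope * (log_dose - log_ld50)))

params, _ = curve_fit(logistic, np.log(dose), mortality, p0=[np.log(10), 1.0])
ld50 = np.exp(params[0])   # dose where the fitted curve crosses 50% mortality
print(f"estimated LD50 ~ {ld50:.1f} mg/kg")
```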
in vitro and in vivo testing
before testing on people
in vitro = cells in a test tube are affected by the treatment
in vivo = small animals like mice, or non-human primates in later-stage brain disease studies
strict guidelines for pre-clinical laboratories –> good laboratory practice (GLP) - standardised approaches and methods
Good Laboratory Practice (GLP)
minimum basic requirements to make sure all labs are run the same:
- study conduct
- personnel – training of staff
- facilities and equipment – safe and rigorously checked
- written protocols for all experiments
- standard operating procedures – minimising experimenter error
- clearly written study reports
- quality assurance oversight for each programme of work - essentially ethical approval
preclinical studies - what they find
usually small studies
provide detailed info on dosing and toxicity levels
after this, findings are reviewed and it is decided whether to proceed to clinical trials
why clinical studies fail - example from stroke research
accurate and repeatable strokes can be induced in rodents
literature review showed:
- 800 drugs tested in animal models
- 500 of these reduced the effects of stroke
- 100 went to clinical trials
- 1 became a treatment (stage-wise attrition computed in the sketch after this card)
another study found:
randomised and blinded experiments had less favourable results
of 100 studies examined, only 36% were randomised and 11% blinded –> both are routine in clinical trials
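the stroke funnel restated as stage-wise attrition - this sketch uses only the numbers from the review cited above:

```python
# numbers from the literature review on the card above
stages = [("tested in animal models", 800),
          ("reduced effect of stroke", 500),
          ("entered clinical trials", 100),
          ("became a treatment", 1)]

# survival rate from each stage to the next
for (name, n), (_, n_prev) in zip(stages[1:], stages[:-1]):
    print(f"{name}: {n}/{n_prev} = {n / n_prev:.1%}")

print(f"overall: {stages[-1][1] / stages[0][1]:.3%} of tested drugs became a treatment")
```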
why clinical studies fail - Alzheimer’s disease example (5)
billions of £ invested but no disease-modifying Alzheimer's drug has been developed
why trials have failed:
- wrong target - focus on beta-amyloid plaques, but it could be something else, like blood supply
- interventions too late - damage already done
- early biomarkers needed
- trials are often 5 years, could need to be longer - but this increases cost
- not easy problem to solve
large cohort human studies may help in early biomarker detection
biggest predictors of Alzheimer’s
- age
- hearing loss
- lifestyle
large cohort study - Alzheimer’s Disease Neuroimaging Initiative (ADNI) in USA
7 tests + budget
one of the largest cohort studies
budget so far = $218mil
multimodal data from elderly controls and AD patients
data from smaller labs given to large clinical centres to build the large cohort (multimodal join sketched after this card)
tests:
- history of health and education
- neuropsychological tests
- genetic testing for risk factors
- lumbar puncture - cerebrospinal fluid measurement
- MRI and fMRI scans
- PET for glucose consumption, Tau and Beta amyloid
- post mortem histology
desired outcome of ADNI:
- a complete timeline of disease progression, with the potential to detect early biomarkers and critical time points for intervention
currently 3393 papers published from ADNI studies - still growing
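a minimal pandas sketch of the kind of multimodal join a cohort like ADNI enables - the rows and column names are invented (real ADNI data is distributed via adni.loni.usc.edu with its own schemas):

```python
import pandas as pd

# each modality is a table keyed on a subject ID; the cohort is their join
# (all values below are made up for illustration)
demographics = pd.DataFrame({"subject_id": [1, 2, 3],
                             "age": [72, 68, 75],
                             "years_education": [16, 12, 14]})
csf = pd.DataFrame({"subject_id": [1, 2, 3],
                    "abeta42": [192.0, 301.5, 210.2],
                    "tau": [93.1, 60.4, 88.7]})
mri = pd.DataFrame({"subject_id": [1, 2, 3],
                    "hippocampal_volume_mm3": [6800, 7400, 6500]})

cohort = (demographics
          .merge(csf, on="subject_id")
          .merge(mri, on="subject_id"))
print(cohort)
```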
late onset Alzheimer’s disease (LOAD)
80% of people with Alzheimer's have LOAD
takes advantage of the vast amount of ADNI data - a freely accessible database on which analyses can be run
looks at 7000 brain scans and all the data on those patients
takes data from ADNI that is believed to relate to Alzheimer's
findings:
- produces a measured (NOT theoretical) timeline of the disease
- the cerebrovascular system is affected first
- this might therefore be an early biomarker, so drugs could be used earlier
future of clinical trials
AI can be used to design clinical trials - globally there is a big gamble on AI at the moment
all about increasing the efficiency of every step in the process - even a small gain at each step will save millions in money and time
more targeted approaches to select patient populations - genome studies
lowering dropout rates to keep statistical power (power sketch below)
making sure patients stick to the trial protocol
use of electronic medical records
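a short sketch of why dropout erodes statistical power, using statsmodels - the effect size and enrolment figures are assumptions, not from the cards:

```python
from statsmodels.stats.power import TTestIndPower

# two-arm trial with an assumed standardised effect size (Cohen's d)
analysis = TTestIndPower()
effect_size = 0.3
enrolled_per_arm = 200

for dropout in (0.0, 0.1, 0.2, 0.3):
    n = enrolled_per_arm * (1 - dropout)   # completers per arm
    power = analysis.power(effect_size=effect_size, nobs1=n,
                           alpha=0.05, ratio=1.0)
    print(f"dropout {dropout:.0%}: n={n:.0f}/arm, power={power:.2f}")
```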