Big Data: a new therapeutic option? Part 4: Predict or cure?

08 December 2015

5 min read
Alexandre Templier

Personalized medicine. Predict or cure: do we need to choose?

Following Part 3: Data as raw material

4) Personalized medicine: predict or cure – should we choose?

Even though it is often written that “personalized medicine is already a reality in some cases”, or that “personalized medicine is within reach”, it is fair to wonder whether personalized medicine is really a matter of the near future. There are indeed a number of targeted therapies, mostly in oncology, which enable – for some types of cancers – an individualized treatment, according to the genetic and biological specificities of the patients’ tumors. Notable examples include the monoclonal antibody trastuzumab (Herceptin®), indicated in breast cancers that overexpress the HER2/neu protein, and the RAS genes (KRAS and NRAS), whose mutation status makes it possible to anticipate the response of colorectal tumors to anti-EGFR treatments. These rare examples are nothing but an evolution of the 1980s practice of hormone therapy for breast cancer, targeting cellular estrogen receptors (ER) or progesterone receptors (PR).

In other words, these markers make it possible to significantly increase the chances of response for a portion of patients, based on distinct biological characteristics, and to gain, on average, a few months of survival. Patients without any of the known markers have a lower chance of responding to therapy, and shorter survival times (e.g. triple-negative breast cancer). Imatinib (Glivec®) is a very effective treatment for chronic myelogenous leukemia – as it inhibits the tyrosine kinase activity of the chimeric BCR-ABL protein produced by the disease-specific t(9;22) translocation – aiming at both the marker and the disease target. Unfortunately, this is a relatively isolated case.

Other recent examples of biomarkers that predict therapeutic response (e.g. vemurafenib/BRAF V600E, crizotinib/ALK, Tafinlar®/Mekinist®/THxID BRAF …) are also individual markers. They too cover only a minor part of the indication in question, confer only a relative gain in efficacy compared to non-targeted treatment, and, all in all, are not very numerous. Finally, biosignatures, of which Mammaprint® and OncotypeDx® are the main representatives, are nowadays essentially dedicated to diagnosis and prognosis, and only marginally to the targeting of treatments. Their sensitivity is around 80% (80% of relapses are effectively anticipated by the test), and their specificity is around 30-40%. This means that it is possible to detect 80% of relapses in advance, at the cost of a relatively high proportion of false positives. Remember that a systematically and blindly positive test would have a sensitivity of 100% – as it would detect all the positives (true positives) – and a specificity of 0%, as it would also flag all the negatives (false positives) as positives.
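To make these figures concrete, here is a minimal sketch in Python of how sensitivity and specificity are computed. The counts are made up for illustration (they come from no real study), chosen to fall in the ranges quoted above:

```python
# Minimal sketch of the sensitivity/specificity trade-off described above.
# All counts are made up for illustration; they come from no real study.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of actual positives (e.g. relapses) the test detects."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of actual negatives the test correctly leaves alone."""
    return tn / (tn + fp)

# A hypothetical cohort: 100 relapses, 400 non-relapses.
# A signature in the ranges quoted above: 80% sensitivity, 35% specificity.
print(sensitivity(tp=80, fn=20))    # 0.80 -> 80 of 100 relapses anticipated
print(specificity(tn=140, fp=260))  # 0.35 -> 260 of 400 non-relapses flagged too

# The "systematically and blindly positive" test from the text: it flags
# everyone, so no relapse is missed (fn = 0) and no negative is spared (tn = 0).
print(sensitivity(tp=100, fn=0))    # 1.00
print(specificity(tn=0, fp=400))    # 0.00
```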

Even though biological measurements have never been so rich and sophisticated, and medical data never so abundant, how is it that we are still unable to treat the right patient with the right molecule, at the right time, and at the right dose? Why are drug response rates so low (on average ~25% in oncology, ~30% for Alzheimer’s disease, ~48% for osteoporosis, ~57% for diabetes, ~60% for asthma, …)? (Paving the Way for Personalized Medicine: FDA’s Role in a New Era of Medical Product Development, October 2013)

The answer, rather than being biotechnological, is probably methodological or even epistemological. To simplify: individual markers are identified by means of univariate (or low-dimensional multivariate) analyses which only very rarely (cf. imatinib) manage to account for the complexity of living organisms, while biosignatures are based on multivariate mathematical models whose function is to globally predict a phenomenon, without taking any interest in the biases contained in the data from which they were built. Yet the only way to effectively exploit clinical and biological data is to identify, characterize and exploit the biases they contain, treating them as so many hypotheses to be validated in other data sets of the same nature. Bias is to be understood here as the multiple patient profiles, defined by combinations of characteristics specific to them, that locally show particularly high (or particularly low) therapeutic response rates. This amounts to artificially reproducing the learning process of the human expert, who, through experience, builds a set of rules that enable him to make choices according to the situations he encounters, and who constantly questions those rules as he learns from new experiences.
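As a rough illustration of what exploiting these local biases could look like – a brute-force sketch over hypothetical binary patient characteristics, not any specific published method – one can enumerate small combinations of characteristics and keep those whose local response rate departs strongly from the global rate:

```python
# A rough, brute-force sketch of the subgroup idea described above:
# enumerate small combinations of patient characteristics and keep those
# whose local response rate departs strongly from the global rate.
# Feature names and data are hypothetical; a real analysis would add
# statistical controls and validation on an independent data set.
from itertools import combinations

patients = [
    # (set of binary characteristics, responded to treatment?)
    ({"HER2+", "ER+"}, True),
    ({"HER2+", "ER-"}, True),
    ({"HER2-", "ER+"}, False),
    ({"HER2-", "ER-"}, False),
    ({"HER2+", "ER+"}, True),
    ({"HER2-", "ER+"}, True),
    ({"HER2-", "ER-"}, False),
    ({"HER2+", "ER-"}, False),
]

features = sorted(set().union(*(f for f, _ in patients)))
global_rate = sum(r for _, r in patients) / len(patients)

# Enumerate profiles made of one or two characteristics.
for size in (1, 2):
    for profile in combinations(features, size):
        subgroup = [r for f, r in patients if set(profile) <= f]
        if len(subgroup) < 2:   # ignore empty or tiny subgroups
            continue
        local_rate = sum(subgroup) / len(subgroup)
        if abs(local_rate - global_rate) >= 0.25:
            # Each such profile is a hypothesis, not a conclusion.
            print(profile, f"n={len(subgroup)}", f"rate={local_rate:.2f}")
```

Each profile that survives this screen would then be treated exactly as the text suggests: as a hypothesis to be re-validated on another data set of the same nature before any clinical use.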

Few experts have the ability to reliably predict a phenomenon as complex as the response to a treatment from experience alone, however long that experience may be. Many more have simply tried to do their best in the situations they encountered, and to humbly build on the results, without ever claiming to have looked at the question from all angles.

Why would personalized medicine be fundamentally different from medicine in its principles? After all, we are talking about personalized medicine, not automatic medicine.

Let us assume, as is probable, that biosignatures will soon make it possible to reliably predict – i.e. with high sensitivity and specificity – the evolution of diseases and the response to treatments. Will doctors agree to make decisions without understanding the origin of the disease, or without having ascertained its cause?

Nowadays, citizens are sometimes amazed and sometimes irritated by advertising banners offering products selected by a machine-made prediction based on their browsing and purchasing behavior. Will these same citizens agree to rely on such machines when their health is at stake? As mentioned earlier, some women do not hesitate to undergo removal of their breasts or ovaries because a commercial blood test report has predicted a higher-than-average risk of cancer. But is this really a sustainable phenomenon? How can we prove, in the medium or long term, that these women were right? How can we know that the prediction was true? In medicine as elsewhere, selling red lights is always easier than selling green lights. The FDA was not mistaken when it recently reminded the company 23andMe of its regulatory obligations (http://www.livescience.com/41534-23andme-direct-to-consumer-genetic-test-shortcomings.html).

Data is undoubtedly a formidable therapeutic weapon. But it will only hit its target at the price of a profound paradigm shift: detecting and exploiting the biases that exist in the data, as so many possible models to be validated, rather than simply assuming that these biases do not exist in order to build a global predictive model founded on them without knowing their nature. It is time to search for effect sizes where they can be found – even if only in subsets of patients – in order to act in a targeted and effective way, rather than settling for undifferentiated predictions that, at best, enable standardized responses that are necessarily less effective.

Find out more in the next episode: Big Data: a new therapeutic option? Part 5: The future of pharma
