Predicting a Cancer’s Behavior—The Cloudy Crystal Ball

L. Michael Glode, MD, FACP, FASCO

Feb 03, 2015

In the past decade, a number of labs and companies have developed techniques to further define how a patient with prostate cancer, or suspected cancer, is likely to do. There are at least a dozen tests that are in various stages of commercialization, and it would be difficult to do justice to all of them here. However, I will at least describe the general approach and the potential utility and hazards of some of them.

First the mechanics:

If you take a needle biopsy specimen or a prostate that has been surgically removed, you can microdissect the cancer out of the paraffin block and extract RNA from it. By quantitating the amount of RNA for each gene, you can get some idea of whether certain genes are being overexpressed or underexpressed in that specimen.
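To make this concrete, here is a minimal sketch in Python of how one might flag over- and underexpressed genes by comparing a tumor specimen’s RNA levels with a reference profile. The gene names, expression values, and 2-fold cutoff are invented for illustration and are not taken from any commercial assay.

import math

# Hypothetical RNA abundance (arbitrary units) for a few genes in one tumor
# specimen and in a pooled normal-prostate reference.
tumor = {"MKI67": 820.0, "PTEN": 95.0, "AR": 410.0, "TP53": 300.0}
reference = {"MKI67": 200.0, "PTEN": 310.0, "AR": 390.0, "TP53": 290.0}

FOLD_CUTOFF = 2.0  # illustrative threshold for calling a gene over/underexpressed

for gene in tumor:
    fold = tumor[gene] / reference[gene]
    if fold >= FOLD_CUTOFF:
        call = "overexpressed"
    elif fold <= 1.0 / FOLD_CUTOFF:
        call = "underexpressed"
    else:
        call = "unchanged"
    print(f"{gene}: {fold:.2f}-fold (log2 = {math.log2(fold):+.2f}) -> {call}")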

Now, let’s suppose you do that for 500 patients who have Gleason 7 cancer, a PSA between 3 and 10, no lymph node involvement, negative margins, no extracapsular extension, etc. We already have some idea of how such an intermediate-risk patient will do using nomograms such as the CAPRA-S score (there is an app for this calculator, as well as Dr. Kattan’s at MSKCC). If we look at all 500 patients who have a similar prognosis based on the nomograms, we can then ask whether a gene expression profile could further separate patients with similar a priori risk into “high-intermediate” or “low-intermediate” risk. (The same could be done for patients with low or high risk based on nomogram prediction, of course.)
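As a rough illustration of that stratification step, the sketch below takes patients who all sit in the same nomogram-defined risk band and splits them by a hypothetical gene-expression “signature score.” The patients, scores, and 0.5 cutoff are invented; this is not the CAPRA-S algorithm or any commercial classifier.

# Hypothetical patients: each has a nomogram-derived risk band and a
# gene-expression "signature score" (higher = more aggressive-looking biology).
patients = [
    {"id": "PT-001", "nomogram_band": "intermediate", "signature_score": 0.82},
    {"id": "PT-002", "nomogram_band": "intermediate", "signature_score": 0.31},
    {"id": "PT-003", "nomogram_band": "intermediate", "signature_score": 0.55},
    {"id": "PT-004", "nomogram_band": "low",          "signature_score": 0.12},
]

SIGNATURE_CUTOFF = 0.5  # illustrative threshold, not a validated value

for p in patients:
    if p["nomogram_band"] != "intermediate":
        refined = p["nomogram_band"]  # leave non-intermediate patients alone here
    elif p["signature_score"] >= SIGNATURE_CUTOFF:
        refined = "high-intermediate"
    else:
        refined = "low-intermediate"
    print(f"{p['id']}: nomogram = {p['nomogram_band']}, refined = {refined}")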

Now let us look at 50 patients who do much worse than expected and another 50 who do much better than expected. Suppose that, among the 30,000 genes tested in these two groups, there are 500 that are overexpressed by 3-fold and another 500 that are underexpressed by 3-fold in the “bad outcome” group, with similar findings for the “good outcome” group.
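The gene-by-gene comparison behind that step could look something like the sketch below, which takes the ratio of mean expression in the bad-outcome group to the good-outcome group and keeps only genes changed by at least 3-fold. The three genes and all of their values are made up.

from statistics import mean

# Hypothetical expression values per gene (one number per patient in each group).
bad_outcome = {
    "GENE_A": [9.1, 8.7, 9.5, 10.2],   # looks overexpressed in the bad group
    "GENE_B": [1.1, 0.9, 1.2, 1.0],    # looks underexpressed in the bad group
    "GENE_C": [4.0, 4.2, 3.9, 4.1],    # about the same in both groups
}
good_outcome = {
    "GENE_A": [3.0, 2.8, 3.2, 3.1],
    "GENE_B": [3.6, 3.4, 3.5, 3.7],
    "GENE_C": [4.1, 4.0, 4.2, 3.9],
}

FOLD = 3.0  # the 3-fold cutoff used in the text

for gene in bad_outcome:
    ratio = mean(bad_outcome[gene]) / mean(good_outcome[gene])
    if ratio >= FOLD:
        print(f"{gene}: {ratio:.1f}-fold up in the bad-outcome group")
    elif ratio <= 1.0 / FOLD:
        print(f"{gene}: {1.0 / ratio:.1f}-fold down in the bad-outcome group")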

Many of the genes will be telling us the same thing: for example, that a characteristic like “rapid proliferation” is associated with a bad outcome, and overexpression of the genes behind it predicts a bad outcome. But you don’t need all of the genes tied to that characteristic to tell you that. So you ask the computer to recalculate the prediction with one gene left out; if that makes no difference, the gene is dropped, and the process repeats until you reach a smaller subset of over- or underexpressed genes that still suffices to make the prediction. You put those genes (from the “learning set”) on a chip that can quantitate expression, then find another 500 patients from a different institution to see whether the chip accurately predicts what happened to them (the “validation set”), whose outcomes were never used in building the test. If the chip performs well, you have a new test that could be a valuable clinical tool.
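Here is a deliberately tiny sketch of that whittling-down-and-validation loop: a toy classifier, backward elimination over a “learning set,” and a check on a separate “validation set.” Every gene name, expression value, cutoff, and scoring rule below is invented; real assays use far larger cohorts and far more careful statistics.

from statistics import mean

# Hypothetical learning set: (expression by gene, outcome), where outcome is
# 1 for "did worse than expected" and 0 for "did better than expected".
learning_set = [
    ({"GENE_A": 9.0, "GENE_B": 1.0, "GENE_C": 4.0}, 1),
    ({"GENE_A": 8.5, "GENE_B": 1.2, "GENE_C": 4.2}, 1),
    ({"GENE_A": 3.1, "GENE_B": 3.5, "GENE_C": 4.1}, 0),
    ({"GENE_A": 2.9, "GENE_B": 3.6, "GENE_C": 3.9}, 0),
]

def accuracy(genes, data):
    # Toy classifier: call "bad outcome" when the average expression of the
    # selected genes exceeds an illustrative cutoff of 4.5.
    correct = 0
    for profile, outcome in data:
        predicted = 1 if mean(profile[g] for g in genes) > 4.5 else 0
        correct += int(predicted == outcome)
    return correct / len(data)

# Backward elimination: drop any gene whose removal does not hurt accuracy.
genes = ["GENE_A", "GENE_B", "GENE_C"]
baseline = accuracy(genes, learning_set)
for g in list(genes):
    if len(genes) > 1 and accuracy([x for x in genes if x != g], learning_set) >= baseline:
        genes.remove(g)
print("genes kept for the chip:", genes)

# Validation: apply the kept genes, unchanged, to patients from another institution.
validation_set = [
    ({"GENE_A": 8.8, "GENE_B": 1.1, "GENE_C": 4.1}, 1),
    ({"GENE_A": 3.0, "GENE_B": 3.4, "GENE_C": 4.0}, 0),
]
print("validation accuracy:", accuracy(genes, validation_set))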

Examples of this sort of work that are now commercially available include the Prolaris™ and Decipher™ tests. As you might expect, the time/effort that has gone into making these tests has been considerable, and they are not inexpensive.

Another such test, ConfirmMDx, evaluates gene hypermethylation (a way that genes are turned off) in prostate biopsy specimens and can help predict whether there was cancer NEAR a negative biopsy (a false-negative biopsy result). Because the test might help a man avoid repeat biopsies, it is a good example of how complex assessing cost can be: if the savings in cost and risk from avoided biopsies were validated prospectively, the test could turn out to save money overall. Similarly, using these kinds of tests to avoid overtreating an “intermediate-risk” patient who actually has low-risk disease on molecular analysis could save both costs and morbidity.
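To see why the cost question is not simple, here is a back-of-the-envelope calculation with entirely hypothetical prices and probabilities; the real inputs would have to come from prospective studies, and the conclusion flips if they change.

# Invented figures for illustration only.
test_cost = 500.0             # hypothetical price of the methylation test
repeat_biopsy_cost = 3000.0   # hypothetical cost of one repeat biopsy
p_repeat_without_test = 0.35  # hypothetical chance of a repeat biopsy without the test
p_repeat_with_test = 0.10     # hypothetical chance after a reassuring test result

expected_without = p_repeat_without_test * repeat_biopsy_cost
expected_with = test_cost + p_repeat_with_test * repeat_biopsy_cost

print(f"Expected downstream cost without the test: ${expected_without:,.0f}")
print(f"Expected cost with the test:               ${expected_with:,.0f}")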

What a purist would want to know is whether making a recommendation to an individual patient based on the outcome of one of these tests has itself been validated. In other words, when I tell a patient he has nothing to worry about in spite of having a Gleason 7 cancer on biopsy, how often am I correct?
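That question is essentially the test’s negative predictive value. As a worked example with invented numbers (not data from any published study):

# Hypothetical follow-up of 500 men whom a test labeled "favorable".
called_favorable = 500
metastasized_anyway = 35  # invented count of false reassurances
did_well = called_favorable - metastasized_anyway

negative_predictive_value = did_well / called_favorable
print(f"Under these made-up numbers, I would be right about "
      f"{negative_predictive_value:.0%} of the time when I reassure such a patient.")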

Clearly no test is perfect, and patients themselves may differ in their risk tolerance. One patient with a favorable genetic profile (say, a test indicating only a 7% risk of metastases at five years) might be inclined to simply watch and do nothing more, while another would find that risk intolerable and opt for further treatment.

Further, few doctors or patients are using the FREE analysis of overall health risk called the Charlson tool to put such risk into context (we wonder why not—no marketing is likely one answer). In this wonderful world of blogs and mobile devices, I am hoping someone will create an app for the Charlson tool!
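For readers curious what the Charlson comorbidity index actually does, here is a simplified sketch using a handful of its published condition weights and a commonly used age adjustment. The condition list is abbreviated and the example patient is hypothetical, so treat this as an outline of the idea rather than the full, validated instrument.

# A few of the Charlson comorbidity weights (abbreviated; the full index
# covers many more conditions).
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "chronic_pulmonary_disease": 1,
    "diabetes": 1,
    "moderate_or_severe_renal_disease": 2,
    "metastatic_solid_tumor": 6,
}

def charlson_score(conditions, age):
    # Sum the condition weights, then add the age adjustment often used with
    # the index: one point per decade starting at age 50, capped at 4.
    score = sum(CHARLSON_WEIGHTS[c] for c in conditions)
    if age >= 50:
        score += min((age - 40) // 10, 4)
    return score

# Hypothetical 72-year-old with diabetes and prior heart failure.
print(charlson_score(["diabetes", "congestive_heart_failure"], age=72))  # prints 5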

Meanwhile, the molecular testing of prostate cancer is a step forward, and hopefully, how to use it will become clearer over time. We will, however, always have the challenge of heterogeneity and plasticity of the prostate genome to deal with. Such is the legacy of our good friend, Darwin.

This post originally was published on prost8blog, a blog to help patients and their families understand various aspects of prostate cancer, and is reprinted with permission of Dr. Glodé.

Disclaimer: 

The ideas and opinions expressed on the ASCO Connection Blogs do not necessarily reflect those of ASCO. None of the information posted on ASCOconnection.org is intended as medical, legal, or business advice, or advice about reimbursement for health care services. The mention of any product, service, company, therapy or physician practice on ASCOconnection.org does not constitute an endorsement of any kind by ASCO. ASCO assumes no responsibility for any injury or damage to persons or property arising out of or related to any use of the material contained in, posted on, or linked to this site, or any errors or omissions.
