Predicting Renal Cancer Recurrence: Defining Limitations of Existing Prognostic Models With Prospective Trial-Based Validation.

To validate currently used recurrence prediction models for renal cell carcinoma (RCC) by using prospective data from the ASSURE (ECOG-ACRIN E2805; Adjuvant Sorafenib or Sunitinib for Unfavorable Renal Carcinoma) adjuvant trial.

Eight RCC recurrence models (University of California at Los Angeles Integrated Staging System [UISS]; Stage, Size, Grade, and Necrosis [SSIGN]; Leibovich; Kattan; Memorial Sloan Kettering Cancer Center [MSKCC]; Yaycioglu; Karakiewicz; and Cindolo) were selected on the basis of their use in clinical practice and clinical trial designs. These models, along with the TNM staging system, were validated using 1,647 patients with resected localized high-grade or locally advanced disease (≥ pT1b grade 3 and 4/pTanyN1M0) from the ASSURE cohort. The predictive performance of each model was quantified by assessing its discriminatory ability and calibration.
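The discriminatory estimates reported in this study are concordance (C) indices. As an illustration only (not the trial's actual statistical code), a minimal sketch of Harrell's C-index for right-censored recurrence data might look like the following; the function name and inputs are hypothetical:

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    times       -- observed follow-up times (recurrence or censoring)
    events      -- 1 if recurrence was observed, 0 if censored
    risk_scores -- model output; higher score = higher predicted risk
    """
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # a censored subject cannot anchor a comparable pair
        for j in range(n):
            if i == j:
                continue
            # Pair (i, j) is comparable if subject i recurred strictly
            # before subject j's observed (event or censoring) time.
            if times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0   # model ranked the pair correctly
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5   # tied scores count half
    return concordant / comparable if comparable else float("nan")
```

A C-index of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, which is why values such as 0.556 (UISS) sit only marginally above random ordering.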

Prospective validation of predictive and prognostic models for localized RCC showed a substantial decrease in the predictive ability of each model compared with its original and externally validated discriminatory estimates. Among the models, the SSIGN score performed best (C-index, 0.688; 95% CI, 0.686 to 0.689), and the UISS model performed worst (C-index, 0.556; 95% CI, 0.555 to 0.557). Compared with the 2002 TNM staging system (C-index, 0.60), most models only marginally outperformed standard staging. Importantly, all models, including TNM, demonstrated statistically significant variability in their predictive ability over time and were most useful within the first 2 years after diagnosis.

In RCC, as in many other solid malignancies, clinicians rely on retrospective prediction tools to guide patient care and clinical trial selection, largely overestimating their predictive abilities. We used prospectively collected adjuvant trial data to validate existing RCC prediction models and demonstrated a sharp decrease in the predictive ability of all models compared with their previous retrospective validations. Accordingly, we recommend prospective validation of any predictive model before implementing it into clinical practice and clinical trial design.

Journal of Clinical Oncology: official journal of the American Society of Clinical Oncology. 2019 Jun 19 [Epub ahead of print]

Andres F Correa, Opeyemi Jegede, Naomi B Haas, Keith T Flaherty, Michael R Pins, Edward M Messing, Judith Manola, Christopher G Wood, Christopher J Kane, Michael A S Jewett, Janice P Dutcher, Robert S DiPaola, Michael A Carducci, Robert G Uzzo

1 Fox Chase Cancer Center, Philadelphia, PA., 2 Dana-Farber Cancer Institute, Boston, MA., 3 University of Pennsylvania, Philadelphia, PA., 4 Massachusetts General Hospital, Boston, MA., 5 Advocate Lutheran General Hospital, Park Ridge, IL., 6 University of Rochester, Rochester, NY., 7 MD Anderson Cancer Center, Houston, TX., 8 University of California-San Diego, La Jolla, CA., 9 University of Toronto, Toronto, ON, Canada., 10 Cancer Research Foundation, New York, NY., 11 University of Kentucky College of Medicine, Lexington, KY., 12 Johns Hopkins Hospital, Baltimore, MD.