[2508.11060] Counterfactual Survival Q-learning via Buckley-James Boosting, with Applications to ACTG 175 and CALGB 8923
Summary
This article presents a novel Buckley-James Boost Q-learning framework designed to enhance treatment decision-making in clinical trials with incomplete follow-up data.
Why It Matters
The proposed methodology addresses critical challenges in estimating optimal treatment regimes from censored survival data, which is vital for personalized medicine. By modeling survival time directly and avoiding the proportional hazards assumption of traditional Cox-based models, it improves the accuracy and stability of treatment decisions in complex clinical scenarios.
Key Takeaways
- Introduces Buckley-James Boost Q-learning for dynamic treatment regimes.
- Avoids proportional hazards assumption, improving model robustness.
- Demonstrates enhanced treatment decision accuracy in clinical trials.
- Combines accelerated failure time modeling with iterative boosting.
- Provides a flexible alternative to Cox-based Q-learning methods.
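The core of the approach is the Buckley-James step: under an accelerated failure time (AFT) model, censored log-survival times are replaced by their conditional expectations, computed from a Kaplan-Meier estimate of the model residuals, and the regression is refit on the imputed outcomes. The sketch below is a minimal illustration of that imputation step, not the authors' implementation; the function names and the linear working model are assumptions for demonstration.

```python
import numpy as np

def km_residual_survival(resid, delta):
    """Kaplan-Meier (product-limit) estimate of the residual survival
    function, evaluated at the ordered residuals."""
    order = np.argsort(resid)
    e, d = resid[order], delta[order]
    at_risk = len(e) - np.arange(len(e))  # n, n-1, ..., 1
    surv = np.cumprod(1.0 - d / at_risk)
    return e, surv

def bj_impute(logY, delta, fitted):
    """One Buckley-James imputation step (illustrative sketch).

    For censored subjects (delta == 0), replace log Y with the
    conditional expectation of log T given log T > log Y, using the
    Kaplan-Meier distribution of the AFT residuals.
    """
    resid = logY - fitted
    e, surv = km_residual_survival(resid, delta)
    # KM jump sizes act as the residual probability mass
    surv_prev = np.concatenate(([1.0], surv[:-1]))
    jumps = surv_prev - surv
    imputed = logY.copy()
    for i in np.where(delta == 0)[0]:
        mask = e > resid[i]
        denom = jumps[mask].sum()
        if denom > 0:  # else: no mass beyond this residual, keep logY
            cond_mean = (e[mask] * jumps[mask]).sum() / denom
            imputed[i] = fitted[i] + cond_mean
    return imputed

# Toy data: true AFT model logT = 1 + 0.5*X + error, random censoring
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=n)
logT = 1.0 + 0.5 * X + rng.normal(scale=0.5, size=n)
logC = rng.normal(loc=2.0, scale=1.0, size=n)
logY = np.minimum(logT, logC)
delta = (logT <= logC).astype(float)
fitted = 1.0 + 0.5 * X          # stand-in for a fitted working model
imputed = bj_impute(logY, delta, fitted)
```

In the full BJ-Boost procedure this imputation alternates with a boosting fit (e.g., componentwise least squares or regression trees) in place of the fixed linear predictor used here.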
Statistics > Machine Learning
arXiv:2508.11060 (stat)
[Submitted on 14 Aug 2025 (v1), last revised 17 Feb 2026 (this version, v2)]
Title: Counterfactual Survival Q-learning via Buckley-James Boosting, with Applications to ACTG 175 and CALGB 8923
Authors: Jeongjin Lee, Jong-Min Kim
Abstract: We propose a Buckley-James (BJ) Boost Q-learning framework for estimating optimal dynamic treatment regimes from right-censored survival outcomes in longitudinal randomized clinical trials, motivated by the clinical need to support patient-specific treatment decisions when follow-up is incomplete and covariate effects may be nonlinear. The method combines accelerated failure time modeling with iterative boosting using flexible base learners, including componentwise least squares and regression trees, within a counterfactual Q-learning framework. By modeling conditional survival time directly, BJ-Boost Q-learning avoids the proportional hazards assumption, yields clinically interpretable time-scale contrasts, and enables estimation of stage-specific Q-functions and individualized decision rules under standard potential outcomes assumptions. In contrast to Cox-based Q-learning, which relies on hazard modeling and can be sensitive to nonproportional hazards and model misspecification, our ap...
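The abstract's "stage-specific Q-functions" are estimated by backward induction: fit the final-stage Q-function on the (imputed) log survival time, take the maximum over treatments as the pseudo-outcome, then fit the earlier stage. A minimal two-stage sketch with a linear working model and binary treatment follows; the interaction form of the Q-function and the simulated data are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def fit_q(X, A, y):
    """Least-squares working Q-function with a treatment interaction:
    Q(x, a) = b0 + b1*x + a*(b2 + b3*x)."""
    Z = np.column_stack([np.ones_like(X), X, A, A * X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

def q_value(beta, X, A):
    Z = np.column_stack([np.ones_like(X), X, A, A * X])
    return Z @ beta

# Simulated two-stage trial with binary treatments A1, A2 in {0, 1}
rng = np.random.default_rng(1)
n = 300
X1 = rng.normal(size=n)
A1 = rng.integers(0, 2, size=n).astype(float)
X2 = X1 + 0.3 * A1 + rng.normal(scale=0.3, size=n)
A2 = rng.integers(0, 2, size=n).astype(float)
# stand-in for BJ-imputed log survival time
logT = 1 + 0.4 * X2 + A2 * (0.5 - 0.6 * X2) + rng.normal(scale=0.3, size=n)

# Stage 2: fit Q2, derive the optimal rule and value
beta2 = fit_q(X2, A2, logT)
q20 = q_value(beta2, X2, np.zeros(n))
q21 = q_value(beta2, X2, np.ones(n))
rule2 = (q21 > q20).astype(int)     # individualized stage-2 decision
V2 = np.maximum(q20, q21)           # optimal stage-2 value

# Stage 1: pseudo-outcome is the optimal stage-2 value
beta1 = fit_q(X1, A1, V2)
```

In the paper's framework the least-squares fit above would be replaced by the BJ boosting fit, so that censoring and nonlinear covariate effects are handled at each stage.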