[2602.01051] SwiftRepertoire: Few-Shot Immune-Signature Synthesis via Dynamic Kernel Codes
Summary
The paper presents SwiftRepertoire, a framework for synthesizing immune signatures using few-shot learning techniques, enabling efficient adaptation to new tasks in clinical settings.
Why It Matters
This research addresses two significant challenges in immune monitoring: the scarcity of labeled data and the computational cost of adapting large models. By enabling rapid model adaptation from minimal labeled data, it has the potential to improve disease detection and monitoring across diverse clinical scenarios.
Key Takeaways
- SwiftRepertoire synthesizes task-specific parameterizations for immune signature analysis.
- The framework enables few-shot learning, requiring only a few examples for model adaptation.
- It preserves interpretability through motif-aware probes linked to predictive decisions.
- Addresses challenges of label sparsity and cohort heterogeneity in immune monitoring.
- Offers a practical solution for deploying machine learning in resource-constrained environments.
Computer Science > Machine Learning, arXiv:2602.01051 (cs)
Submitted on 1 Feb 2026 (v1); last revised 14 Feb 2026 (this version, v2)
Title: SwiftRepertoire: Few-Shot Immune-Signature Synthesis via Dynamic Kernel Codes
Authors: Rong Fu, Wenxin Zhang, Muge Qi, Yang Li, Yabin Jin, Jiekai Wu, Jiaxuan Lu, Chunlei Meng, Youjin Wang, Zeli Su, Juntao Gao, Li Bao, Qi Zhao, Wei Luo, Simon Fong
Abstract: Repertoire-level analysis of T cell receptors offers a biologically grounded signal for disease detection and immune monitoring, yet practical deployment is impeded by label sparsity, cohort heterogeneity, and the computational burden of adapting large encoders to new tasks. We introduce a framework that synthesizes compact task-specific parameterizations from a learned dictionary of prototypes conditioned on lightweight task descriptors derived from repertoire probes and pooled embedding statistics. This synthesis produces small adapter modules applied to a frozen pretrained backbone, enabling immediate adaptation to novel tasks with only a handful of support examples and without full model fine-tuning. The architecture preserves interpretability through motif-aware probes and a calibrated motif discovery pipeline that links predictive decisions to sequence-level signals. Together, these components yield a practical,...
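The core mechanism in the abstract, synthesizing a task-specific adapter as a mixture of learned dictionary prototypes conditioned on a descriptor pooled from the support set, can be sketched roughly as follows. This is a minimal illustrative example, not the paper's implementation: all dimensions, the mean-pooled descriptor, and the softmax mixing are assumptions (the paper's descriptors also use repertoire probes, omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): task-descriptor size,
# number of dictionary prototypes, flattened size of one adapter module.
DESC_DIM, N_PROTOTYPES, ADAPTER_SIZE = 16, 8, 64

# Learned dictionary of prototype adapter parameterizations (random stand-in).
prototype_dict = rng.normal(size=(N_PROTOTYPES, ADAPTER_SIZE))
# Learned projection from task descriptor to prototype mixing logits.
desc_to_logits = rng.normal(size=(DESC_DIM, N_PROTOTYPES))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def task_descriptor(support_embeddings):
    """Lightweight descriptor from pooled embedding statistics of the
    few-shot support set (mean pooling used here as a stand-in)."""
    return support_embeddings.mean(axis=0)

def synthesize_adapter(support_embeddings):
    """Mix dictionary prototypes into one task-specific adapter vector,
    which would then be applied to a frozen pretrained backbone."""
    desc = task_descriptor(support_embeddings)   # shape (DESC_DIM,)
    weights = softmax(desc @ desc_to_logits)     # shape (N_PROTOTYPES,)
    return weights @ prototype_dict              # shape (ADAPTER_SIZE,)

# A handful of support examples suffices to condition the synthesis;
# no backbone parameters are updated.
support = rng.normal(size=(5, DESC_DIM))
adapter = synthesize_adapter(support)
print(adapter.shape)  # (64,)
```

The point of the design is that adaptation reduces to one forward pass through a small synthesis network, so a new clinical task costs a few support examples rather than a full fine-tuning run.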