How to use the new ChatGPT app integrations, including DoorDash, Spotify, Uber, and others | TechCrunch
Learn how to use Spotify, Canva, Figma, Expedia, and other apps directly in ChatGPT.
arXiv:2603.05026: RepoLaunch: Automating Build&Test Pipeline of Code Repositories on ANY Language and ANY Platform
arXiv:2603.04964: Replaying pre-training data improves fine-tuning
arXiv:2603.04716: SLO-Aware Compute Resource Allocation for Prefill-Decode Disaggregated LLM Inference
arXiv:2603.04480: AbAffinity: A Large Language Model for Predicting Antibody Binding Affinity against SARS-CoV-2
arXiv:2603.04466: Act-Observe-Rewrite: Multimodal Coding Agents as In-Context Policy Learners for Robot Manipulation
arXiv:2603.05232: SlideSparse: Fast and Flexible (2N-2):2N Structured Sparsity
arXiv:2603.04972: Functionality-Oriented LLM Merging on the Fisher--Rao Manifold
arXiv:2603.04956: WaterSIC: information-theoretically (near) optimal linear layer quantization
arXiv:2603.04948: $\nabla$-Reasoner: LLM Reasoning via Test-Time Gradient Descent in Latent Space
arXiv:2603.04898: U-Parking: Distributed UWB-Assisted Autonomous Parking System with Robust Localization and Inte...
arXiv:2603.04851: Why Is RLHF Alignment Shallow? A Gradient Analysis
arXiv:2603.04692: Engineering Regression Without Real-Data Training: Domain Adaptation for Tabular Foundation Mod...
arXiv:2603.04606: PDE foundation model-accelerated inverse estimation of system parameters in inertial confinemen...
arXiv:2603.04545: An LLM-Guided Query-Aware Inference System for GNN Models on Large Knowledge Graphs
arXiv:2603.04478: Standing on the Shoulders of Giants: Rethinking EEG Foundation Model Pretraining via Multi-Teac...
arXiv:2602.07075: LatentChem: From Textual CoT to Latent Thinking in Chemical Reasoning
arXiv:2601.23236: YuriiFormer: A Suite of Nesterov-Accelerated Transformers
arXiv:2601.21149: Mobility-Embedded POIs: Learning What A Place Is and How It Is Used from Human Movement
arXiv:2601.16333: Where is the multimodal goal post? On the Ability of Foundation Models to Recognize Contextuall...
arXiv:2601.14327: Yuan3.0 Ultra: A Trillion-Parameter Enterprise-Oriented MoE LLM