Chrome now lets you turn AI prompts into repeatable ‘Skills’ | The Verge
Google is launching a new Chrome workflow feature that allows you to reuse your favorite Gemini commands across multiple web pages.
Google is adding “Skills” to Chrome, letting users save and reuse AI prompts across websites. The feature builds on Gemini’s browser integration and also lets users connect Google services like Gmail and Photos to get personalized answers.