Enabling agent-first process redesign | MIT Technology Review
Unlike static, rules-based systems, AI agents can learn, adapt, and optimize processes dynamically. As they interact with data, systems, ...
Text understanding and language tasks
I’ve been working on building an agentic AI workflow system for business use cases and one thing became very clear very quickly. This is ...
I am not an NLP guy, but afaik ACL is one of the premier venues of NLP. And given that the results were announced recently, my LinkedIn an...
This paper explores how large language models can better handle ambiguous requests by generating multiple interpretation-answer pairs, en...
This article explores the adaptation of large language models (LLMs) for low-resource dialects, focusing on the Québec French dialect usi...
This paper addresses stability hallucinations in LLM-based TTS models by enhancing attention mechanisms, proposing a new alignment metric...
ToolACE-MT introduces a non-autoregressive framework for generating high-quality multi-turn dialogues in agentic interactions, enhancing ...
The paper presents MLLM-CTBench, a benchmark for continual instruction tuning of multimodal large language models, addressing the need fo...
This article introduces the Haerae Evaluation Toolkit (HRET), a unified framework for evaluating the capabilities of Korean language mode...
The paper presents PlanetServe, a decentralized overlay for scalable and privacy-preserving serving of large language models (LLMs), addr...
The paper presents Retreever, a tree-based hierarchical retrieval method that enhances efficiency and transparency in information retriev...
This paper explores compressible dynamics in deep overparameterized low-rank learning, presenting methods to enhance training efficiency ...
The paper introduces Cross-Attention Token Pruning (CATP), a method designed to enhance the accuracy of multimodal models by effectively ...
This paper introduces Selective Abstraction (SA), a framework for improving the reliability of long-form text generated by LLMs by select...
The paper proposes a novel approach for enhancing domain-specific knowledge graphs (DKGs) by integrating general knowledge graphs (GKGs) ...
This study explores the use of fine-tuned large language models for automated depression screening in Nigerian Pidgin English, addressing...
The paper presents RLIE, a framework that integrates large language models (LLMs) with probabilistic rule learning to enhance rule genera...
The paper introduces VoiceAgentBench, a benchmark for evaluating voice assistants' capabilities in agentic tasks, highlighting their perf...
The paper presents SCAN, a novel approach for Semantic Document Layout Analysis that enhances Retrieval-Augmented Generation (RAG) system...
The paper presents SaVe-TAG, a novel framework that utilizes Large Language Models for semantic-aware interpolation in long-tailed text-a...
This article presents a statistical model for semantic chunking in natural language, revealing insights into the entropy of English and i...
The paper presents CoPE-VideoLM, a novel approach that utilizes codec primitives to enhance the efficiency of video language models, sign...
The paper introduces Krites, an asynchronous caching policy for large language models (LLMs) that enhances semantic caching efficiency wh...