Generative AI

Image, video, audio, and text generation

Top This Week

Report says Minnesota workers face highest generative AI exposure in the Midwest
Generative AI

A report from North Star Policy Action says Minnesota workers have the highest generative AI exposure in the Midwest and the 10th-highest...

AI Tools & Products · 6 min ·
Navigating Recent Developments in Generative AI and Trade Secret Protection
Generative AI

AI Tools & Products · 13 min ·
[2601.03127] Unified Thinker: A General Reasoning Modular Core for Image Generation
Machine Learning

Abstract page for arXiv paper 2601.03127: Unified Thinker: A General Reasoning Modular Core for Image Generation

arXiv - AI · 4 min ·

All Content

[2602.15843] The Perplexity Paradox: Why Code Compresses Better Than Math in LLM Prompts
LLMs

This article explores the 'perplexity paradox' in large language models (LLMs), demonstrating that code compresses better than mathematic...

arXiv - AI · 3 min ·
[2602.16653] Agent Skill Framework: Perspectives on the Potential of Small Language Models in Industrial Environments
LLMs

The article explores the Agent Skill Framework, assessing its effectiveness in enhancing small language models (SLMs) for industrial appl...

arXiv - AI · 4 min ·
[2602.16578] Creating a digital poet
LLMs

This paper explores the creation of a digital poet using a large language model, detailing a workshop where the model developed a unique ...

arXiv - AI · 3 min ·
[2602.16512] Framework of Thoughts: A Foundation Framework for Dynamic and Optimized Reasoning based on Chains, Trees, and Graphs
LLMs

The article presents the Framework of Thoughts (FoT), a new foundation framework designed to enhance the reasoning capabilities of large ...

arXiv - AI · 3 min ·
[2602.16173] Learning Personalized Agents from Human Feedback
Machine Learning

The paper presents a framework, Personalized Agents from Human Feedback (PAHF), which enables AI agents to adapt to individual user prefe...

arXiv - Machine Learning · 4 min ·
[2602.16066] Improving Interactive In-Context Learning from Natural Language Feedback
LLMs

This paper presents a novel framework for improving interactive in-context learning in large language models by utilizing natural languag...

arXiv - AI · 4 min ·
[2602.16039] How Uncertain Is the Grade? A Benchmark of Uncertainty Metrics for LLM-Based Automatic Assessment
LLMs

This article benchmarks various uncertainty metrics for LLM-based automatic assessment, highlighting the challenges of output uncertainty...

arXiv - AI · 4 min ·
[2602.16198] Training-Free Adaptation of Diffusion Models via Doob's $h$-Transform
Machine Learning

This paper presents a novel training-free adaptation method for diffusion models, leveraging Doob's $h$-transform to enhance sampling eff...

arXiv - Machine Learning · 4 min ·
[2602.16169] Discrete Stochastic Localization for Non-autoregressive Generation
LLMs

The paper presents Discrete Stochastic Localization (DSL), a method that enhances non-autoregressive generation by improving the efficien...

arXiv - Machine Learning · 3 min ·
[2602.16092] Why Any-Order Autoregressive Models Need Two-Stream Attention: A Structural-Semantic Tradeoff
Machine Learning

The paper explores the necessity of two-stream attention in any-order autoregressive models, highlighting a structural-semantic tradeoff ...

arXiv - Machine Learning · 4 min ·
[2602.16065] Can Generative Artificial Intelligence Survive Data Contamination? Theoretical Guarantees under Contaminated Recursive Training
LLMs

This paper explores the resilience of generative AI models against data contamination during recursive training, providing theoretical gu...

arXiv - AI · 4 min ·
[2602.16053] Multi-Objective Alignment of Language Models for Personalized Psychotherapy
LLMs

This article discusses a multi-objective alignment framework for language models aimed at enhancing personalized psychotherapy, balancing...

arXiv - Machine Learning · 3 min ·
[2602.16052] MoE-Spec: Expert Budgeting for Efficient Speculative Decoding
LLMs

The paper introduces MoE-Spec, a method for improving efficiency in speculative decoding of Large Language Models (LLMs) by optimizing ex...

arXiv - Machine Learning · 3 min ·
[2602.16020] MolCrystalFlow: Molecular Crystal Structure Prediction via Flow Matching
Machine Learning

MolCrystalFlow introduces a novel flow-based generative model for predicting molecular crystal structures, addressing challenges in compu...

arXiv - Machine Learning · 4 min ·
[2602.15997] Anatomy of Capability Emergence: Scale-Invariant Representation Collapse and Top-Down Reorganization in Neural Networks
LLMs

This article explores the mechanisms of capability emergence in neural networks, revealing a scale-invariant representation collapse and ...

arXiv - Machine Learning · 4 min ·
[2602.15984] Verifier-Constrained Flow Expansion for Discovery Beyond the Data
Machine Learning

This paper presents a method called Verifier-Constrained Flow Expansion (FE) to enhance flow models for scientific discovery by expanding...

arXiv - Machine Learning · 4 min ·
[2602.15971] B-DENSE: Branching For Dense Ensemble Network Learning
Machine Learning

The paper presents B-DENSE, a novel framework for improving dense ensemble network learning by leveraging multi-branch trajectory alignme...

arXiv - AI · 3 min ·
[2602.15842] Memes-as-Replies: Can Models Select Humorous Manga Panel Responses?
Machine Learning

This article explores the Meme Reply Selection task, analyzing how large language models (LLMs) can select humorous manga panel responses...

arXiv - Machine Learning · 3 min ·
Personalization features can make LLMs more agreeable
LLMs

This article discusses how personalization features in large language models (LLMs) can lead to sycophancy, where models overly agree wit...

AI News - General · 9 min ·
Is an AI price war about to begin?
AI Startups

The article explores the potential onset of a price war in the AI industry, driven by competition among major players and advancements in...

AI News - General · 1 min ·

