Generative AI

Image, video, audio, and text generation

Top This Week

Accelerating science with AI and simulations
Machine Learning

MIT Professor Rafael Gómez-Bombarelli discusses the transformative potential of AI in scientific research, emphasizing its role in materi...

AI News - General · 10 min ·
[2510.08005] Past, Present, and Future of Bug Tracking in the Generative AI Era
Generative AI

Abstract page for arXiv paper 2510.08005: Past, Present, and Future of Bug Tracking in the Generative AI Era

arXiv - AI · 4 min ·
[2509.05841] Generative AI on Wall Street -- Opportunities and Risk Controls
Generative AI

Abstract page for arXiv paper 2509.05841: Generative AI on Wall Street -- Opportunities and Risk Controls

arXiv - AI · 3 min ·

All Content

[2510.05077] SLM-MUX: Orchestrating Small Language Models for Reasoning
LLMs

The paper presents SLM-MUX, a novel architecture for orchestrating small language models (SLMs) to improve reasoning accuracy, achieving ...

arXiv - AI · 4 min ·
[2510.03255] SciTS: Scientific Time Series Understanding and Generation with LLMs
LLMs

The paper introduces SciTS, a benchmark for understanding and generating scientific time series data using large language models (LLMs), ...

arXiv - Machine Learning · 4 min ·
[2509.25184] Incentive-Aligned Multi-Source LLM Summaries
LLMs

The paper presents an innovative framework called Truthful Text Summarization (TTS) aimed at enhancing the factual accuracy of multi-sour...

arXiv - AI · 3 min ·
[2509.21500] Chasing the Tail: Effective Rubric-based Reward Modeling for Large Language Model Post-Training
LLMs

This article presents a novel approach to reward modeling in large language models (LLMs) using rubric-based methods to mitigate reward o...

arXiv - Machine Learning · 4 min ·
[2509.18880] Diversity Boosts AI-Generated Text Detection
LLMs

The paper presents DivEye, a novel framework for detecting AI-generated text by analyzing unpredictability in text structure and vocabula...

arXiv - Machine Learning · 4 min ·
[2509.14537] ClearFairy: Capturing Creative Workflows through Decision Structuring, In-Situ Questioning, and Rationale Inference
Machine Learning

The paper introduces ClearFairy, an AI assistant designed to enhance decision-making in creative workflows by structuring reasoning and i...

arXiv - AI · 3 min ·
[2508.19982] Diffusion Language Models Know the Answer Before Decoding
LLMs

The paper discusses Diffusion Language Models (DLMs) and introduces a new decoding method called Prophet, which allows for faster inferen...

arXiv - AI · 4 min ·
[2507.08017] Mechanistic Indicators of Understanding in Large Language Models
LLMs

This paper explores mechanistic indicators of understanding in large language models (LLMs), proposing a tiered framework to assess their...

arXiv - AI · 4 min ·
[2506.09886] Probabilistic distances-based hallucination detection in LLMs with RAG
LLMs

This paper presents a novel method for detecting hallucinations in large language models (LLMs) using probabilistic distances in retrieva...

arXiv - AI · 3 min ·
[2506.07452] When Style Breaks Safety: Defending LLMs Against Superficial Style Alignment
LLMs

This paper explores the vulnerabilities of large language models (LLMs) to superficial style alignment, proposing a defense mechanism cal...

arXiv - Machine Learning · 4 min ·
[2506.05154] Resisting Contextual Interference in RAG via Parametric-Knowledge Reinforcement
LLMs

The paper presents Knowledgeable-R1, a reinforcement-learning framework designed to enhance retrieval-augmented generation (RAG) by mitig...

arXiv - AI · 4 min ·
[2503.06692] InftyThink: Breaking the Length Limits of Long-Context Reasoning in Large Language Models
LLMs

InftyThink presents a novel approach to long-context reasoning in large language models, addressing computational limits and enhancing pe...

arXiv - AI · 4 min ·
[2502.11684] MathFimer: Enhancing Mathematical Reasoning by Expanding Reasoning Steps through Fill-in-the-Middle Task
LLMs

The paper introduces MathFimer, a framework designed to enhance mathematical reasoning in large language models by expanding reasoning st...

arXiv - AI · 4 min ·
[2310.17167] Improving Denoising Diffusion Models via Simultaneous Estimation of Image and Noise
Machine Learning

This paper presents advancements in denoising diffusion models, focusing on simultaneous estimation of image and noise to enhance image g...

arXiv - Machine Learning · 4 min ·
[2510.19139] A Multi-faceted Analysis of Cognitive Abilities: Evaluating Prompt Methods with Large Language Models on the CONSORT Checklist
LLMs

This paper evaluates the cognitive abilities of large language models (LLMs) in assessing clinical trial reporting according to CONSORT s...

arXiv - AI · 4 min ·
[2505.18502] Knowledge Fusion of Large Language Models Via Modular SkillPacks
LLMs

The paper presents GraftLLM, a novel method for knowledge fusion in large language models using modular SkillPacks, enhancing cross-capab...

arXiv - Machine Learning · 4 min ·
[2602.22197] Off-The-Shelf Image-to-Image Models Are All You Need To Defeat Image Protection Schemes
Machine Learning

This paper demonstrates that off-the-shelf image-to-image models can effectively defeat various image protection schemes, highlighting a ...

arXiv - AI · 4 min ·
[2602.22145] When AI Writes, Whose Voice Remains? Quantifying Cultural Marker Erasure Across World English Varieties in Large Language Models
LLMs

This article explores the phenomenon of 'Cultural Ghosting' in large language models (LLMs), highlighting the systematic erasure of cultu...

arXiv - AI · 4 min ·
[2602.22144] NoLan: Mitigating Object Hallucinations in Large Vision-Language Models via Dynamic Suppression of Language Priors
LLMs

The paper presents NoLan, a framework aimed at reducing object hallucinations in Large Vision-Language Models (LVLMs) by dynamically supp...

arXiv - AI · 4 min ·
[2602.21939] Hidden Topics: Measuring Sensitive AI Beliefs with List Experiments
LLMs

This paper explores how list experiments can be used to uncover hidden beliefs in large language models (LLMs), revealing concerning appr...

arXiv - AI · 3 min ·