Generative AI

Image, video, audio, and text generation

Top This Week

Accelerating science with AI and simulations
Machine Learning

MIT Professor Rafael Gómez-Bombarelli discusses the transformative potential of AI in scientific research, emphasizing its role in materi...

AI News - General · 10 min ·
[2603.10202] Hybrid Hidden Markov Model for Modeling Equity Excess Growth Rate Dynamics: A Discrete-State Approach with Jump-Diffusion
Machine Learning

Abstract page for arXiv paper 2603.10202: Hybrid Hidden Markov Model for Modeling Equity Excess Growth Rate Dynamics: A Discrete-State Ap...

arXiv - Machine Learning · 4 min ·
[2602.00388] Safer by Diffusion, Broken by Context: Diffusion LLM's Safety Blessing and Its Failure Mode
LLMs

Abstract page for arXiv paper 2602.00388: Safer by Diffusion, Broken by Context: Diffusion LLM's Safety Blessing and Its Failure Mode

arXiv - Machine Learning · 4 min ·

All Content

[2602.21704] Dynamic Multimodal Activation Steering for Hallucination Mitigation in Large Vision-Language Models
LLMs

This paper presents Dynamic Multimodal Activation Steering, a novel approach to mitigate hallucinations in Large Vision-Language Models (...

arXiv - AI · 3 min ·
[2602.21441] Causal Decoding for Hallucination-Resistant Multimodal Large Language Models
LLMs

This article presents a novel causal decoding framework aimed at reducing object hallucination in multimodal large language models (MLLMs...

arXiv - Machine Learning · 3 min ·
[2602.21429] Provably Safe Generative Sampling with Constricting Barrier Functions
Machine Learning

This paper presents a safety filtering framework for generative models, ensuring generated samples meet hard constraints while minimizing...

arXiv - Machine Learning · 4 min ·
[2602.21365] Towards Controllable Video Synthesis of Routine and Rare OR Events
Generative AI

The paper presents a novel framework for synthesizing controlled video representations of routine and rare operating room events, address...

arXiv - Machine Learning · 4 min ·
[2602.21341] Scaling View Synthesis Transformers
Machine Learning

The paper explores scaling laws for view synthesis transformers, presenting a new architecture that outperforms previous models in Novel ...

arXiv - AI · 3 min ·
[2602.21226] IslamicLegalBench: Evaluating LLMs Knowledge and Reasoning of Islamic Law Across 1,200 Years of Islamic Pluralist Legal Traditions
LLMs

The paper introduces IslamicLegalBench, a benchmark for evaluating LLMs' reasoning on Islamic law, revealing significant limitations in c...

arXiv - AI · 4 min ·
[2602.21224] Make Every Draft Count: Hidden State based Speculative Decoding
LLMs

The paper presents a novel approach to speculative decoding in large language models (LLMs), focusing on reusing discarded draft tokens t...

arXiv - Machine Learning · 4 min ·
[2602.21223] Measuring Pragmatic Influence in Large Language Model Instructions
LLMs

This article explores how pragmatic framing in large language model instructions influences their behavior, introducing a framework to me...

arXiv - AI · 3 min ·
[2602.21221] Latent Context Compilation: Distilling Long Context into Compact Portable Memory
LLMs

The paper introduces Latent Context Compilation, a novel framework that enhances long-context LLM deployment by distilling long contexts ...

arXiv - Machine Learning · 3 min ·
[2602.21219] Reasoning-Based Personalized Generation for Users with Sparse Data
LLMs

This article presents GraSPer, a novel framework designed to enhance personalized text generation for users with sparse data, addressing ...

arXiv - AI · 3 min ·
Top Artificial Intelligence Stats You Should Know About in 2026
AI Startups

This article presents key statistics and insights on the current state and future potential of artificial intelligence (AI) as we approac...

AI Events ·
[2602.21814] Prompt Architecture Determines Reasoning Quality: A Variable Isolation Study on the Car Wash Problem
LLMs

This study investigates how different prompt architectures affect reasoning quality in large language models, specifically addressing the...

arXiv - AI · 3 min ·
[2602.21745] The ASIR Courage Model: A Phase-Dynamic Framework for Truth Transitions in Human and AI Systems
Machine Learning

The ASIR Courage Model presents a phase-dynamic framework for understanding truth transitions in both human and AI systems, emphasizing t...

arXiv - AI · 4 min ·
[2602.21496] Beyond Refusal: Probing the Limits of Agentic Self-Correction for Semantic Sensitive Information
LLMs

The paper explores the limitations of self-correction in Large Language Models (LLMs) regarding semantic sensitive information, introduci...

arXiv - AI · 3 min ·
LLMs

Showed to some friends, they said post on reddit. I said hmk.

An AI enthusiast shares a project overview on Reddit, seeking feedback on a front-end tool for memory that integrates with various AI mod...

Reddit - Artificial Intelligence · 1 min ·
AI Agents

had a voice conversation with my physical ai system today

The author shares their experience of having a voice conversation with a physical AI system, highlighting its contextual understanding an...

Reddit - Artificial Intelligence · 1 min ·
Salesforce CEO Marc Benioff: This isn't our first SaaSpocalypse | TechCrunch
AI Agents

Salesforce CEO Marc Benioff reassures investors during the earnings call, emphasizing the company's resilience amid fears of an AI-driven...

TechCrunch - AI · 6 min ·
Anthropic acquires computer-use AI startup Vercept after Meta poached one of its founders | TechCrunch
AI Agents

Anthropic has acquired Vercept, an AI startup known for developing advanced agentic tools, following the poaching of one of its founders ...

TechCrunch - AI · 6 min ·
Riley Walz, the Jester of Silicon Valley, Is Joining OpenAI | WIRED
LLMs

Riley Walz, known for his viral online projects, joins OpenAI to innovate human-AI interaction. His unique skills aim to enhance user exp...

Wired - AI · 5 min ·
LLMs

[R] Made my own engine for Social-Simulations

The article discusses the creation of a custom engine for social simulations using LLMs, where agents interact in a controlled environmen...

Reddit - Machine Learning · 1 min ·