[2506.19591] Vision Transformer-Based Time-Series Image Reconstruction for Cloud-Filling Applications


Computer Science > Computer Vision and Pattern Recognition

arXiv:2506.19591 (cs)

[Submitted on 24 Jun 2025 (v1), last revised 5 Apr 2026 (this version, v2)]

Title: Vision Transformer-Based Time-Series Image Reconstruction for Cloud-Filling Applications

Authors: Lujun Li, Yiqun Wang, Radu State

Abstract: Cloud cover in multispectral imagery (MSI) poses significant challenges for early-season crop mapping, as it leads to missing or corrupted spectral information. Synthetic aperture radar (SAR) data, which is not affected by cloud interference, offers a complementary solution but lacks sufficient spectral detail for precise crop mapping. To address this, we propose a novel framework, Time-series MSI Image Reconstruction using Vision Transformer (ViT), which reconstructs MSI data in cloud-covered regions by leveraging the temporal coherence of MSI and the complementary information from SAR through the attention mechanism. Comprehensive experiments, using rigorous reconstruction evaluation metrics, demonstrate that the Time-series ViT framework significantly outperforms baselines that use non-time-series MSI and SAR, or time-series MSI without SAR, effectively enhancing MSI image reconstruction in cloud-covered regions.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (c...
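The core idea of the abstract, attending over cloud-free MSI time steps and co-located SAR tokens to reconstruct a cloud-covered MSI patch, can be sketched with a single scaled dot-product attention step. This is a minimal illustration, not the paper's implementation: all names, token counts, and the embedding size are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, context, d):
    # scaled dot-product attention: query (1, d), context (n, d)
    scores = query @ context.T / np.sqrt(d)   # (1, n) similarity per context token
    weights = softmax(scores, axis=-1)        # attention over time steps and modalities
    return weights @ context                  # (1, d) attended reconstruction embedding

rng = np.random.default_rng(0)
d = 16                                  # token embedding size (assumption)
msi_tokens = rng.normal(size=(4, d))    # 4 cloud-free MSI time steps for one patch
sar_tokens = rng.normal(size=(4, d))    # co-located SAR time steps (cloud-insensitive)

# temporal + cross-modal context; the cloudy step is represented by a query token
context = np.vstack([msi_tokens, sar_tokens])
query = rng.normal(size=(1, d))         # stand-in for a learned mask/query token

recon = attend(query, context, d)
print(recon.shape)  # (1, 16)
```

In the actual framework this fusion would happen inside stacked ViT blocks with learned projections; the sketch only shows how attention lets the missing MSI token draw on both temporal MSI context and the SAR modality.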

Originally published on April 07, 2026. Curated by AI News.
