[2508.19300] CellINR: Implicitly Overcoming Photo-induced Artifacts in 4D Live Fluorescence Microscopy

arXiv - AI · 4 min read

Summary

The paper presents CellINR, a novel framework designed to mitigate photo-induced artifacts in 4D live fluorescence microscopy, enhancing image quality and continuity.

Why It Matters

This research addresses a significant challenge in microscopy, where high-intensity illumination can compromise image quality. By introducing a new optimization approach, CellINR not only improves the accuracy of cellular structure reconstruction but also provides a dataset for future studies, thus advancing biological imaging techniques.

Key Takeaways

  • CellINR effectively reduces photo-induced artifacts in microscopy.
  • The framework utilizes implicit neural representation for enhanced image reconstruction.
  • A new paired 4D live cell imaging dataset is introduced for performance evaluation.
  • Experimental results show significant improvements over existing methods.
  • The code and dataset will be publicly available for further research.
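The "mapping 3D spatial coordinates into the high-frequency domain" step described above is the kind of transformation typically realized in implicit neural representations with a Fourier-feature (positional) encoding, which the coordinate network then consumes. The sketch below is a generic illustration of that idea under that assumption, not the authors' implementation; the function name and band count are hypothetical.

```python
import numpy as np

def fourier_features(coords, num_bands=6):
    """Lift 3D coordinates into a high-frequency feature space.

    Each coordinate axis is multiplied by geometrically spaced
    frequencies and passed through sin/cos, so a downstream MLP can
    fit fine spatial detail that raw coordinates cannot express.
    """
    coords = np.asarray(coords, dtype=np.float64)   # (N, 3)
    freqs = (2.0 ** np.arange(num_bands)) * np.pi   # (num_bands,)
    angles = coords[..., None] * freqs              # (N, 3, num_bands)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(coords.shape[0], -1)       # (N, 3 * 2 * num_bands)

pts = np.random.rand(4, 3)                          # 4 voxel coordinates
print(fourier_features(pts).shape)                  # (4, 36)
```

In a CellINR-style pipeline, features like these would feed a per-volume coordinate network optimized case-specifically, letting the smooth network prior separate continuous cellular structure from incoherent photo-induced artifacts.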

Electrical Engineering and Systems Science > Image and Video Processing — arXiv:2508.19300 (eess)

This paper has been withdrawn by Zhao Cunmin.
[Submitted on 25 Aug 2025 (v1), last revised 16 Feb 2026 (this version, v2)]

Title: CellINR: Implicitly Overcoming Photo-induced Artifacts in 4D Live Fluorescence Microscopy
Authors: Cunmin Zhao, Ziyuan Luo, Guoye Guan, Zelin Li, Yiming Ma, Zhongying Zhao, Renjie Wan

Abstract: 4D live fluorescence microscopy is often compromised by prolonged high-intensity illumination, which induces photobleaching and phototoxic effects that generate photo-induced artifacts and severely impair image continuity and detail recovery. To address this challenge, we propose the CellINR framework, a case-specific optimization approach based on implicit neural representation. The method employs blind convolution and structure amplification strategies to map 3D spatial coordinates into the high-frequency domain, enabling precise modeling and high-accuracy reconstruction of cellular structures while effectively distinguishing true signals from artifacts. Experimental results demonstrate that CellINR significantly outperforms existing techniques in artifact removal and restoration of structural continuity, and for the first time, a paired 4D live cell imaging dataset…

Related Articles

[2511.21428] From Observation to Action: Latent Action-based Primitive Segmentation for VLA Pre-training in Industrial Settings
Machine Learning

Abstract page for arXiv paper 2511.21428: From Observation to Action: Latent Action-based Primitive Segmentation for VLA Pre-training in ...

arXiv - AI · 4 min ·
[2511.16719] SAM 3: Segment Anything with Concepts
Machine Learning

Abstract page for arXiv paper 2511.16719: SAM 3: Segment Anything with Concepts

arXiv - AI · 4 min ·
[2603.28594] Detection of Adversarial Attacks in Robotic Perception
Machine Learning

Abstract page for arXiv paper 2603.28594: Detection of Adversarial Attacks in Robotic Perception

arXiv - AI · 3 min ·
[2603.28555] Domain-Invariant Prompt Learning for Vision-Language Models
Llms

Abstract page for arXiv paper 2603.28555: Domain-Invariant Prompt Learning for Vision-Language Models

arXiv - AI · 3 min ·
