[2602.13351] A Formal Framework for the Explanation of Finite Automata Decisions

arXiv - AI

Summary

This paper presents a formal framework for explaining the decisions made by finite automata (FA), focusing on minimal input character sets that lead to specific outcomes.

Why It Matters

Understanding how finite automata make decisions is crucial in various fields such as computer science, linguistics, and artificial intelligence. This framework provides a systematic approach to interpret FA behavior, which can enhance model transparency and reliability in automated systems.

Key Takeaways

  • The paper proposes a method to explain FA decisions based on minimal input character sets.
  • It addresses the complexity of FA behavior, making it easier to understand their outputs.
  • The approach yields unbiased explanations, pinpointing the input characters that actually determine the outcome.
  • Experiments indicate that the proposed method scales to challenging problem instances.
  • This framework can improve the interpretability of AI systems that utilize finite automata.
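To make the first takeaway concrete, here is a naive brute-force sketch of a subset-minimal (abductive) explanation for a toy DFA: the smallest set of input positions whose characters, held fixed, force the automaton's verdict no matter what the remaining characters are. This is an illustration of the notion only, not the paper's efficient method, and the DFA encoding and function names are our own.

```python
from itertools import product

def run(dfa, word):
    """Run a DFA given as (delta, start, accepting); True iff accepted."""
    delta, state, accepting = dfa
    for ch in word:
        state = delta[(state, ch)]
    return state in accepting

def is_explanation(dfa, word, alphabet, fixed):
    """True if fixing the characters at positions in `fixed` forces the
    same accept/reject verdict for every choice of the free positions."""
    target = run(dfa, word)
    free = [i for i in range(len(word)) if i not in fixed]
    for combo in product(alphabet, repeat=len(free)):
        w = list(word)
        for i, ch in zip(free, combo):
            w[i] = ch
        if run(dfa, w) != target:
            return False
    return True

def minimal_explanation(dfa, word, alphabet):
    """Greedily shrink the full position set to a subset-minimal explanation."""
    fixed = set(range(len(word)))
    for i in range(len(word)):
        if is_explanation(dfa, word, alphabet, fixed - {i}):
            fixed.discard(i)
    return sorted(fixed)

# Toy DFA over {a, b} that accepts words containing at least one 'b'.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 1}
dfa = (delta, 0, {1})
print(minimal_explanation(dfa, "aab", "ab"))  # the 'b' alone suffices: [2]
```

The greedy shrink gives a subset-minimal (not necessarily smallest) explanation; the exhaustive check over free positions is exponential, which is exactly the cost an efficient framework like the paper's would avoid.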

Computer Science > Formal Languages and Automata Theory

arXiv:2602.13351 (cs) [Submitted on 12 Feb 2026]

Title: A Formal Framework for the Explanation of Finite Automata Decisions
Authors: Jaime Cuartas Granada, Alexey Ignatiev, Peter J. Stuckey

Abstract: Finite automata (FA) are a fundamental computational abstraction that is widely used in practice for various tasks in computer science, linguistics, biology, electrical engineering, and artificial intelligence. Given an input word, an FA maps the word to a result, in the simple case "accept" or "reject", but in general to one of a finite set of results. A question that then arises is: why? Another question is: how can we modify the input word so that it is no longer accepted? One may think that the automaton itself is an adequate explanation of its behaviour, but automata can be very complex and difficult to make sense of directly. In this work, we investigate how to explain the behaviour of an FA on an input word in terms of the word's characters. In particular, we are interested in minimal explanations: what is the minimal set of input characters that explains the result, and what are the minimal changes needed to alter the result? In this paper, we propose an efficient method to determine all minimal explanations for the behaviour of an FA on a partic...
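The abstract's second question, the minimal changes needed to alter the result, corresponds to a contrastive explanation. The following brute-force sketch (again, an illustration rather than the paper's efficient method; the DFA encoding and names are our own) searches for a smallest set of positions whose characters can be rewritten to flip the verdict:

```python
from itertools import combinations, product

def run(dfa, word):
    """Run a DFA given as (delta, start, accepting); True iff accepted."""
    delta, state, accepting = dfa
    for ch in word:
        state = delta[(state, ch)]
    return state in accepting

def minimal_change(dfa, word, alphabet):
    """Find a smallest set of positions whose characters can be rewritten
    to flip the DFA's verdict; returns (positions, new_word), or None if
    no rewrite of any size flips it."""
    target = run(dfa, word)
    for k in range(1, len(word) + 1):
        for pos in combinations(range(len(word)), k):
            for repl in product(alphabet, repeat=k):
                w = list(word)
                for i, ch in zip(pos, repl):
                    w[i] = ch
                if run(dfa, w) != target:
                    return list(pos), "".join(w)
    return None

# Toy DFA over {a, b} that accepts words containing at least one 'b'.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 1}
dfa = (delta, 0, {1})
print(minimal_change(dfa, "aab", "ab"))  # rewrite position 2: ([2], 'aaa')
```

Searching subsets in order of increasing size guarantees a smallest change, at exponential cost in the worst case.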

