AIPV 2026: AI, Proof and Verification, Tokyo, Japan, May 18–19, 2026
| Conference website | https://aipv2026.github.io/ |
| Submission link | https://easychair.org/conferences/?conf=aipv2026 |
Call for Contributions: 1st Workshop on AI, Proof and Verification (AIPV 2026)
| Co-located with FM 2026 | May 18–19, 2026 | Tokyo, Japan |
Overview
While artificial intelligence (AI) and machine learning (ML) have shown remarkable progress, rigorous reasoning remains a major open challenge. Generative AI often produces invalid arguments and lacks guarantees of soundness. The 1st Workshop on AI, Proof and Verification (AIPV) seeks to bridge the gap between the AI/ML community and the logic, theorem proving, and verification communities. AIPV is a forum to critically assess AI capabilities, share experiences across different logical frameworks, and chart new directions for trustworthy, AI-assisted reasoning.
Topics of Interest
We invite contributions that balance scalability with soundness at the intersection of AI and formal reasoning. Topics include, but are not limited to:
- AI for Theorem Proving: Guiding interactive proof search and improving automation in higher-order logics or dependent type theories.
- AI for Formal Verification: Novel methods for scaling verification in safety-critical and industrial domains.
- Reasoning with LLMs: Assessing and improving the capabilities of LLMs for rigorous symbolic inference and compositional tasks.
- Translation & Representation: Reliable translation between informal and formal representations.
- Benchmarks and Datasets: Development of rigorous testbeds where correctness and explainability are mandatory.
- Interdisciplinary Approaches: Bridging the culture gap between empirical performance and formal rigor.
Submission Types
AIPV is a lightweight and constructive workshop. All submissions will receive a brief relevance/clarity check with a positive and inclusive attitude. Our goal is to enable participation rather than filter it.
We welcome the following types of contributions:
- Previously Accepted / Published Work: Papers already accepted or published elsewhere (FM, CAV, CADE, ITP, IJCAR, AAAI, NeurIPS, ICML, etc.). Please provide a link or a preprint/PDF. (Subject to copyright constraints; see the sharing policy below.)
- Full Papers: 8–12 pages. Suitable for mature research.
- Extended Abstracts: 1–2 pages. Ideal for ongoing projects, early results, or exploratory work.
- Position / Vision Papers: 1–2 pages. Conceptual, speculative, or agenda-setting contributions.
- Tool Demos / Benchmarks / Data Resources: 1–4 pages. For systems, datasets, benchmarks, evaluation frameworks, or reproducibility insights.
Submissions will be evaluated by the program committee. Accepted works may be presented as full talks, lightning talks, or posters.
Sharing and Availability Policy
We plan to place submitted PDF files (or non-confidential versions) on the workshop website so that participants can access them before and during the event. However, we understand that some authors may have constraints due to publisher policies or project-specific considerations. If you prefer not to have your PDF posted publicly, or would like limited visibility (e.g., “participants only”), please let us know — we are happy to accommodate.
For previously published or accepted work, please ensure that sharing a PDF or preprint link does not conflict with the policies of the original venue. If needed, you may instead provide a short extended abstract for our website.
If you have any concerns, feel free to contact us; we will do our best to find a suitable arrangement.
Important Dates
- Workshop Paper Submission: March 15, 2026 (23:59 AoE)
- Notification of Acceptance: April 15, 2026 (23:59 AoE)
- Camera-ready Submission: April 30, 2026 (23:59 AoE)
- Workshop Date: one day during May 18–19, 2026
Organization
- Yutaka Nagashima (Czech Academy of Sciences)
- Jonathan Julián Huerta y Munive (Aalborg University)
- Andreea Costea (Delft University of Technology)
- Stefan Ratschan (Czech Academy of Sciences)
