Special Sessions

Special Session: Explainable Artificial Intelligence (XAI) and Interpretable Machine Learning (IML) for Complex Systems

We are pleased to invite original research contributions to our Special Session on Explainable Artificial Intelligence (XAI) and Interpretable Machine Learning (IML), specifically tailored to the modelling of complex systems using multiple sequential tabular data. As AI moves into critical infrastructure, the demand for transparent, interpretable, and simulatable models has never been higher. This session seeks to bridge the gap between black-box predictive power and human-centric understanding, emphasising the integration of theories and methods, e.g., systems theory, system identification, rule-based methods, sparse learning, and information-theoretic techniques, to decode intricate system behaviours.

We welcome submissions addressing a wide array of modelling challenges. Whether your work focuses on fault diagnosis in industrial settings, predictive modelling in high-stakes environments (such as weather forecasting, space weather, healthcare, medicine, or the life sciences), time-series analysis, or input-output systems modelling, this session provides a premier forum for discussing how interpretability enhances reliability. By prioritising models that are as simple as possible but as powerful as necessary, we aim to foster a new generation of AI tools that are not only accurate but fundamentally accountable to the experts who use them.

Topics of interest for submission include, but are not limited to:

    * Methodologies and Algorithms: Ante-hoc (self-interpretable) model design, enhanced post-hoc explanation methods (e.g., SHAP and LIME), and counterfactual explanations.
    * Evaluation and Metrics: Quantitative benchmarks for explanation quality, human-centred evaluation studies, and measuring the "faithfulness" of interpretations.
    * TIPS Model Development: Design of Transparent, Interpretable, Parsimonious, and Simulatable (TIPS) architectures for "white-box" modelling of complex black-box systems.
    * Sequential Tabular Modelling: XAI and IML approaches specifically designed for multiple sequential datasets, including multivariate time-series and longitudinal data.
    * Systems and Information Theory in AI: Application of complex systems modelling, system identification, entropy, mutual information, and feedback control principles to improve the interpretability of machine learning models.
    * Decoding Nonlinear Dynamics: Techniques for extracting understandable rules or symbolic representations from nonlinear input-output systems.
    * Diagnostics and Reliability: Leveraging IML for robust fault diagnosis, anomaly detection, and root-cause analysis in industrial or biological systems.
    * High-Stakes Domain Applications: Case studies in weather and space weather forecasting, environmental monitoring, and clinical decision support in healthcare.
    * Human-in-the-Loop Interpretability: Methods for validating AI transparency with domain experts in life sciences and physical sciences.

Chair: Prof. Hua-Liang Wei, University of Sheffield, UK

Submission Methods:
1. Submit via the link: http://confsys.iconf.org/submission/icmlt2026 (after entering the link, click on the corresponding topic), or
2. Send your manuscript to icmlt_conf@163.com with the subject line "Submit+Special Session-2+Paper Title".
