Special Session on

Understandable and Interpretable Machine Learning

http://scm.ulster.ac.uk/FLINS2018/

FLINS 2018 is the thirteenth in a series of conferences on computational intelligence systems, with a focus on Data Science and Knowledge Engineering for Sensing Decision Support. The conference will be held in Belfast, the birthplace of the Titanic and the capital city of Northern Ireland, UK.

The FLINS 2018 proceedings will again be published as a book by World Scientific and, like previous editions, will be indexed in the ISI proceedings as well as in EI Compendex. Moreover, special issues of SCI-indexed journals will be devoted to a strictly refereed selection of extended papers presented at FLINS 2018.

Scope and Motivation

Historically, artificial intelligence methods were based on the modeling of domain-specific rules. However, this approach has proven challenging due to the complexity, dynamics and fuzziness of problem domains. In contrast, the field of machine learning gives computers the ability to learn statistical patterns from data without being explicitly programmed. In recent years, this field has experienced a major increase in popularity that has led to an unprecedented number of success stories. However, as many machine learning methods behave like black boxes, interpreting their results is challenging. Furthermore, these black-box methods may make unforeseeable decisions for rare or unexpected input values. For that reason, practitioners in a number of domains, such as critical systems, are reluctant to move to machine learning-based decision making. In this special session, we aim to bridge the gap between expert systems and machine learning in order to build understandable and interpretable machine learning methods.

The topics of interest of this special session include, but are not limited to:

  • Theoretical advances in understandable and interpretable machine learning
  • Statistical learning theory
  • Interpretation of neural networks and deep learning models
  • Combining expert systems and machine learning methods
  • Fuzzy systems and neuro-fuzzy systems
  • Interpretation of machine learning models
  • Visualization of models, data and predictions
  • Impact of input noise in computer vision problems
  • Biases in big data sets
  • Applications to critical systems, autonomous driving, etc.


Important Dates

Full paper submission: January 30, 2018

Acceptance notification: March 20, 2018

Camera-ready paper submission: April 10, 2018

Paper Submission

Contacts

Patrick Glauner (patrick.glauner@uni.lu)

Telephone: +352 466 644 5346

Interdisciplinary Centre for Security, Reliability and Trust

University of Luxembourg, Luxembourg, Luxembourg

Radu State (radu.state@uni.lu)

Telephone: +352 466 644 5647

Interdisciplinary Centre for Security, Reliability and Trust

University of Luxembourg, Luxembourg, Luxembourg