This year the SICSA AI Workshop ReaLX’18: Reasoning, Learning & Explainability in AI will be co-located with the SICSA PhD conference. The workshop will be held on 27 June 2018 at Robert Gordon University in Aberdeen. For further information please check the event website.
Reasoning, Learning and Explainability are key to AI systems that must interact naturally to support users in decision making. Such systems need to be capable of explaining their output. Regulations increasingly support users' rights to fair and transparent processing in automated decision-making systems. Addressing this challenge is steadily becoming more urgent, as the recent success of deep learning and other data-driven methods continues to drive reliance on learned models in deployed applications.
Though models learned directly from data offer improved accuracy, mapping these concepts to facilitate human reasoning is difficult. In contrast, reasoning systems can offer transparency through logical alignment of representation and reasoning methods to allow the necessary insight into the decision-making process. This is a core principle behind explainability and is critical if we are to use AI with the intent of improving user performance and experience.
The ReaLX’18 workshop will provide a forum for sharing exciting research on real AI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among SICSA researchers interested in AI. We expect to draw interest from AI researchers working in a number of related areas, including NLP, ML, reasoning systems, intelligent user interfaces, conversational AI and adaptive user interfaces, causal modelling, computational analogy, constraint reasoning, and cognitive theories of explanation and transparency.
The ReaLX’18 workshop Organisation Committee invites submissions of original theoretical and applied research on all aspects of AI, with a focus on the synergies to be had between reasoning, learning and explainable AI. Example submission areas include, but are not limited to:
- AI in human-in-the-loop systems
- Practical considerations for real AI applications
- User interfaces for explainable AI
- Mechanisms for explainability
- Generation of post-hoc explanations
- Architecture for interpretable systems
- Evaluating explainable systems
- Evaluation paradigms for explainable algorithms
- Evaluation metrics for explainable algorithms
Paper Submission Instructions
Paper submissions should be formatted according to Springer instructions (see https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines). You have the following submission options:
- Short paper: up to 5 pages, to describe preliminary work, present an overview of existing work, or to accompany a demonstration.
- Position paper: maximum of 2 pages, to present an idea, discuss challenges or map the landscape in this area of research.
Workshop submissions will be reviewed by at least two members of the Programme Committee. Researchers who submit demo systems will be required to provide access to the software in advance to facilitate evaluation.
To submit a paper, please send a PDF version of your paper to email@example.com with the subject "ReaLX Workshop Submission".
Please note that the Programme Committee will assign the most suitable presentation method (either poster or oral presentation); if the authors have a preference, please state it in the submission email.
Accepted short papers will be considered for publication through CEUR-WS, a free open-access publication service operated at RWTH Aachen University. For more information, see: http://ceur-ws.org/
Please note that position papers cannot be considered for publication.
Submission Deadline: May 25, 2018
Notification Date: June 8, 2018
Camera-Ready Deadline: June 15, 2018
Workshop Date: June 27, 2018