Towards Optimising EEG Decoding using Post-hoc Explanations and Domain Knowledge

Source
Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society EMBS
ISSN
1557-170X
Date Issued
2024-01-01
Author(s)
Rajpura, Param
Meena, Yogesh Kumar
DOI
10.1109/EMBC53108.2024.10781846
Abstract
Decoding electroencephalography (EEG) signals during motor imagery is pivotal for brain-computer interface (BCI) systems, significantly influencing their overall performance. As end-to-end data-driven learning methods advance, the challenge lies in balancing model complexity with the need for human interpretability and trust. Despite strides in EEG-based BCIs, challenges like artefacts and low signal-to-noise ratio emphasise the ongoing importance of model transparency. This work proposes using post-hoc explanations to interpret model outcomes and validate them against domain knowledge. Leveraging the GradCAM post-hoc explanation technique on the EEG motor movement/imagery dataset, this work demonstrates that relying solely on accuracy metrics may be inadequate to ensure BCI performance and acceptability. A model trained using all EEG channels of the dataset achieves 72.60% accuracy, while a model trained with only motor-imagery/movement-relevant channel data shows a statistically insignificant decrease of 1.75%. However, based on neurophysiological facts, the relevant features for the two models are very different. This work demonstrates that integrating domain-specific knowledge with Explainable AI (XAI) techniques emerges as a promising paradigm for validating the neurophysiological basis of model outcomes in BCIs. Our results reveal the significance of neurophysiological validation in evaluating BCI performance, highlighting the potential risks of exclusively relying on performance metrics when selecting models for dependable and transparent BCIs.
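The abstract's core idea, weighting a conv layer's feature maps by pooled gradients and applying a ReLU to obtain a relevance map over EEG channels and time, can be sketched in a few lines. This is not the authors' pipeline; it is a minimal NumPy illustration of the standard Grad-CAM computation, with illustrative function and variable names and random toy data standing in for real model activations.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM relevance map.

    activations: (K, C, T) feature maps from one conv layer
                 (K filters, C EEG channels, T time samples).
    gradients:   (K, C, T) gradients of the target class score
                 with respect to those feature maps.
    Returns a (C, T) non-negative relevance map.
    """
    # Global-average-pool the gradients: one importance weight per filter.
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # Weighted sum of the feature maps over the filter axis, then ReLU.
    cam = np.tensordot(weights, activations, axes=1)  # shape (C, T)
    return np.maximum(cam, 0.0)

# Toy example: 4 filters over 8 EEG channels x 16 time samples.
rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 8, 16))
grads = rng.standard_normal((4, 8, 16))
cam = grad_cam(acts, grads)

# Averaging the map over time gives a per-channel relevance score,
# which is the kind of quantity one could compare against
# motor-cortex priors (e.g. electrodes C3/Cz/C4).
channel_relevance = cam.mean(axis=1)
```

In practice `activations` and `gradients` would come from a trained EEG classifier (e.g. via framework hooks), and the per-channel scores would be checked against neurophysiological expectations as the paper proposes.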
URI
https://d8.irins.org/handle/IITG2025/29119
Subjects
Brain-Computer Interfaces | EEG | Explainable AI | Motor Imagery