Classification of Medical Transcriptions with Explanations

EasyChair Preprint 5361
7 pages • Date: April 24, 2021

Abstract

Researchers and developers use neural networks and deep neural networks widely in many fields. Although these artificial intelligence models achieve high performance, the way they work is opaque: users cannot understand the logic behind a specific decision. This opacity is a key obstacle to deploying AI models in real applications, for example in the medical field. In this paper, we focus on the importance of providing explanations, give a brief review of the field of Explainable AI (XAI), and demonstrate three different ways of providing explanations to users through experiments on a medical-transcriptions dataset: self-explainable decision trees, several neural network models paired with separate explainers, and a bidirectional LSTM model whose attention weights serve as explanations.

Keyphrases: Attention Model, Deep Neural Network, Explainability with Attentions, Explainable Artificial Intelligence, Self-Explainable Models, explainable model, medical transcription, provide explanation, self explaining model
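The third approach mentioned in the abstract, using attention weights as explanations, can be illustrated with a minimal sketch. The paper's actual model and dataset are not reproduced here; the token list, the alignment scores, and the `explain` helper below are hypothetical, standing in for the attention scores a trained bidirectional LSTM would produce over a transcription.

```python
import math

def softmax(scores):
    """Normalize raw alignment scores into attention weights that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def explain(tokens, scores, top_k=2):
    """Return the top_k tokens by attention weight as the 'explanation'."""
    weights = softmax(scores)
    ranked = sorted(zip(tokens, weights), key=lambda tw: -tw[1])
    return [tok for tok, _ in ranked[:top_k]]

# Hypothetical transcription fragment and per-token alignment scores.
tokens = ["patient", "reports", "severe", "chest", "pain"]
scores = [0.1, 0.05, 1.2, 0.9, 1.5]

print(explain(tokens, scores))  # → ['pain', 'severe']
```

In a real attention model the scores come from a learned compatibility function between the decoder state and each encoder hidden state; presenting the highest-weight tokens to the user is one simple way to turn those weights into a human-readable rationale.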