Characterization of Learning Models via Contrastive Reasoning and Uncertainty
Abstract: In this talk, I will share our recent work on providing robust representations of visual data while adding explainability to neural network models. We pursue model-based characterization of neural networks through visual explanations: logical arguments, grounded in visual features, that justify the predictions a network makes. Our approach quantifies how the model changes in response to each input. We believe that a model's responses to data, and the interaction between model and data, give us a window into the model itself, one that can be leveraged for capabilities such as explainability, uncertainty quantification, and behavior prediction.