
Keynote Lectures

Secure Object Detection on the Edge
Ling Liu, Georgia Institute of Technology, United States

Interpretability and Explainability Facets of Data Analytics: Symbols and Information Granules
Witold Pedrycz, University of Alberta, Canada

 

Secure Object Detection on the Edge

Ling Liu
Georgia Institute of Technology
United States
 

Brief Bio
Ling Liu is a Professor in the School of Computer Science at the Georgia Institute of Technology. She directs the research programs in the Distributed Data Intensive Systems Lab (DiSL), examining various aspects of large-scale, big-data-powered artificial intelligence (AI) systems and machine learning (ML) algorithms and analytics, including performance, availability, privacy, security, and trust. Prof. Liu is an elected IEEE Fellow, a recipient of the IEEE Computer Society Technical Achievement Award (2012), and a recipient of best paper awards from numerous top venues, including IEEE ICDCS, WWW, ACM/IEEE CCGrid, IEEE Cloud, and IEEE ICWS. She has served on the editorial boards of over a dozen international journals, including as Editor-in-Chief of IEEE Transactions on Services Computing (2013-2016), and is currently the Editor-in-Chief of ACM Transactions on Internet Technology (TOIT). Prof. Liu is a frequent keynote speaker in top-tier venues in Big Data, AI and ML systems and applications, Cloud Computing, Services Computing, Privacy, Security and Trust. Her current research is primarily supported by the USA National Science Foundation under its CISE programs and by IBM.


Abstract
Deep neural networks (DNNs) have fueled the wide deployment of object detection models in a number of mission-critical domains, such as traffic sign detection in autonomous vehicles and intrusion detection in surveillance systems. Recent studies have revealed that deep object detectors can also be compromised under adversarial attacks, causing a victim detector to detect no objects, fake objects, or wrong objects. However, very few studies have examined how to guarantee the robustness of object detection against adversarial manipulations. This keynote presents an in-depth understanding of the vulnerabilities of deep object detection systems by analyzing their adversarial robustness under different DNN detector training algorithms, different attack strategies, and different adverse effects and costs. I will then describe a set of mitigation strategies and techniques for robust object detection that guarantee high adversarial robustness while maintaining high benign detection accuracy.
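To make the threat model concrete, here is a minimal Python sketch of a standard one-step gradient attack (FGSM) against a detector. It assumes a torchvision-style detection model that returns its loss components when given images and targets in training mode; the attack shown is a generic illustration, not the specific attacks or defenses covered in the lecture.

import torch
import torchvision

# Assumption: a torchvision detection model; in train mode it returns a dict
# of loss components for (image, target) pairs.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()

def fgsm_attack(image, target, epsilon=0.03):
    # One-step FGSM: nudge each pixel in the direction that increases the
    # total detection loss, pushing the detector toward missed/wrong objects.
    image = image.clone().detach().requires_grad_(True)
    loss_dict = model([image], [target])
    loss = sum(loss_dict.values())
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

A larger epsilon yields a stronger but more visible perturbation; robust detector training aims to keep benign accuracy high even after such loss-ascending perturbations.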



 

 

Interpretability and Explainability Facets of Data Analytics: Symbols and Information Granules

Witold Pedrycz
University of Alberta
Canada
 

Brief Bio

Witold Pedrycz (IEEE Fellow, 1998) is Professor and Canada Research Chair (CRC) in Computational Intelligence in the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada. He is also with the Systems Research Institute of the Polish Academy of Sciences, Warsaw, Poland. In 2009, Dr. Pedrycz was elected a foreign member of the Polish Academy of Sciences, and in 2012 he was elected a Fellow of the Royal Society of Canada. In 2007 he received the prestigious Norbert Wiener Award from the IEEE Systems, Man, and Cybernetics Society. He is a recipient of the IEEE Canada Computer Engineering Medal, the Cajastur Prize for Soft Computing from the European Centre for Soft Computing, a Killam Prize, the Fuzzy Pioneer Award from the IEEE Computational Intelligence Society, and the 2019 Meritorious Service Award from the IEEE Systems, Man, and Cybernetics Society.

His main research directions involve Computational Intelligence, fuzzy modeling and Granular Computing, knowledge discovery and data science, pattern recognition, knowledge-based neural networks, and control engineering. He has published extensively in these areas and is the author of 21 research monographs and edited volumes covering various aspects of Computational Intelligence, data mining, and Software Engineering.

Dr. Pedrycz is vigorously involved in editorial activities. He is Editor-in-Chief of Information Sciences, Editor-in-Chief of WIREs Data Mining and Knowledge Discovery (Wiley), and Co-Editor-in-Chief of the Int. J. of Granular Computing (Springer) and the J. of Data, Information and Management (Springer). He serves on the Advisory Board of IEEE Transactions on Fuzzy Systems and is a member of the editorial boards of a number of international journals.


Abstract
In data analytics, system modeling, and decision-making models, the aspects of interpretability and explainability are of paramount relevance, to mention only explainable Artificial Intelligence (XAI). They are especially timely in light of the increasing complexity of the systems one has to cope with.

We advocate that two factors contribute immensely to the realization of these important features: a suitable level of abstraction in describing the problem, and the logic fabric of the resultant construct. It is demonstrated that their conceptualization and subsequent realization can be conveniently carried out with the use of information granules (for example, fuzzy sets, sets, rough sets, and the like).
 
Concepts are the building blocks of an interpretable environment that captures the essence of data and the key relationships existing there. The emergence of concepts is supported by a systematic and focused analysis of data; at the same time, their initialization is specified by stakeholders and/or the owners and users of data.

We present a comprehensive discussion of the information-granule-oriented design of concepts and their description by engaging an innovative mechanism of conditional (concept-driven) clustering. It is shown that the initial phase of the process is guided by the formulation of some generic concept (say, low profit) or some complex multidimensional concept (say, poor quality of environment or high stability of network traffic), all of which are described by means of information granules. In the sequel, the concept is explained in terms of other variables through clustering focused by this context. The description of a concept is delivered by a logic expression whose calibration is completed by detailed learning of the associated logic neural network. The constructed network helps quantify the contributions of individual information granules to the description of the underlying concept and facilitates a more qualitative characterization achieved with the aid of linguistic approximation. This form of approximation delivers a concise and interpretable abstract description through linguistic quantifiers.
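As a concrete sketch of the clustering step, the following Python code implements conditional (context-driven) fuzzy c-means in its standard formulation: the memberships of each datum across the clusters sum to f_k, the degree to which the datum matches the context concept (say, low profit), rather than to 1. The data, context, and parameter values here are illustrative assumptions, not material from the lecture.

import numpy as np

def conditional_fcm(X, f, c=3, m=2.0, n_iter=50, eps=1e-9, seed=0):
    # X: (n, d) data; f: (n,) context memberships in [0, 1].
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), size=c, replace=False)]  # initial prototypes
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + eps  # squared distances (n, c)
        inv = d2 ** (-1.0 / (m - 1.0))
        U = f[:, None] * inv / inv.sum(axis=1, keepdims=True)  # each row sums to f_k
        W = U ** m
        V = (W.T @ X) / (W.sum(axis=0)[:, None] + eps)  # prototype update
    return U, V

# Illustrative use: cluster "in the context of" low values of feature 0.
X = np.random.default_rng(1).normal(size=(200, 2))
f = 1.0 / (1.0 + np.exp(4.0 * X[:, 0]))  # fuzzy context: "feature 0 is low"
U, V = conditional_fcm(X, f)

Data with near-zero context membership barely influence the prototypes, so the revealed structure describes the concept's region of the data rather than the data set as a whole.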
A detailed example concerns the enhancement of the interpretability of functional rule-based models with rules of the form "if x is A then y = f(x)". The interpretability mechanisms focus on elevating the interpretability of the conditions and conclusions of the rules. It is shown that augmenting the interpretability of conditions is achieved by (i) decomposing a multivariable information granule into its one-dimensional components, (ii) characterizing these components symbolically, and (iii) linguistic approximation. A hierarchy of interpretation mechanisms is systematically established. We also discuss how this increased interpretability is associated with reduced accuracy of the rules, and how sound trade-offs between these features are formed.
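For readers less familiar with functional rules, here is a small self-contained Python sketch of Takagi-Sugeno-style inference with rules "if x is A then y = f(x)", where the conditions A are Gaussian fuzzy sets (one-dimensional information granules). The two rules and their parameters are invented purely for illustration.

import numpy as np

def gaussian(x, center, spread):
    # Membership degree of x in a Gaussian fuzzy set (an information granule).
    return np.exp(-0.5 * ((x - center) / spread) ** 2)

# Two illustrative rules over a scalar input x.
rules = [
    {"A": lambda x: gaussian(x, center=-1.0, spread=1.0),  # "x is NEGATIVE"
     "f": lambda x: 0.5 * x + 1.0},
    {"A": lambda x: gaussian(x, center=1.0, spread=1.0),   # "x is POSITIVE"
     "f": lambda x: -2.0 * x},
]

def infer(x):
    # Output is the average of the local models f_i, weighted by how strongly
    # the input activates each condition A_i.
    w = np.array([r["A"](x) for r in rules])
    y = np.array([r["f"](x) for r in rules])
    return float((w * y).sum() / w.sum())

print(infer(0.25))

Decomposing a multivariable condition into such one-dimensional granules and then naming each granule symbolically (NEGATIVE, POSITIVE) is exactly the kind of step that trades some accuracy for interpretability.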


